The federal government’s financial condition and long-term fiscal outlook present enormous challenges to the nation’s ability to respond to emerging forces reshaping American society, the United States’ place in the world, and the future role of the federal government. Over the next few decades, as the baby boom generation retires and health care costs continue to rise, federal spending on retirement and health programs—Social Security, Medicare, Medicaid, and other federal pension, health, and disability programs—will grow dramatically. Absent policy changes on the spending and/or revenue sides of the budget, a growing imbalance between expected federal spending and tax revenues will mean escalating and eventually unsustainable federal deficits and debt that will threaten our future economy, standard of living, and, ultimately, our national security. Ultimately, the nation will have to decide what level of federal benefits and spending it wants and how it will pay for these benefits.

GAO’s long-term simulations illustrate the magnitude of the fiscal challenges associated with an aging society and the significance of the related challenges the government will be called upon to address. Indeed, the nation’s long-term fiscal outlook is daunting under many different policy scenarios and assumptions. For instance, under a fiscally restrained scenario, if discretionary spending grew only with inflation over the next 10 years and all existing tax cuts expired when scheduled under current law, spending for Social Security and health care programs would grow to consume over 80 percent of federal revenue by 2040. (See fig. 1.) On the other hand, if discretionary spending grew at the same rate as the economy in the near term and if all tax cuts were extended, by 2040 federal revenues may be adequate to pay only some Social Security benefits and interest on the growing federal debt. (See fig. 2.)

Addressing the projected fiscal gaps shown here will require policymakers to examine the advisability, affordability, and sustainability of existing programs, policies, functions, and activities throughout the entire federal budget—spanning discretionary spending, mandatory spending (including entitlements), and tax policies and programs. Neither slowing the growth of discretionary spending nor allowing tax cuts to expire would by itself eliminate our long-term fiscal imbalance, nor would both options combined. Additional economic growth is critical and will help to ease the burden, but the projected fiscal gap is so great that it is wholly unrealistic to expect that we will grow our way out of the problem.

The President’s 2007 budget, released last week, included some proposals to reduce the growth in Medicare spending. Whether or not these proposals are adopted, they should serve to raise public awareness of the importance of health care costs to both today’s budget and tomorrow’s. This could also serve to jump-start discussion about appropriate ways to control a major driver of our long-term fiscal outlook—health care spending.

Clearly, tough choices will be required. Changes in existing budget processes and financial, fiscal, and performance metrics will be necessary to facilitate these choices. Early action to change existing programs and policies would yield the highest fiscal dividends and provide a longer period for prospective beneficiaries to make adjustments in their own planning. The longer we wait, the more painful and difficult the choices will become and the less transition time we will have.
By waiting, an important window is lost during which today’s relatively large workforce can increase saving and begin preparing for the necessary changes in fiscal policy, Social Security, and health care, as well as other reforms that may be necessary parts of the solution to this coming fiscal crunch. However, the long-term challenge is fast becoming a short-term one, because the retirement of the baby boom generation will begin as early as 2008 and overall workforce growth has already begun to slow.

While our long-term fiscal imbalance cannot be eliminated with a single strategy, reducing the tax gap is one approach that could help address the looming fiscal challenges facing the nation. The tax gap is an estimate of the difference between the taxes—including individual income, corporate income, employment, estate, and excise taxes—that should have been timely and accurately paid and what was actually paid for a specific year. The estimate is an aggregate of estimates for the three primary types of noncompliance: (1) underreporting of tax liabilities on tax returns; (2) underpayment of taxes due from filed returns; and (3) nonfiling, which refers to the failure to file a required tax return altogether or in a timely manner. Estimates for each type of noncompliance include estimates for some or all of the five types of taxes that IRS administers.

IRS develops its tax gap estimates by measuring the rate of taxpayer compliance—the degree to which taxpayers fully and timely complied with their tax obligations. That rate is then used, along with other data and assumptions, to estimate the dollar amount of taxes not timely and accurately paid. For instance, IRS most recently estimated that for tax year 2001, 83.7 percent of owed taxes were paid voluntarily and timely, which translated into an estimated gross tax gap of $345 billion. IRS developed these estimates using compliance data collected through the National Research Program (NRP).

Using its recently collected compliance data, IRS has estimated that underreporting of income represented over 80 percent of the tax gap for 2001 (an estimated $285 billion out of a gross tax gap estimate of $345 billion), as indicated in table 1. Within the underreporting estimate, IRS attributed about $197 billion, or about 57 percent of the total tax gap, to individual income tax underreporting, including underreporting of business income, such as sole proprietor, informal supplier, and farm income (about $109 billion); nonbusiness income, such as wages, interest, and capital gains (about $56 billion); overstated credits (about $17 billion); and overstated income adjustments, deductions, and exemptions (about $15 billion). Underreporting of corporate income tax contributed an estimated $30 billion, or about 10 percent, to the 2001 tax gap, which included both small corporations (those reporting assets of $10 million or less) and large corporations (those reporting assets of over $10 million).

Employment tax underreporting accounted for an estimated $54 billion, or about 16 percent, of the 2001 tax gap and included several taxes that must be paid by self-employed individuals and employers. Self-employed individuals are generally required to calculate and remit Social Security and Medicare taxes to the U.S. Treasury each quarter. Employers are required to withhold these taxes from their employees’ wages, match these amounts, and remit withholdings to Treasury at least quarterly.
Underreported self-employment and employer-withheld employment taxes, respectively, contributed an estimated $39 billion and $14 billion to IRS’s tax gap estimate. The employment tax underreporting estimate also includes underreporting of federal unemployment taxes (about $1 billion). Taxpayers who do not file their tax returns on time or at all, or who otherwise do not pay their tax liabilities, accounted for the remainder of the 2001 tax gap—around $61 billion. For example, nonfiling and underpayment noncompliance by individual taxpayers alone contributed an estimated $48 billion to this portion of the tax gap.

IRS has concerns about the certainty of the overall tax gap estimate, in part because some areas of the estimate rely on old data and IRS has no estimates for other areas of the tax gap. For example, IRS used data from the 1970s and 1980s to estimate underreporting of corporate income taxes and employer-withheld employment taxes. For large corporate income tax underreporting, IRS based its estimate on the amount of tax recommended from operational examinations rather than the tax ultimately assessed as part of the total tax liability. According to IRS officials, IRS relies on the amount of tax recommended because it is difficult to determine the true tax liability of large corporations due to complex and ambiguous tax laws that create opportunities for differing interpretations and complicate the determination. These officials further stated that because these examinations are not randomly selected and are not focused on identifying all tax noncompliance, the estimate produced from the examination data is not representative of the tax gap for all large corporations. They also explained that due to these complexities and the costs and burdens of collecting complete and accurate data, IRS has not systematically measured large corporation tax compliance through statistically valid studies, even though the officials acknowledged that such studies would be useful in estimating the related tax gap. IRS has no estimates for corporate income, employment, and excise tax nonfiling or for excise tax underreporting. For these types of noncompliance, IRS maintains that the data are either difficult to collect, imprecise, or unavailable. In addition, it is inherently difficult for IRS to observe and measure some types of underreporting or nonfiling, such as cash payments that businesses make to their employees, because businesses and employees may not report these payments to IRS in order to avoid paying employment and income taxes, respectively.

IRS’s overall approach to reducing the tax gap consists of improving service to taxpayers and enhancing enforcement of the tax laws. Recently, IRS has taken a number of steps that may improve its ability to reduce the tax gap. Favorable trends in staffing of IRS enforcement personnel; examinations performed through correspondence, as opposed to more complex face-to-face examinations; and the use of some enforcement sanctions such as liens and levies are encouraging. Also, IRS has made progress with respect to abusive tax shelters through a number of initiatives and recent settlement offers that have resulted in billions of dollars in collected taxes, interest, and penalties. In addition, IRS has successfully prosecuted a number of taxpayers who have committed criminal violations of the tax laws.
Given the tax gap’s persistence and size, we need not only to consider expanding current approaches but also to explore new legislation to help IRS reduce the tax gap. Although IRS has made a number of changes in its methodologies for measuring the tax gap over the past three decades, which makes comparisons difficult, regardless of methodology the voluntary compliance rate that underpins the gap has tended to range from around 81 percent to around 84 percent. Thus, although the dollar amounts of the tax gap have changed, IRS has consistently reported that a persistent, relatively stable portion of the taxes that should have been timely and accurately paid was not paid. As we have reported in the past, closing the entire tax gap may be neither feasible nor desirable, as it could entail more intrusive recordkeeping or reporting than the public is willing to accept or more resources than IRS is able to commit. However, given its size, even small or moderate reductions in the net tax gap could yield substantial returns, which could improve the government’s fiscal position. For example, based on IRS’s most recent estimate, each 1 percent reduction in the net tax gap would likely yield nearly $3 billion annually. Thus, a 10 percent to 20 percent reduction of the net tax gap would translate into roughly $30 billion to $60 billion in additional revenue annually. However, reducing the tax gap will be challenging, and it must be attacked on multiple fronts and with multiple strategies, some of which follow.

A critical step toward reducing the tax gap is to understand the sources and nature of taxpayer noncompliance. Regularly measuring compliance, including the reasons why taxpayers are not compliant, can offer many benefits, including helping IRS identify new or growing types of noncompliance, identify changes in tax laws and regulations that may improve compliance, understand the effectiveness of its programs to promote and enforce compliance, more effectively target examinations of tax returns, and determine its resource needs and allocations. Likewise, regularly measuring compliance can provide IRS with information against which to set goals for improving compliance and measure progress in achieving such goals.

In our July 2005 report on reducing the tax gap, we made recommendations to IRS to develop plans to periodically measure tax compliance; take steps to improve its data on the reasons why taxpayers do not comply; and establish long-term, quantitative goals for voluntary compliance levels with an initial focus on individual income tax underreporting and total tax underpayment. Taken together, these steps can help IRS build a foundation to understand how its taxpayer service and enforcement efforts affect compliance and make progress on reducing the tax gap. The Commissioner of Internal Revenue agreed with our recommendations, highlighted challenges associated with them, and commented on various steps IRS would take to implement each recommendation. We are encouraged that, according to IRS’s Fiscal Year 2007 Congressional Budget Justification, IRS has recently established a voluntary compliance goal, with a target of 85 percent voluntary compliance by 2009, and plans to periodically measure progress against this goal.

Efforts to simplify the tax code and otherwise alter current tax policies may help reduce the tax gap by making it easier for individuals and businesses to understand and voluntarily comply with their tax obligations.
Among the many causes of tax code complexity is the growing number of preferential provisions in the tax code, such as exemptions and exclusions from taxation, deductions, credits, deferral of tax liability, and preferential tax rates. Tax expenditures, as these provisions are known in statute, can be a tool to further some federal goals and objectives, such as financing higher education or funding research and development. However, their aggregate number contributes to the complexity that taxpayers face in doing their taxes and planning their financial decisions. As figure 3 shows, the number of tax expenditures reported by the Department of the Treasury has more than doubled since 1974. Figure 4 shows the revenue loss estimates for the five largest tax expenditures reported for fiscal year 2005.

The multiple tax preferences for education assistance illustrate the consequences of the proliferation of tax expenditures. In our July 2005 report on postsecondary tax preferences, we found that hundreds of thousands of taxpayers do not appear to make optimal decisions when selecting education-related tax preferences. One explanation for these taxpayers’ choices may be the complexity of postsecondary tax preferences, which experts have commonly identified as difficult for tax filers to use. Also, many argue that complexity creates opportunities for tax evasion, through vehicles such as tax shelters. Simplification may reduce opportunities for taxpayers to avoid taxes through the creation of complex and abusive tax shelters.

Another area of the tax system that may deserve additional exploration, although not directly related to the tax gap, is whether the federal income-based tax system is sustainable and administrable in a global economy and how we should tax the income of U.S. multinational corporations that is earned outside of the United States. Every year, U.S.-based multinational corporations transfer hundreds of billions of dollars of goods and services between their affiliates in the United States and their foreign subsidiaries. Such transactions may be a part of normal business operations for corporations with foreign subsidiaries. However, it is generally recognized that given the variation in corporate tax rates across countries, an incentive exists for corporations with foreign subsidiaries to reduce their overall tax burden by maximizing the income they report in countries with low income tax rates and minimizing the income they report in, or repatriate to, countries with high income tax rates. Various studies have suggested that U.S.-based multinational corporations appear to engage in such transactions, shifting income from their affiliates in high-tax countries to subsidiaries in low-tax countries to take advantage of the differences in tax rates among countries. The growth in multinational corporate transactions and structures has also introduced increasing complexity in administering the tax code. The loss of highly skilled technical employees at IRS who can examine compliance issues arising from globalization, such as transfer pricing, underscores the challenge that IRS faces in ensuring it has sufficient staff with adequate skills to address these complex issues.

Providing quality services to taxpayers is an important part of any overall strategy to improve compliance and thereby reduce the tax gap. One method of improving compliance through service is to educate taxpayers about confusing or commonly misunderstood tax requirements.
For example, if the forms and instructions taxpayers use to prepare their taxes are not clear, taxpayers may be confused and make unintentional errors. One method to ensure that forms and instructions are sufficiently clear is to test them before use. However, we reported in 2003 that IRS had tested revisions to only five individual forms and instructions from July 1997 through June 2002, although hundreds of forms and instructions had been revised in 2001 alone.

In terms of enforcement, IRS will need to use multiple strategies and techniques to identify and deter noncompliance. As figure 5 shows, two tools have been shown to lower levels of noncompliance: withholding tax from payments to taxpayers and having third parties report information to IRS and to taxpayers on income paid to taxpayers. For example, banks and other financial institutions provide information returns (Forms 1099) to account holders and IRS showing the taxpayers’ annual income from some types of investments. Similarly, most wages, salaries, and tip compensation are reported by employers to employees and IRS through Form W-2. Findings from NRP indicate that around 98.8 percent of these types of income are accurately reported on individual returns. In the past, we have identified a few specific areas where additional withholding or information reporting requirements could serve to improve compliance:

Requiring tax withholding and more or better information return reporting on payments made to independent contractors. Past IRS data have shown that independent contractors report 97 percent of the income that appears on information returns, while contractors that do not receive these returns report only 83 percent of income. We have also identified other options for improving information reporting for independent contractors, including increasing penalties for failing to file required information returns, lowering the $600 threshold for requiring such returns, and requiring businesses to separately report on their tax returns the total amount of payments to independent contractors. IRS’s Taxpayer Advocate Service recently recommended allowing independent contractors to enter into voluntary withholding agreements.

Requiring information return reporting on payments made to corporations. Unlike payments made to sole proprietors, payments made to corporations for services are generally not required to be reported on information returns. IRS and GAO have contended that the lack of such a requirement leads to lower levels of compliance for small corporations. Although Congress has required federal agencies to provide information returns on payments made to contractors since 1997, payments made by others to corporations are generally not covered by information returns. The Taxpayer Advocate Service has recommended requiring information reporting on payments made to corporations, and the administration, in its fiscal year 2007 budget, has proposed requiring additional information reporting on certain payments for goods and services by federal, state, and local governments.

Requiring more data on information returns dealing with capital gain income. Past IRS studies have indicated that much of the noncompliance associated with capital gains is a result of taxpayers overstating an asset’s “basis,” the amount of money originally paid for the asset. Currently, financial institutions are required to report the sales prices, but not the purchase prices, of stocks and bonds on information returns.
Without information on purchase prices, IRS cannot use efficient and effective computer-matching programs to check for compliance and must use much more costly means to examine taxpayer returns in order to verify capital gain income. The Taxpayer Advocate Service has recommended requiring financial institutions to track cost basis information and report it to IRS and taxpayers. Although withholding and information reporting are highly effective in encouraging compliance, such additional requirements generally impose costs and burdens on the businesses that must implement them. However, continued reexamination of opportunities to expand information reporting and tax withholding could increase the transparency of the tax system. Opportunities to expand information reporting and tax withholding could be especially relevant to improving compliance in areas that are particularly complex or challenging to administer, such as with net income and losses passed through from “flow-through” entities such as S corporations and partnerships to their shareholders and partners.

Another enforcement tool that can potentially deter noncompliance is the use of penalties for filing inaccurate or late tax and information returns. Congress has placed a number of civil penalty provisions in the tax code. However, as with civil penalties related to other federal agencies, inflation may have weakened the deterrent effect of IRS penalties. For example, the Treasury Inspector General for Tax Administration has noted that the $50 per partner per month penalty for a late-filed partnership tax return, established by Congress in 1978, would equate to $17.22 in 2004 dollars. In its fiscal year 2007 budget, the administration has proposed expanding penalty provisions applicable to paid tax return preparers to include non-income tax returns and related documents. In addition, Congress recently increased certain penalties related to tax shelters and other tax evasion techniques. Given Congress’s recent judgment that some tax penalties were too low and concerns that inflation may have weakened the effectiveness of the civil penalty provisions in the tax code, additional increases may need to be considered to ensure that all penalties are of sufficient magnitude to deter tax noncompliance.

Leveraging technology to improve IRS’s capacity to receive, process, and utilize taxpayer returns could help IRS better determine how to allocate its resources to reduce the tax gap and would seem to be a prudent investment. IRS has invested heavily in modernizing its technology, and those investments have paid off. Telephone service has improved, and taxpayers are much more likely to get through to IRS and obtain assistance than before IRS upgraded its technology. Further, electronic filing has grown substantially. Tax information submitted to IRS electronically enables faster, more accurate processing and quicker interactions between IRS and taxpayers. Electronically filed returns are processed as they are received, giving IRS access to more timely and accurate tax information, which supports better data analysis and a quicker focus on issues that need resolution. IRS estimates it saves $2.15 on every individual tax return that is processed electronically. According to IRS data, electronic filing has allowed IRS to use more than 1,000 fewer staff years to process paper returns, resources that can then be dedicated to other service or enforcement work.
However, IRS’s Business Systems Modernization project, through which the agency is modernizing its outdated technology, is far from complete. IRS needs to continue to strengthen management of this effort and make prudent technology investments to maximize the efficiencies that can be gained in IRS operations and services to taxpayers.

Sound resource allocation is another tool for addressing the tax gap. The more effectively IRS can allocate its resources, the more progress should result. The new NRP data, for example, are to be used to better identify which tax returns to examine so that fewer compliant taxpayers are burdened by unnecessary audits and IRS can increase the amount of noncompliance that is addressed through its enforcement activities. As part of its attempt to make the best use of its enforcement resources, given budget constraints, IRS has developed rough measures of return on investment in terms of tax revenue that is directly assessed from uncovering noncompliance. Developing such measures is difficult because of incomplete information on all the costs and all the tax revenue ultimately collected from specific enforcement efforts, as well as on the indirect tax revenues generated when current enforcement actions prompt voluntary compliance improvements in the future. Continuing to develop the return on investment measures could help officials make more informed decisions about allocating resources, particularly during periods of budget constraints. Even with better data, however, officials will need to make judgments that take into account intangibles, such as how to achieve an equitable enforcement presence across the various taxpayer groups.

Our nation’s fiscal imbalance and related challenges have put us on an imprudent and unsustainable path that must be addressed. While our long-term fiscal imbalance is too large to be corrected by one strategy, reducing the tax gap can help address the looming fiscal challenges. Collecting the billions of dollars that already should be paid, for example, would help ease the many difficult decisions that need to be made about our spending programs as well as the rest of the tax system. However, the tax gap has been large and persistent over the years, and therefore reducing it will require not only expansions of current efforts but also new and innovative solutions. While IRS takes the lead in continuing to find ways to significantly reduce the tax gap, support from Congress will be essential, since legislation will likely be needed to implement many of the tax gap reduction ideas offered today. We look forward to continuing to work with Congress and IRS on these issues.

Chairman Gregg, Senator Conrad, and members of the committee, this concludes my testimony. I would be happy to answer any questions you may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information on this testimony, please contact Michael Brostek at (202) 512-9110 or brostekm@gao.gov. Individuals making key contributions to this testimony include Tom Short, Assistant Director; Jeff Arkin; Elizabeth Fan; and Cheryl Peterson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Internal Revenue Service’s (IRS) most recent estimate of the difference between what taxpayers timely and accurately paid in taxes and what they owed was $345 billion. IRS estimates it will eventually recover some of this tax gap, resulting in an estimated net tax gap of $290 billion. The tax gap arises when taxpayers fail to comply with the tax laws by underreporting tax liabilities on tax returns; underpaying taxes due from filed returns; or nonfiling, which refers to the failure to file a required tax return altogether or in a timely manner. The Chairman and Ranking Minority Member of the Senate Committee on the Budget asked GAO to present information on the causes of and possible solutions to the tax gap. This testimony addresses the nature and extent of the tax gap and the significance of reducing the tax gap, including some steps that may assist with this challenging task. For context, this testimony also addresses GAO’s most recent simulations of the long-term fiscal outlook and the need for a fundamental reexamination of major spending and tax policies and priorities.

Our nation’s fiscal policy is on an imprudent and unsustainable course. As long-term budget simulations by GAO show, over the long term we face a large and growing structural deficit due primarily to known demographic trends, rising health care costs, and lower federal revenues as a percentage of the economy. GAO’s simulations indicate that the long-term fiscal challenge is too big to be solved by economic growth alone or by making modest changes to existing spending and tax policies. Rather, a fundamental reexamination of major policies and priorities will be important to recapture our future fiscal flexibility.

Underreporting of income by businesses and individuals accounted for most of the estimated $345 billion tax gap for 2001, with individual income tax underreporting alone accounting for $197 billion, or over half of the total gap. Corporate income tax and employment tax underreporting accounted for an additional $84 billion of the gap. Reducing the tax gap would help improve fiscal sustainability. Given the tax gap’s persistence and size, reducing it will require considering not only options that have been previously proposed but also new administrative and legislative actions. Even modest progress would yield significant revenue; each 1 percent reduction would likely yield nearly $3 billion annually. Reducing the tax gap will be a challenging long-term task, and progress will require attacking the gap with multiple strategies over a sustained period. These strategies could include efforts to regularly obtain data on the extent of, and reasons for, noncompliance; simplify the tax code; provide quality service to taxpayers; enhance enforcement of tax laws by utilizing enforcement tools such as tax withholding, information reporting, and penalties; leverage technology; and optimize resource allocation.
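As a rough illustration of the arithmetic behind the figures cited above, the gross tax gap, the voluntary compliance rate, and the potential revenue from reducing the net tax gap are related by simple identities. The worked figures below are a back-of-envelope sketch using only the 83.7 percent compliance rate, the $345 billion gross gap, and the $290 billion net gap cited in this testimony; the implied total of taxes owed is derived here for illustration only and is not a figure reported in this statement.

\[
\text{gross tax gap} \;=\; \bigl(1-\text{voluntary compliance rate}\bigr)\times\text{total taxes owed}
\quad\Longrightarrow\quad
\text{total taxes owed} \;\approx\; \frac{\$345\ \text{billion}}{1-0.837} \;\approx\; \$2.1\ \text{trillion}
\]
\[
0.01 \times \$290\ \text{billion} \;\approx\; \$2.9\ \text{billion per year},
\qquad
0.10 \times \$290\ \text{billion} \;\approx\; \$29\ \text{billion},
\qquad
0.20 \times \$290\ \text{billion} \;\approx\; \$58\ \text{billion}
\]

These rough results are consistent with the “nearly $3 billion” per 1 percent reduction and the “roughly $30 billion to $60 billion” range for a 10 to 20 percent reduction cited in the testimony.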
Some context for my remarks is appropriate. The threat of terrorism was significant throughout the 1990s; a plot to destroy 12 U.S. airliners was discovered and thwarted in 1995, for instance. Yet the task of providing security to the nation’s aviation system is unquestionably daunting, and we must reluctantly acknowledge that no form of travel can ever be made totally secure. The enormous size of U.S. airspace alone defies easy protection. Furthermore, given this country’s hundreds of airports, thousands of planes, tens of thousands of daily flights, and the seemingly limitless ways terrorists or criminals can devise to attack the system, aviation security must be enforced on several fronts. Safeguarding airplanes and passengers requires, at the least, ensuring that perpetrators are kept from breaching security checkpoints and gaining access to secure airport areas or to aircraft. Additionally, vigilance is required to prevent attacks against the extensive computer networks that the Federal Aviation Administration (FAA) uses to guide thousands of flights safely through U.S. airspace. FAA has developed several mechanisms to prevent criminal acts against aircraft, such as adopting technology to detect explosives and establishing procedures to ensure that passengers are positively identified before boarding a flight. Still, in recent years, we and others have often demonstrated that significant weaknesses continue to plague the nation’s aviation security.

Our work has identified numerous problems with aspects of aviation security in recent years. One such problem is FAA’s computer-based air traffic control (ATC) system. The ATC system is an enormous, complex collection of interrelated systems, including navigation, surveillance, weather, and automated information processing and display systems that link hundreds of ATC facilities and provide information to air traffic controllers and pilots. Failure to adequately protect these systems could increase the risk of regional or nationwide disruption of air traffic—or even collisions.

In five reports issued from 1998 through 2000, we pointed out numerous weaknesses in FAA’s computer security. FAA had not (1) completed background checks on thousands of contractor employees, (2) assessed and accredited as secure many of its ATC facilities, (3) performed appropriate risk assessments to determine the vulnerability of the majority of its ATC systems, (4) established a comprehensive security program, (5) developed service continuity controls to ensure that critical operations continue without undue interruption when unexpected events occur, and (6) fully implemented an intrusion detection capability to detect and respond to malicious intrusions. Some of these weaknesses could have led to serious problems. For example, as part of its Year 2000 readiness efforts, FAA allowed 36 mainland Chinese nationals who had not undergone required background checks to review the computer source code for eight mission-critical systems. To date, we have made nearly 22 recommendations to improve FAA’s computer security. FAA has worked to address these recommendations, but most of them have yet to be completed. For example, it is making progress in obtaining background checks on contractors and accrediting facilities and systems as secure. However, it will take time to complete these efforts.

Control of access to aircraft, airfields, and certain airport facilities is another component of aviation security.
Among the access controls in place are requirements intended to prevent unauthorized individuals from using forged, stolen, or outdated identification or their familiarity with airport procedures to gain access to secured areas. In May 2000, we reported that our special agents, in an undercover capacity, obtained access to secure areas of two airports by using counterfeit law enforcement credentials and badges. At these airports, our agents declared themselves as armed law enforcement officers, displayed simulated badges and credentials created from commercially available software packages or downloaded from the Internet, and were issued “law enforcement” boarding passes. They were then waved around the screening checkpoints without being screened. Our agents could thus have carried weapons, explosives, chemical/biological agents, or other dangerous objects onto aircraft. In response to our findings, FAA now requires that each airport’s law enforcement officers examine the badges and credentials of any individual seeking to bypass passenger screening. FAA is also working on a “smart card” computer system that would verify law enforcement officers’ identity and authorization for bypassing passenger screening.

The Department of Transportation’s Inspector General has also uncovered problems with access controls at airports. The Inspector General’s staff conducted testing in 1998 and 1999 of the access controls at eight major airports and succeeded in gaining access to secure areas in 68 percent of the tests; they were able to board aircraft 117 times. After the release of its report describing its successes in breaching security, the Inspector General conducted additional testing between December 1999 and March 2000 and found that, although improvements had been made, access to secure areas was still gained more than 30 percent of the time.

Screening checkpoints and the screeners who operate them are a key line of defense against the introduction of dangerous objects into the aviation system. Over 2 million passengers and their baggage must be checked each day for articles that could pose threats to the safety of an aircraft and those aboard it. The air carriers are responsible for screening passengers and their baggage before they are permitted into the secure areas of an airport or onto an aircraft. Air carriers can use their own employees to conduct screening activities, but most hire security companies to do the screening. Currently, multiple carriers and screening companies are responsible for screening at some of the nation’s larger airports.

Concerns have long existed over screeners’ ability to detect and prevent dangerous objects from entering secure areas. Each year, weapons have been discovered to have passed through one checkpoint, only to be found later during screening for a subsequent flight. FAA monitors the performance of screeners by periodically testing their ability to detect potentially dangerous objects carried by FAA special agents posing as passengers. In 1978, screeners failed to detect 13 percent of the objects during FAA tests. In 1987, screeners missed 20 percent of the objects during the same type of test. Test data for the 1991 to 1999 period show that the declining trend in detection rates continues. Furthermore, the recent tests show that as tests become more realistic and more closely approximate how a terrorist might attempt to penetrate a checkpoint, screeners’ ability to detect dangerous objects declines even further.
As we reported last year, there is no single reason why screeners fail to identify dangerous objects. Two conditions—rapid screener turnover and inadequate attention to human factors—are believed to be important causes. Rapid turnover among screeners has been a long-standing problem, having been identified as a concern by FAA and by us in reports dating back to at least 1979. We reported in 1987 that turnover among screeners was about 100 percent a year at some airports, and according to our more recent work, the turnover is considerably higher. From May 1998 through April 1999, screener turnover averaged 126 percent at the nation’s 19 largest airports; 5 of these airports reported turnover of 200 percent or more, and one reported turnover of 416 percent. At one airport we visited, of the 993 screeners trained at that airport over about a 1-year period, only 142, or 14 percent, were still employed at the end of that year. Such rapid turnover can seriously limit the level of experience among screeners operating a checkpoint.

Both FAA and the aviation industry attribute the rapid turnover to the low wages and minimal benefits screeners receive, along with the daily stress of the job. Generally, screeners are paid at or near the minimum wage. We reported last year that some of the screening companies at 14 of the nation’s 19 largest airports paid screeners a starting salary of $6.00 an hour or less and, at 5 of these airports, the starting salary was the then-minimum wage of $5.15 an hour. It is common for the starting wages at airport fast-food restaurants to be higher than the wages screeners receive. For instance, at one airport we visited, screeners’ wages started as low as $6.25 an hour, whereas the starting wage at one of the airport’s fast-food restaurants was $7 an hour.

The demands of the job also affect performance. Screening duties require repetitive tasks as well as intense monitoring for the very rare event when a dangerous object might be observed. Too little attention has been given to factors such as (1) whether individuals have the aptitudes needed to perform screener duties effectively, (2) the sufficiency of the training provided to screeners and how well they comprehend it, and (3) the monotony of the job and the distractions that reduce screeners’ vigilance. As a result, screeners are being placed on the job who do not have the necessary aptitudes or adequate knowledge to perform the work effectively, and who then find the duties tedious and dull.

We reported in June 2000 that FAA was implementing a number of actions to improve screeners’ performance. However, FAA did not have an integrated management plan for these efforts that would identify and prioritize checkpoint and human factors problems that needed to be resolved, and identify measures—and related milestone and funding information—for addressing the performance problems. Additionally, FAA did not have adequate goals by which to measure and report its progress in improving screeners’ performance. FAA is implementing our recommendations. However, two key actions to improve screeners’ performance are still not complete. These actions are the deployment of threat image projection systems—which place images of dangerous objects on the monitors of X-ray machines to keep screeners alert and monitor their performance—and a certification program to make screening companies accountable for the training and performance of the screeners they employ.
Threat image projection systems are expected to keep screeners alert by periodically superimposing the image of a dangerous object on the X-ray screen. They also are used to measure how well screeners perform in detecting these objects. Additionally, the systems serve as a device to train screeners to become more adept at identifying harder-to-spot objects. FAA is currently deploying the threat image projection systems and expects to have them deployed at all airports by 2003.

The screening company certification program, required by the Federal Aviation Reauthorization Act of 1996, will establish performance, training, and equipment standards that screening companies will have to meet to earn and retain certification. However, FAA has still not issued its final regulation establishing the certification program. This regulation is particularly significant because it is to include requirements mandated by the Airport Security Improvement Act of 2000 to increase screener training—from 12 hours to 40 hours—as well as to expand background check requirements. FAA had been expecting to issue the final regulation this month, 2 ½ years later than it originally planned.

To identify screening practices that differ from those in the United States, we visited five countries—Belgium, Canada, France, the Netherlands, and the United Kingdom—viewed by FAA and the civil aviation industry as having effective screening operations. We found that some significant differences exist in four areas: screening operations, screener qualifications, screener pay and benefits, and institutional responsibility for screening.

First, screening operations in some of the countries we visited are more stringent. For example, Belgium, the Netherlands, and the United Kingdom routinely touch or “pat down” passengers in response to metal detector alarms. Additionally, all five countries allow only ticketed passengers through the screening checkpoints, thereby allowing the screeners to more thoroughly check fewer people. Some countries also have a greater police or military presence near checkpoints. In the United Kingdom, for example, security forces—often armed with automatic weapons—patrol at or near checkpoints. At Belgium’s main airport in Brussels, a constant police presence is maintained at one of two glass-enclosed rooms directly behind the checkpoints.

Second, screeners’ qualifications are usually more extensive. In contrast to the United States, Belgium requires screeners to be citizens; France requires screeners to be citizens of a European Union country. In the Netherlands, screeners do not have to be citizens, but they must have been residents of the country for 5 years. Training requirements for screeners were also greater in four of the countries we visited than in the United States. While FAA requires that screeners in this country have 12 hours of classroom training before they can begin work, Belgium, Canada, France, and the Netherlands require more. For example, France requires 60 hours of training and Belgium requires at least 40 hours of training with an additional 16 to 24 hours for each activity, such as X-ray machine operations, that the screener will conduct.

Third, screeners receive relatively better pay and benefits in most of these countries. Whereas screeners in the United States receive wages that are at or slightly above minimum wage, screeners in some countries receive wages that are viewed as being at the “middle income” level in those countries.
In the Netherlands, for example, screeners received at least the equivalent of about $7.50 per hour. This wage was about 30 percent higher than the wages at fast-food restaurants in that country. In Belgium, screeners received the equivalent of about $14 per hour. Not only is pay higher, but the screeners in some countries receive benefits, such as health care or vacations—in large part because these benefits are required under the laws of these countries. These countries also have significantly lower screener turnover than the United States: turnover rates were about 50 percent or lower.

Finally, the responsibility for screening in most of these countries is placed with the airport authority or with the government, not with the air carriers as it is in the United States. In Belgium, France, and the United Kingdom, the responsibility for screening has been placed with the airports, which either hire screening companies to conduct the screening operations or, as at some airports in the United Kingdom, hire screeners and manage the checkpoints themselves. In the Netherlands, the government is responsible for passenger screening and hires a screening company to conduct checkpoint operations, which are overseen by a Dutch police force. We note that, worldwide, of 102 other countries with international airports, 100 have placed screening responsibility with the airports or the government; only 2 other countries—Canada and Bermuda—place screening responsibility with air carriers.

Because each country follows its own unique set of screening practices, and because data on screeners’ performance in each country were not available to us, it is difficult to measure the impact of these different practices on improving screeners’ performance. Nevertheless, there are indications that, for at least one country, these practices may help to improve screeners’ performance. This country conducted a screener testing program jointly with FAA that showed that its screeners detected over twice as many test objects as did screeners in the United States.

Mr. Chairman, this concludes my prepared statement. I will be pleased to answer any questions that you or Members of the Committee may have. For more information, please contact Gerald L. Dillingham at (202) 512-2834. Individuals making key contributions to this testimony included Bonnie Beckett, J. Michael Bollinger, Colin J. Fallon, John R. Schulze, and Daniel J. Semick.

Responses of Federal Agencies and Airports We Surveyed About Access Security Improvements (GAO-01-1069R, Aug. 31, 2001).
Aviation Security: Additional Controls Needed to Address Weaknesses in Carriage of Weapons Regulations (GAO/RCED-00-181, Sept. 29, 2000).
FAA Computer Security: Actions Needed to Address Critical Weaknesses That Jeopardize Aviation Operations (GAO/T-AIMD-00-330, Sept. 27, 2000).
FAA Computer Security: Concerns Remain Due to Personnel and Other Continuing Weaknesses (GAO/AIMD-00-252, Aug. 16, 2000).
Aviation Security: Long-Standing Problems Impair Airport Screeners’ Performance (GAO/RCED-00-75, June 28, 2000).
Computer Security: FAA Is Addressing Personnel Weaknesses, But Further Action Is Required (GAO/AIMD-00-169, May 31, 2000).
Security: Breaches at Federal Agencies and Airports (GAO-OSI-00-10, May 25, 2000).
Combating Terrorism: How Five Foreign Countries Are Organized to Combat Terrorism (GAO/NSIAD-00-85, Apr. 7, 2000).
Aviation Security: Vulnerabilities Still Exist in the Aviation Security System (GAO/T-RCED/AIMD-00-142, Apr. 6, 2000).
Aviation Security: Slow Progress in Addressing Long-Standing Screener Performance Problems (GAO/T-RCED-00-125, Mar. 16, 2000).
Computer Security: FAA Needs to Improve Controls Over Use of Foreign Nationals to Remediate and Review Software (GAO/AIMD-00-55, Dec. 23, 1999).
FBI: Delivery of ATF Report on TWA Flight 800 Crash (GAO/OSI-99-18R, Aug. 13, 1999).
Aviation Security: FAA’s Actions to Study Responsibilities and Funding for Airport Security and to Certify Screening Companies (GAO/RCED-99-53, Feb. 25, 1999).
Air Traffic Control: Weak Computer Security Practices Jeopardize Flight Safety (GAO/AIMD-98-155, May 18, 1998).
Aviation Security: Progress Being Made, but Long-Term Attention Is Needed (GAO/T-RCED-98-190, May 14, 1998).
Aviation Security: Implementation of Recommendations Is Under Way, but Completion Will Take Several Years (GAO/RCED-98-102, Apr. 24, 1998).
Combating Terrorism: Observations on Crosscutting Issues (GAO/T-NSIAD-98-164, Apr. 23, 1998).
Aviation Safety: Weaknesses in Inspection and Enforcement Limit FAA in Identifying and Responding to Risks (GAO/RCED-98-6, Feb. 27, 1998).
Aviation Security: FAA’s Procurement of Explosives Detection Devices (GAO/RCED-97-111R, May 1, 1997).
Aviation Security: Commercially Available Advanced Explosives Detection Devices (GAO/RCED-97-119R, Apr. 24, 1997).
Aviation Security: Posting Notices at Domestic Airports (GAO/RCED-97-88R, Mar. 25, 1997).
Aviation Safety and Security: Challenges to Implementing the Recommendations of the White House Commission on Aviation Safety and Security (GAO/T-RCED-97-90, Mar. 5, 1997).
Aviation Security: Technology’s Role in Addressing Vulnerabilities (GAO/T-RCED/NSIAD-96-262, Sept. 19, 1996).
Aviation Security: Urgent Issues Need to Be Addressed (GAO/T-RCED/NSIAD-96-251, Sept. 11, 1996).
Terrorism and Drug Trafficking: Technologies for Detecting Explosives and Narcotics (GAO/NSIAD/RCED-96-252, Sept. 4, 1996).
Aviation Security: Immediate Action Needed to Improve Security (GAO/T-RCED/NSIAD-96-237, Aug. 1, 1996).

A safe and secure civil aviation system is a critical component of the nation’s overall security, physical infrastructure, and economic foundation. Billions of dollars and myriad programs and policies have been devoted to achieving such a system. Although it is not yet fully known what actually occurred on September 11, 2001, or which weaknesses in the nation’s aviation security apparatus contributed to those horrendous events, it is clear that serious weaknesses exist in our aviation security system and that their impact can be far more devastating than previously imagined. As reported last year, GAO’s review of the Federal Aviation Administration’s (FAA) oversight of air traffic control (ATC) computer systems showed that FAA had not followed some critical aspects of its own security requirements. Specifically, FAA had not ensured that ATC buildings and facilities were secure, that the systems themselves were protected, and that the contractors who access these systems had undergone background checks. Controls for limiting access to secure areas, including aircraft, have not always worked as intended. GAO’s special agents used fictitious law enforcement badges and credentials to gain access to secure areas, bypass security checkpoints at two airports, and walk unescorted to aircraft departure gates. Tests of screeners revealed significant weaknesses in their ability to detect threat objects located on passengers or contained in their carry-on luggage.
Screening operations in Belgium, Canada, France, the Netherlands, and the United Kingdom (countries whose systems GAO has examined) differ from this country's in some significant ways. These countries require more extensive qualifications and training for screeners, provide higher pay and better benefits, and often use different screening techniques, such as "pat-downs" of some passengers.
In 2002, after more than 15 years of scientific investigation, Congress approved the Yucca Mountain site in Nevada as a suitable location for the development of a long-term permanent repository for high-level nuclear waste. The Department of Energy (DOE) is responsible for developing and operating the repository, and the Nuclear Regulatory Commission (NRC) is responsible for licensing it. DOE is currently preparing an application to submit to NRC by December 2004 for a license to construct the repository. To obtain a license, DOE must, among other things, demonstrate to NRC that the repository will not exceed Environmental Protection Agency health and safety standards over a 10,000-year period. An ineffective quality assurance program runs the risk of introducing unknown errors into the design and construction of the repository that could lead to adverse health and safety consequences. To demonstrate compliance with the health standards over this 10,000-year period, DOE must rely primarily on a “performance assessment” computer model that incorporates over 1,000 data sources, approximately 60 scientific models, and more than 400 computer software codes to simulate the performance of the repository. Given the prominence of computer modeling in the licensing of the repository, one of DOE’s most important tasks is to demonstrate the adequacy of the data, models, and software used to perform the simulation. In addition, as part of the licensing process, DOE must demonstrate that its quality assurance program can effectively identify and correct deficiencies in areas important to the safe operation and long-term performance of the repository, such as the natural and engineered barriers of the repository and the program’s data, models, and software. See appendix I for more information on the role of quality assurance in the licensing process.

DOE has a long-standing history of attempting to correct quality assurance problems. In 1988, we identified significant problems with the quality assurance program, noting that NRC had identified many specific concerns about the Yucca Mountain program, including the likelihood that DOE’s heavy reliance on contractors and inadequate oversight would lead to quality-related problems; the possibility that Nevada would contest the licensing proceedings, thereby increasing the probability that DOE would have to defend its quality assurance program; the additional expense and time-consuming delays that would be needed to correct program weaknesses if DOE could not properly defend the quality of its work; and DOE staff’s and contractors’ negative attitude toward quality assurance.

Since the late 1990s, DOE has attempted to correct continuing quality assurance problems in three areas critical to the repository’s successful performance: the adequacy of the data sources, the validity of scientific models, and the reliability of computer software that have been developed at the site. These problems surfaced in 1998 when DOE began to run the initial versions of its performance assessment model. Specifically, DOE was unable to ensure that critical project data had been properly collected and tracked back to original sources. In addition, the department lacked a standardized process for developing scientific models used to simulate a variety of geologic events and an effective process for ensuring that computer software used to support the scientific models would work properly. DOE implemented actions in 1999 to correct these deficiencies and prevent their recurrence.
In 2001, similar deficiencies associated with models and software resurfaced. DOE attributed the recurrence to ineffective procedures and corrective actions, improper implementation of quality procedures by line managers, and personnel who feared reprisal for expressing quality concerns. To ensure that it adequately addressed the problems to prevent future recurrence, DOE developed a more comprehensive corrective action plan in July 2002, concentrating on actions needed to address the causes of the recurring problems while improving the organizational culture and instilling a strong commitment to quality in all project personnel. The plan detailed specific actions for both DOE and its contractor, Bechtel/SAIC Company, LLC (Bechtel), to strengthen the roles, responsibilities, accountability, and authority of project personnel; develop clearer quality assurance requirements and processes; improve program procedures; create an improved programwide corrective action process; and improve processes for ensuring that employees can raise project concerns without fear of reprisals. DOE reports that it has implemented almost all of the actions identified in its 2002 corrective action plan; however, recent audits and assessments indicate that recurring quality assurance problems have not been corrected. In 2003, DOE conducted three audits to evaluate the effectiveness of the corrective actions taken to address recurring problems with data, models, and software. Because each audit identified additional quality assurance problems, DOE concluded that there was insufficient evidence to demonstrate that the recurring problems had been corrected. DOE recently closed the corrective action reports for data and software, but did so without determining whether corrective actions have been effective. To examine actions taken to correct some of the management weaknesses identified in the 2002 corrective action plan, DOE conducted four management assessments late in 2003. Collectively, these assessments found continuing management weaknesses that DOE had identified as root causes of the recurring problems. NRC also conducted an assessment that was issued in April 2004. NRC’s assessment noted some improvements but also found continuing weaknesses and noted that quality assurance problems could hinder the licensing process. In 2003, DOE’s audits of data, models, and software identified continuing quality problems that could impede DOE’s license application. As a result, DOE could not close corrective action reports for models and software for nearly 3 years. In a June 2003 audit, DOE found quality problems in developing and validating software. In September 2003, DOE quality assurance auditors found that some data sets were still not qualified or traceable to their sources. In October 2003, a DOE audit found continuing quality problems in model documentation and validation. DOE officials have stated that these findings represent problems with procedures and documentation and do not invalidate the technical products produced using the data, models, and software. In March 2004, DOE closed the corrective action reports for data and software but did so without evaluating the effectiveness of corrective actions—according to agency officials, they will evaluate effectiveness at a later date. DOE anticipates closing the corrective action report for models in August 2004 but also plans to do so without evaluating the effectiveness of corrective actions. 
In April 2003, DOE again reported significant problems similar to those originally identified in 1998 with the qualification and traceability of data sets. At the time, DOE implemented corrective actions to recheck all of its data sets to confirm that they were traceable and qualified. However, a September 2003 audit identified similar data problems and new problems in addition to those noted in the corrective action report. The audit found that some data sets did not have the documentation needed to trace them back to their sources; the critical process of data control and management was not satisfactory; and, as in 1998, faulty definitions were developed for data procedures, which allowed unqualified data to be used. In addition, DOE found that overall compliance with procedures was unsatisfactory. Similarly, the April 2003 corrective action report also noted a lack of management leadership, accountability, and procedural compliance, issues which are closely related to the key improvement area of roles and responsibilities. DOE officials noted that these findings represented noncompliance with procedures, and that the procedures and processes were effective in producing defensible technical products if properly followed. As of February 2004, DOE had not finished rechecking all of its data sets or correcting problems in its data sets. However, DOE closed the corrective action report in March 2004 by making the rechecking process a continuing part of the Yucca Mountain repository’s work. The corrective action report was closed without DOE evaluating the effectiveness of the rechecking process in correcting problems with data. DOE officials stated that they plan to evaluate effectiveness at a later date. An October 2003 DOE quality assurance audit found continuing problems with the documentation and validation of models that DOE plans to use in its license application. Although auditors reported that processes were effective in producing defensible models to support the license application, they found that for some models sampled, project personnel did not properly follow model validation procedures. These problems were similar to those identified by audits conducted in 2001. Auditors compared results from the 2003 audit with actions taken to correct problems identified in 2001 and found that procedures still were not being satisfactorily implemented in the areas of model documentation and traceability, model validation, and checking and review. For example, an action was taken in 2001 to improve the self-identification of problems before issuing new model reports by allowing for sufficient scheduling time for model checking and review. However, the 2003 audit concluded that instances of new errors in model reports were evidence that the previous actions may not have been fully implemented. As a result, DOE has been unable to close the May 2001 model corrective action report for almost 3 years. DOE recently directed a team of industry experts to review its models and revise them to ensure consistency, traceability, and procedural compliance. DOE anticipates closing the corrective action report in August 2004 but will do so without conducting another audit of models to determine if corrective actions have been effective. In a June 2003 audit, DOE auditors discovered recurring software problems that could affect confidence in the adequacy of software codes. 
Specifically, the auditors found ineffective software processes in five areas: technical reviews, software classification, planning, design, and testing. The auditors found several of the software development problems to be similar to previously identified problems, indicating that previous actions were ineffective in correcting the problems. For example, auditors again noted instances of noncompliance with software procedures. They also concluded that technical reviews during software development were inadequate, even though documentation indicated that corrective actions for this condition had been completed 3 months before the 2003 audit. Auditors also noted poorly defined roles and responsibilities as a cause of problems identified in the technical review of software, even though DOE had taken actions under its 2002 corrective action plan to clarify roles and responsibilities. Because of these results, DOE was unable to close the June 2001 software corrective action report. DOE employed a team of industry professionals in the fall of 2003 to examine software quality problems identified from 1998 through 2003. The professionals’ February 2004 report concluded that software problems recurred because DOE did not assess the effectiveness of its corrective actions and did not adequately identify the root causes of the problems. In a January 2004 follow-up audit of software, auditors verified that unqualified software was used to run approved models, and noted that procedural controls for determining the adequacy of software were inadequate. In March 2004, without evaluating the effectiveness of corrective actions, DOE closed the software corrective action report. DOE officials plan to evaluate the effectiveness of its corrective actions for software sometime in the future. DOE reported in the fall of 2003 that it had implemented most of the actions identified in the plan focusing on management weaknesses, but four DOE management assessments of the Yucca Mountain project completed between September and November 2003 found that some of the identified management weaknesses had yet to be properly addressed. These assessments included one requested by project management comparing DOE’s management practices at Yucca Mountain with external industry best practices, one required as an annual assessment of the adequacy and effectiveness of the quality assurance program, one requested by the project director that examined the effectiveness of selected DOE and contractor management systems, and one examining the project work environment. Collectively, these assessments identified continuing weaknesses in the areas of roles and responsibilities, quality assurance procedures, and a work environment that did not foster employee confidence in raising concerns without fear of reprisal. DOE officials stated that they are presently reviewing the findings of these assessments, and have recently initiated additional corrective actions. Three of the four management assessments conducted late in 2003 identified significant continuing problems with the delineation and definition of roles and responsibilities for carrying out the quality assurance program. In its 2002 corrective action plan, DOE stated that it was not possible to build accountability into management without clearly and formally defining roles and responsibilities for DOE and its contractors. 
DOE’s planned actions included clarification of roles and responsibilities within DOE and Bechtel through policy statements, communications, a new program manual, and realignment of the organization to support performance accountability. DOE reported that it had completed all corrective actions in this area by May 2003. The assessments noted that these actions had resulted in some improvements, but that some management weaknesses remained. The assessments found that the Yucca Mountain project lacked formal mechanisms for defining and communicating roles and responsibilities that meet both DOE and NRC requirements; did not have a systematic process for assigning authorities to DOE and Bechtel organizations and individuals; relied on program managers who had not fully assumed ownership and responsibility for quality assurance; lacked formal control of documents outlining roles and responsibilities to ensure that they reflect the organization; lacked clear reporting relationships between the project and supporting organizations; had not adequately established processes for reviewing procedures; had few systematic and effective approaches in place for assigning accountability to individuals and organizations; and did not effectively plan and communicate reorganizations and assign appropriate authority levels, in the opinion of many project employees. As a result of findings from these assessments, DOE is pursuing further corrective actions. For example, DOE plans to formally control the high-level document that defines its organizational structure. Also, Bechtel has initiated a management system improvement project, which includes issuing a new document defining roles and responsibilities. DOE officials expect that roles and responsibilities will remain a challenge in the future but said that improvement efforts will continue. Three of the four management assessments identified continuing problems with project procedures, one of the areas of management weaknesses addressed by the 2002 corrective action plan. Although the assessments noted that DOE and Bechtel had made improvements in the procedure management system and DOE had reportedly reviewed existing procedures, issued new or revised procedures, and ensured that personnel using the procedures were properly trained, the assessments found that procedures were overly prescriptive, procedures did not cover all required processes, and continuing noncompliance with procedures remained a problem. Although DOE completed actions under the 2002 plan to revise project procedures, DOE has initiated further corrective actions, including a plan to again revise Yucca Mountain project procedures by June 2005. Three of the four assessments identified continuing problems with efforts by DOE and Bechtel to ensure a work environment in which employees can freely raise concerns without fear of reprisal—one of the key areas of management weaknesses identified in the corrective action plan. DOE and Bechtel implemented corrective actions to improve the work environment by revising and expanding policies, modifying DOE contracts to require implementation of program requirements, decreasing the backlog of employee concerns, and providing programwide training that is based on industry practices. However, the assessments revealed continuing problems with the work environment, including both DOE’s and Bechtel’s employee concerns programs, which provide personnel with an opportunity to formally raise concerns about the project outside the normal chain-of-command without fear of reprisal.
Appendix II describes the requirements of the Yucca Mountain employee concerns programs. Although the assessments noted ongoing management actions to strengthen the implementation of the concerns programs, they also noted that neither DOE nor Bechtel has effectively controlled corrective actions under the employee concerns programs, sometimes closing cases on the basis of anticipated actions; both DOE and contractor employee concerns programs are not being utilized to their fullest; there is a general lack of employee confidence in reporting safety issues; DOE and Bechtel have not made effective resources available for determining the root causes of problems identified; DOE and Bechtel have not established a climate of trust despite communication mechanisms and messages; and a majority of DOE and contractor employees either do not consider the project’s corrective action process to be effective or are not sure of its effectiveness. Although the plan’s actions to improve the work environment were completed in November 2003, DOE plans to take additional actions to improve employee confidence in raising issues without fear of reprisal. NRC has commented on DOE’s lack of progress in making improvements to the quality assurance program. At an April 2003 management meeting with DOE, an NRC official commented that the quality assurance program had not produced the outcomes necessary to ensure that the program is compliant with NRC requirements. In response, DOE outlined the steps it was taking to ensure that its license application would meet NRC expectations for completeness, accuracy, and compliance with quality assurance requirements. The steps included additional actions to improve performance in five areas: license application, procedural compliance, the corrective action program, the work environment, and accountability. In October 2003, DOE reported to NRC that it had completed some of the actions and was making progress in the remaining open action items. While NRC officials noted that DOE’s actions might enhance performance, they found that significant implementation issues persisted. NRC officials stated that they were seeking evidence of incremental DOE progress in the implementation of the quality assurance program in order to gain confidence in the adequacy of data, models, and software supporting the potential license application. In a November 2003 management meeting with DOE, NRC officials expressed encouragement with DOE’s progress in implementing an improved corrective action process and the continued performance of effective audits and the identification of areas for improvement. However, the NRC staff continued to express concerns with DOE’s lack of progress in correcting repetitive quality problems with models and software. NRC staff further cautioned: “…if DOE continues to use its existing policies, procedures, methods, and practices at the same level of implementation and rigor, the license application may not contain information sufficient to support some technical positions in the application. This could result in a large volume of requests for additional information in some areas which could extend the review process, and could prevent NRC from making a decision regarding issuing a construction authorization to DOE within the time required by law.” DOE cannot formally assess the overall effectiveness of its 2002 corrective action plan because the performance goals to assess management weaknesses in the plan lack objective measurements and time frames for determining success.
For example, the goals do not specify the amount of improvement expected, how quickly the improvement should be achieved, or how long the improvement should be sustained before the problems can be considered corrected. For instance, whereas 1 goal calls for a decreasing trend in the average time needed to make revisions in procedures, it does not specify the desired amount of the decrease, the length of time needed to achieve the decrease, or how long the decrease must be sustained. DOE recently developed a management tool to measure overall project performance that includes more than 200 performance indicators with supporting goals, including 17 goals linked to the 13 goals included in the 2002 corrective action plan. These 17 goals specify the desired amount of improvement, but most still lack the time frames needed for achieving and sustaining the goals. DOE officials told us they intend to use this performance measurement tool to track the progress of the project, including actions taken under the 2002 corrective action plan. A DOE independent review of the corrective action plan completed in March 2004 found that the corrective actions from the 2002 plan to address management weaknesses have been fully implemented. However, the review also noted that the effectiveness of corrective actions under the plan could not be evaluated because many of the goals in the performance measurement tool that are linked to the 2002 plan lacked the level of objectivity and testing needed to measure effectiveness. DOE’s 2002 plan included 13 goals to be used to determine the effectiveness of the corrective actions that addressed the five areas of management weaknesses. However, these goals were poorly defined, thus limiting DOE’s ability to evaluate the effectiveness of actions taken. Both GAO and the Office of Management and Budget (OMB) have stated that performance goals need to be measurable, and time frames need to be established in order to track progress and demonstrate that deficiencies have been corrected. Of the 13 goals in the corrective action plan, 3 indicated how much improvement was expected. For example, 1 of the 3 goals specified that the number of significant quality problems self-identified by program managers should be at least 80 percent of all significant quality problems, including those identified by program managers, quality assurance auditors, or other employees. In contrast, 1 of the other 10 goals called for the achievement of a decreasing trend in the time needed for revising procedures, but did not specify how much of a decrease was expected. Further, none of the 13 goals specified the length of time needed to reach and maintain the desired goal to demonstrate that the actions taken were effective. For example, the goal calling for self-identified significant quality problems to be at least 80 percent of all significant quality problems did not indicate the length of time needed to achieve the goal or how long this goal should be sustained in order to demonstrate effectiveness. DOE does not intend to revise the goals of the 2002 corrective action plan to include quantifiable measures and time frames. Without such quantifiable measures to determine whether a goal has been met, and without a specified time for the goal to be maintained, DOE cannot use these goals to determine the effectiveness of the actions taken. DOE’s recent efforts to improve performance measurement have not allowed it to adequately measure the effectiveness of its corrective action plan.
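To make concrete what quantifiable measures and time frames could look like for a goal such as the 80 percent self-identification target, the following is a minimal, hypothetical sketch in Python; it is not a DOE or GAO tool. It treats a goal as met only if monthly rates reach the target no later than a stated deadline and stay at or above it for a required number of consecutive months. The 80 percent threshold comes from the plan, while the deadline, sustainment period, and data shown are illustrative assumptions.

```python
from typing import Dict

def goal_met(monthly_rates: Dict[str, float],
             target: float = 0.80,
             achieve_by: str = "2004-12",
             sustain_months: int = 6) -> bool:
    """True if the rate reaches `target` no later than `achieve_by` and stays
    at or above it for `sustain_months` consecutive months."""
    months = sorted(monthly_rates)  # "YYYY-MM" keys sort chronologically
    for i, month in enumerate(months):
        window = months[i:i + sustain_months]
        if (month <= achieve_by
                and len(window) == sustain_months
                and all(monthly_rates[m] >= target for m in window)):
            return True
    return False

if __name__ == "__main__":
    # Illustrative data only: the rate improves but is not yet sustained at 80 percent.
    rates = {"2004-07": 0.42, "2004-08": 0.55, "2004-09": 0.81,
             "2004-10": 0.78, "2004-11": 0.83, "2004-12": 0.84}
    print(goal_met(rates, sustain_months=3))  # False: no 3-month streak at or above 0.80
```

A goal written this way can be declared met or not met from the data alone, which is the property the plan's original goals lacked.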
DOE has developed a projectwide performance measurement tool to assess project performance that includes over 200 performance indicators with supporting goals related to the project. At our request, Bechtel was able to link 17 of the supporting goals to 12 of the 13 goals of the 2002 corrective action plan. Although these linked goals improved quantifiable measurement for 11 of the plan’s goals by specifying the amount of improvement expected, most did not include the necessary time frames for meeting the goals and sustaining the desired performance. DOE officials stated that this tool was not specifically tailored to evaluate the corrective action plan’s effectiveness, but that they have decided to use it in lieu of the original 13 goals to monitor improvements and progress in correcting the management weaknesses identified in the plan. Table 1 provides a comparison of the supporting goals in the performance tool with the 2002 corrective action plan goals. DOE has recently assessed the implementation of corrective actions, but it has not yet assessed the effectiveness of these actions in correcting recurring problems. In December 2003, DOE outlined the approach it used to determine whether corrective actions have been implemented. This approach is part of the overall process described in the 2002 action plan—appendix III provides an overview of the action plan and the status of the process. To determine if corrective actions had been implemented, DOE relied on the collective judgment of project managers regarding how effectively they have incorporated corrective actions into their regular project activities. A March 2004 DOE review analyzed the implementation of corrective actions for each of the management weaknesses but was not able to evaluate the effectiveness of the corrective actions. DOE’s March 2004 review noted strong management commitment to improvement and described recent actions taken to ensure that work products meet quality objectives for a successful license application. However, the review identified continuing weaknesses in DOE’s ability to determine the effectiveness of the actions it has taken. The review team attempted to measure how effectively DOE had met each of the plan’s original 13 goals. The team was unable to measure whether 10 of the 13 goals had been met, but concluded that the project had met 2 of the goals and made progress toward another goal, based on an analysis of trends in quality problems identified. However, these conclusions were not based on an evaluation of quantifiable goals with time frames for meeting and sustaining the desired performance. The review also concluded that the performance indicators developed to evaluate the success of the actions lacked the level of objectivity and testing needed to measure effectiveness and that some lacked the data needed to assess effectiveness. The review recommended that DOE continue its corrective actions and refine performance indicators so that the effectiveness of corrective actions in meeting the plan’s goals can be more readily measured. In April 2004, DOE notified NRC that it had completed, validated, and independently assessed the commitments it made in the 2002 corrective action plan, institutionalized the corrective actions, and established a baseline to foster and sustain continuous improvement. DOE officials stated they have achieved the initial goals of the 2002 plan through these actions.
These officials indicated they would continue to refine and improve project tools used to evaluate the effectiveness of corrective actions. However, because of the limitations noted in its March 2004 review, DOE has not yet evaluated the effectiveness of corrective actions. Although DOE has worked for nearly 3 years to address recurring quality assurance problems, recent audits and assessments have found that problems continue with data, models, and software, and that management weaknesses remain. As NRC has noted, quality assurance problems could delay the licensing process. Despite recurring quality problems, DOE has recently closed the corrective action reports for data and software and intends to close the corrective action report for models in August 2004 without first evaluating the effectiveness of the corrective actions taken to address the problems in these areas. DOE also does not intend to improve the goals of the 2002 plan associated with management weaknesses so that they can be adequately measured. Instead, DOE continues to plan and implement further actions to correct its quality problems and management weaknesses. This approach provides no indication regarding when DOE may be in a position to show that corrective actions have been successful. Entering into the licensing phase of the project without resolving the recurring problems could impede the application process, which at a minimum could lead to time-consuming and expensive delays while weaknesses are corrected and could ultimately prevent DOE from receiving authorization to construct a repository. Moreover, recurring problems could create the risk of introducing unknown errors into the design and construction of the repository that could lead to adverse health and safety consequences. Because it lacks evidence that its actions have been successful, DOE is not yet in a position to demonstrate to NRC that its quality assurance program can ensure the safe construction and long-term operation of the repository. To better evaluate the effectiveness of management actions in correcting recurring quality problems, we recommend that the Secretary of Energy direct the Director, Office of Civilian Radioactive Waste Management, to (1) revise the performance goals in the 2002 action plan to include quantifiable measures of the performance expected and time frames for achieving and maintaining this expected level of performance, and (2) close the 2002 plan once sufficient evidence shows that the recurring quality assurance problems and management weaknesses that are causing them have been successfully corrected. We provided a draft of this report to DOE and NRC for their review and comments. DOE’s written comments, which are reproduced in appendix IV, expressed disagreement with the report’s findings and recommendations. DOE commented that the report did not properly acknowledge improvements the department has made in the quality assurance program; failed to properly characterize the 2002 Management Improvement Initiatives as a “springboard” to address management issues; did not consider DOE’s use of the full range of performance indicators related to quality assurance; and mischaracterized the results of several independent, external reviews, taking a solely negative view of the findings. We disagree with most of DOE’s comments.
Our draft report acknowledged that DOE has taken a number of actions to address past problems in the quality assurance program, but to ensure clarity on this point, we have incorporated additional language to this effect in the report. However, our primary focus for this review was to evaluate the effectiveness of DOE’s corrective actions in addressing the recurring quality problems. Despite the many actions taken to improve the quality assurance program, the management weaknesses and quality problems with data, models, and software have continued, indicating that the corrective actions have not been fully effective. Regarding DOE’s views on our treatment of the 2002 Management Improvement Initiatives, DOE itself characterized the initiative as a “comprehensive corrective action plan.” DOE stated that the implementation of the plan has been successful based on the evidence that responsible managers have taken agreed-upon action. This approach can be misleading, however, because it does not incorporate a determination of whether these actions have been effective. In fact, DOE has not evaluated the effectiveness of these actions in solving recurring problems. DOE further stated that we did not consider the full range of performance indicators related to quality assurance that DOE uses to manage the project. We agree. We asked DOE staff to compare their new performance indicators to the goals in the 2002 plan, and those are the goals that we presented for comparison in table 1 of our report. A discussion of the remainder of the hundreds of other goals was beyond the scope of our review and would not have added to an understanding of the overall problems with DOE’s goals. Finally, we disagree with DOE’s comment that we mischaracterized the results of recent independent reviews. We noted instances in these reports where improvements were found. However, we also devoted appropriate attention to evidence in these reports that addresses whether DOE’s corrective actions have been effective. As our report states, these reports consistently found that these actions have not yet had their intended effect. In NRC’s written comments, reproduced in appendix V, the agency agreed with our conclusions but suggested that DOE be given the flexibility to choose alternative approaches to achieve and measure quality assurance program performance. We agree that alternative approaches could be used to measure performance; however, to ensure the success of any approaches, DOE must include objective measurements and time frames for reaching and sustaining desired performance and include an end point for closing out the corrective action plan. To assess the status of DOE’s corrective actions to resolve recurring quality problems, we reviewed audits and deficiency reports written by the program over the past 5 years that identified problems with data, models, and software. We did not independently assess the adequacy of data, models, and software, but rather relied on the results of the project’s quality assurance audits. In addition, we reviewed numerous documents that NRC prepared as part of its prelicensing activities at Yucca Mountain, including observations of quality assurance audits, NRC on-site representative reports, and correspondence between NRC and DOE on quality matters. We also observed an out-briefing of a quality assurance audit to obtain additional knowledge of how quality problems are identified and reported.
To document the status of actions taken, we reviewed evidence used by DOE’s Office of Civilian Radioactive Waste Management to prove corrective actions had been implemented and interviewed officials with DOE, at the Yucca site and in headquarters, and officials with Bechtel, the primary contractor. We also reviewed the results of four DOE assessments completed in the fall of 2003 that included the quality assurance program, interviewing the authors of the assessment reports to obtain a clear understanding of the problems identified. We attended quarterly meetings held between DOE and NRC to discuss actions taken under the plan and met with representatives of the State of Nevada Agency for Nuclear Projects and with representatives of the Nuclear Waste Technical Review Board, which was established to advise DOE on scientific and technical aspects of the Yucca Mountain project. To determine the adequacy of DOE’s plan to measure the effectiveness of the actions it has taken, we examined the July 2002 corrective action plan and subsequent project performance measurement documents to determine how DOE intended to use goals and performance measures to evaluate the plan’s effectiveness. We asked Bechtel officials to assist us in identifying and matching performance goals in the projectwide performance measurement tool with those in the 2002 corrective action plan. We compared DOE’s approach in its corrective action plan and subsequent projectwide tool with GAO and OMB guidance on performance measurement. We discussed the implementation of the corrective action plan and methods for measuring its effectiveness with various DOE and NRC officials and DOE contractors in Washington, D.C., and at the Yucca Mountain project office in Las Vegas, Nevada. We also interviewed other GAO personnel familiar with performance measurement to more fully understand the key elements needed for effective assessments. We will send copies of this report to the appropriate congressional committees, the Secretary of Energy, and the Chairman of the Nuclear Regulatory Commission. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-3841. Major contributors to this report are listed in appendix VI. After the Department of Energy (DOE) submits its license application to the Nuclear Regulatory Commission (NRC), NRC will review it to determine whether all NRC requirements have been met and whether the repository is likely to operate safely as designed. NRC’s review will be guided by its Yucca Mountain Review Plan, which NRC developed to ensure the quality, uniformity, and consistency of NRC reviews of the license application and of any requested amendments. The review plan is not a regulation, but does reflect the licensing criteria contained in federal regulations.
DOE’s application is to include general, scientific, and administrative information contained in two major sections: (1) a general information section that provides an overview of the engineering design concept for the repository and describes aspects of the Yucca Mountain site and its environs that influence repository design and performance, and (2) a detailed safety analysis section that provides a review of compliance with regulatory performance objectives that are based on permissible levels of radiation doses to workers and the public, established on the basis of acceptable levels of risk. The general information section covers such topics as proposed schedules for construction, receipt, and emplacement of waste; the physical protection plan; the material control and accounting program; and a description of site characterization work. The detailed safety analysis is the major portion of the application and includes DOE’s detailed technical basis for the following areas: the repository’s safety performance before permanent closure; the repository’s safety performance in the 10,000 years after permanent closure, on the basis of the “performance assessment” computer model; a research and development program describing safety features or components for which further technical information is required to confirm the adequacy of design and engineered or natural barriers; a performance confirmation program that includes tests, experiments, and analyses that evaluate the adequacy of information used to demonstrate the repository’s safety over thousands of years; and administrative and programmatic information about the repository, such as the quality assurance program, records and reports, training and certification of personnel, plans for start-up activities, emergency planning, and control of access to the site. After DOE submits the license application (currently planned for December 2004), NRC plans to take 90 days to examine the application for completeness to determine whether DOE has addressed all NRC requirements in the application. One of the reviews for completeness will include an examination of DOE’s documentation of the quality assurance program to determine if it addresses all NRC criteria. These criteria include, among other things, organization, design and document control, corrective actions, quality assurance records, and quality audits. If it deems any part of the application incomplete, NRC may either reject the application or require that DOE furnish the necessary documentation before proceeding with the detailed technical review of the application. If it deems the application complete, NRC will docket the application, indicating its readiness for a detailed technical review. Once the application is docketed, NRC will conduct a detailed technical review of the application over the next 18 months to determine if the application meets all NRC requirements, including the soundness of scientific analyses and preliminary facility design, and the NRC criteria established for quality assurance. If NRC discovers problems with the technical information used to support the license application, it may conduct specific inspections to determine the extent and effect of the problem. Because the data, models, and software used in modeling repository performance are integral parts of this technical review, quality assurance plays a key role since it is the mechanism used to verify the accuracy of the information DOE presents in the application.
NRC may conduct inspections of the quality assurance program if technical problems are identified that are attributable to quality problems. According to NRC, any technical problems and subsequent inspections could delay the licensing of the repository or, in a rare instance, lead to ultimate rejection of the application. NRC will hold public hearings chaired by its Atomic Safety and Licensing Board to examine specific topics. Finally, within 3 to 4 years from the date that NRC dockets the application, NRC will make a decision to grant the application, reject the application, or grant it with conditions. Figure 1 presents the licensing process and timeline. DOE and Bechtel/SAIC Company, LLC (Bechtel), have each established an employee concerns program to allow employees to raise concerns about the work environment without fear of reprisal. NRC requires licensees to establish a safe work environment where (1) employees are encouraged to raise concerns either to their own management or to NRC without fear of retaliation and (2) employees’ concerns are resolved in a timely and appropriate manner according to their importance. DOE and contractor employees at Yucca Mountain have various means through which to raise concerns about safety, quality, or the work environment, including a corrective action program—a process in which any employee can formally cite a problem on the project, including the work environment, that needs to be investigated and corrective actions taken; a DOE or contractor employee concerns program; or filing a concern directly with NRC. NRC encourages, but does not require, licensees to establish employee concerns programs. Both the DOE and Bechtel concerns programs at Yucca Mountain have three main steps: 1. An employee notifies concerns program staff about an issue that he/she feels should be corrected, such as safety and health issues, harassment or retaliation, or quality assurance problems. 2. The concerns program staff documents and investigates the employee’s concern. 3. The concerns program notifies the employee of the results of the investigation and notifies management of any need for corrective actions. DOE and Bechtel each have established a communication network to allow employees to register concerns. These networks include brochures and regular newsletters on the program and numerous computer links to the program on the contractor’s intranet where employees can obtain concerns program forms online. Recent statistics released by DOE show that most of the 97 concerns investigated by the DOE and Bechtel concerns programs in 2003 related to complaints against management. A summary of the concerns investigated in 2003 is shown in table 2. DOE has established a process for completing corrective actions associated with the 2002 corrective action plan and evaluating their effectiveness. According to this process, after managers report they have taken actions to correct management weaknesses and specific problems with models and software, a confirmation team of DOE and contractor personnel verifies that the actions have been completed. Once this step is completed, DOE conducts internal and external effectiveness reviews to determine if the actions have been effective in correcting the reported conditions. After the reviews of effectiveness, the results are assessed and reported to the Director of the Office of Civilian Radioactive Waste Management (OCRWM).
The director then notifies NRC officials of the results of the effectiveness reviews, and the 2002 corrective action plan is closed. Figure 2 shows the corrective action plan process and the status of each step. The following are GAO’s comments on the Department of Energy’s letter dated April 19, 2004. 1. We disagree. Our report states that the recent independent assessments have shown improvements in the key management areas identified in the 2002 corrective action plan. However, the assessments also showed that problems remain in these areas and thus the corrective actions have not yet been successful in correcting these weaknesses. DOE’s examples of progress illustrate our point regarding improperly specified goals. For example, DOE states in its comments that line management’s self-identification of conditions adverse to quality has increased approximately 100 percent in the last 15 months (as opposed to the identification of such conditions by quality assurance auditors). However, despite this seemingly dramatic increase, DOE has yet to meet its goal of line management’s self-identifying 80 percent of all quality problems. (DOE’s 100 percent increase brought the rate up to about 50 percent of all quality problems being self-identified by line managers.) Further, the goal continues to lack a time frame for when the 80 percent goal should be attained and for how long it should be sustained before the corrective actions can be judged successful. As our report points out, without such specificity, improvements cannot be evaluated in terms of overall success. 2. We disagree. The 2002 Management Improvement Initiatives document clearly states that it was a “comprehensive corrective action plan necessary to address weaknesses in the implementation of quality assurance requirements and attain a level of performance expected of an NRC license applicant.” Contrary to DOE’s assertion, the initiative does not indicate it was a “springboard effort to address management issues and transition improvements into day-to-day line management activities.” Although the transitioning of improvements to the line is laudable, the initiative focused on implementing corrective actions and evaluating the effectiveness of the actions in correcting problems. This approach is consistent with DOE’s criteria for correcting significant conditions adverse to quality, and these are the criteria we relied on to determine whether the corrective actions specified in the initiatives were successful. 3. We agree. We did not include the full range of performance indicators (goals) that have recently been developed, and continue to change, to assess the 2002 plan. Instead, of the hundreds of indicators that are being developed to manage the project, we relied on those few that Bechtel officials told us were connected to the goals of the 2002 plan. As table 1 shows, some improvements have been made in specifying the quantitative aspects of the goals, but weaknesses continue to exist in the new goals. In fact, table 1 shows that DOE no longer has a goal in its performance tool that specifically tracks the trend in problems related to roles and responsibilities. This omission is particularly important because the area of roles and responsibilities was noted in the 2002 plan as one of the biggest sources of problems in the quality assurance process, and, as the recent assessments have found, this is an area with continuing problems. 4. We disagree. We acknowledge that these reviews found positive improvements in a number of management areas.
However, we also note that continuing problems were found with management weaknesses despite all corrective actions having been implemented in 2003. 5. While DOE believes that it has achieved the objectives of the 2002 plan, it lacks evidence that its actions have been effective in addressing the management weaknesses and correcting the recurring quality problems with data, models, and software. Evaluating performance against measurable goals with time frames for meeting and sustaining the goals would provide the needed evidence. 6. The draft report that we sent to DOE for review included reviews of 9 of the 12 documents listed in the enclosure of DOE’s letter. We have since reviewed the 3 remaining documents. The information in the 3 documents did not change our assessment of DOE’s efforts to correct its quality assurance program. After full consideration of the information included in DOE’s comments, we believe that our findings are complete and our conclusions are accurate. The following is GAO’s comment on the U.S. Nuclear Regulatory Commission’s letter dated April 16, 2004. 1. We agree that alternative approaches could be used to measure performance; however, to ensure the success of any approaches, DOE must include objective measurements and time frames for reaching and sustaining desired performance and include an end point for closing out the corrective action plan. In addition to the individual named above, Robert Baney, Lee Carroll, Thomas Kingham, Chalane Lechuga, Jonathan McMurray, Judy Pagano, Katherine Raheb, Anne Rhodes-Kline, and Barbara Timmerman made key contributions to this report.
This report assesses the status of corrective actions and the adequacy of DOE's plan to measure the effectiveness of actions taken. DOE has reportedly implemented most of the actions in its 2002 corrective action plan, but recent audits and assessments have identified lingering quality problems with data, models, and software and continuing management weaknesses. Audits revealed that some data sets could not be traced back to their sources, model development and validation procedures were not followed, and some processes for software development and validation were inadequate or not followed. DOE believes these problems have not affected the technical basis of the project; however, they could adversely affect the licensing process. Recent assessments identified continuing management weaknesses in the areas of roles and responsibilities, quality assurance policies and procedures, and a work environment that did not foster employee confidence in raising concerns without fear of reprisal. NRC has acknowledged DOE's effectiveness in identifying quality problems, but recently concluded that quality problems could delay the licensing process. DOE cannot assess the effectiveness of its 2002 plan because the performance goals to assess management weaknesses lack objective measurements and time frames for determining success. The goals do not specify the amount of improvement expected, how quickly the improvement should be achieved, or how long the improvement should be sustained before the problems can be considered corrected. DOE recently developed a measurement tool that incorporates and revises some of the goals from the action plan, but most of the revised goals continue to lack the necessary time frames needed to determine whether the actions have corrected the recurring problems. A recently completed DOE review of the 2002 plan found that the corrective actions have been fully implemented. However, the review also noted the effectiveness of the actions could not be evaluated because many of the plan's goals lacked the level of objectivity and testing needed to measure effectiveness. |
The ability to find, organize, use, share, appropriately dispose of, and save records—the essence of records management—is vital for the effective functioning of the federal government. In the wake of the transition from paper-based to electronic processes, records are increasingly electronic, and the volumes of electronic records produced by federal agencies are vast and rapidly growing, providing challenges to NARA as the nation’s recordkeeper and archivist. Besides sheer volume, other factors contributing to the challenge of electronic records include their complexity and their dependence on software and hardware. Electronic records come in many forms: text documents, e-mails, Web pages, digital images, videotapes, maps, spreadsheets, presentations, audio files, charts, drawings, databases, satellite imagery, geographic information systems, and more. They may be complex digital objects that contain embedded images (still and moving), drawings, sounds, hyperlinks, or spreadsheets with computational formulas. Some portions of electronic records, such as the content of dynamic Web pages, are created on the fly from databases and exist only during the viewing session. Others, such as e-mail, may contain multiple attachments, and they may be threaded (that is, related e-mail messages are linked into send–reply chains). In addition, the computer operating systems and the hardware and software that are used to create electronic documents can become obsolete. If they do, they may leave behind records that cannot be read without the original hardware and software. Further, the storage media for these records are affected by both obsolescence and decay. Media may be fragile, have limited shelf life, and become obsolete in a few years. For example, few computers today have disk drives that can read information stored on 8- or 5¼-inch diskettes, even if the diskettes themselves remain readable. Another challenge is the growth in electronic presidential records. The Presidential Records Act gives the Archivist of the United States responsibility for the custody, control, and preservation of presidential records upon the conclusion of a President’s term of office. The act states that the Archivist has an affirmative duty to make such records available to the public as rapidly and completely as possible consistent with the provisions of the act. In response to these widely recognized challenges, the Archives began a research and development program to develop a modern archive for electronic records. In 2001, NARA hired a contractor to develop policies and plans to guide the overall acquisition of an electronic records system. In December 2003, the agency released a request for proposals for the design of ERA. In August 2004, NARA awarded two firm-fixed-price contracts for the design phase totaling about $20 million—one to Harris Corporation and the other to Lockheed Martin Corporation. On September 8, 2005, NARA announced the selection of Lockheed Martin Corporation to build the ERA system. The contract with Lockheed is a cost-plus-award-fee contract with a total value through 2012 of about $317 million. As of April 2009, the life-cycle cost for ERA through March 2012 was estimated at $551.4 million; the total life-cycle cost includes not only the development contract costs, but also program management, research and development, and program office support, among other things.
Through fiscal year 2008, NARA had spent about $237 million on ERA, including about $112 million in payments to Lockheed Martin. The purpose of ERA is to ensure that the records of the federal government are preserved for as long as needed, independent of the original hardware or software that created them. ERA is to provide the technology to ensure that NARA’s electronic records holdings can be widely accessed with the technology currently in use. The system is to enable the general public, federal agencies, and NARA staff to search and access information about all types of federal records, whether in NARA custody or not, as well as to search for and access electronic records stored in the system. Using various search engines, the system is to provide the ability to create and execute searches, view search results, and select assets for output or presentation. NARA currently plans to deliver ERA in five separate increments: ● Increment 1, also known as the ERA base, included functions focused on the transfer of electronic records into the system. ● Increment 2 includes the Executive Office of the President (EOP) system, which was designed to handle electronic records from the White House at the end of the previous administration. The EOP system uses an architecture based on a commercial off-the-shelf product that supplies basic requirements, including rapid ingest of records and immediate and flexible search of content. Increment 2 also includes basic case management for special access requests. ● According to NARA’s 2010 ERA expenditure plan, Increment 3 is to include new Congressional and Public Access systems. It is also to augment the base system with commercial off-the-shelf technology to increase flexibility and scalability. NARA plans to complete this increment by June 2010. ● Increments 4 and 5 are to provide additional ERA functionality, such as backup and restore functions and wider search capabilities, and provide full system functionality by 2012. NARA’s progress in developing ERA includes achieving initial operating capability for the first two of its five planned increments. However, this progress came after NARA had experienced significant project delays and increased costs. NARA also deferred functions planned for Increment 1 to later increments. As we reported in 2007, the initial operating capability for Increment 1 was originally scheduled to be achieved by September 2007. However, the project experienced delays due to factors such as low productivity of contractor software programmers, difficulties in securing an acceptable contract to prepare the site that was to house the system, and problems with software integration. These delays put NARA’s initial plan to use ERA to receive the electronic presidential records of the Bush Administration in January 2009 at risk. In response, NARA and Lockheed Martin agreed to a revised schedule and strategy that called for the concurrent development of two separate systems, which could later be reintegrated into a single system: ● First, they agreed to continue development of the original system but focused the first increment on the transfer of electronic records into the system. Other initially planned capabilities were deferred to later increments, including deleting records from storage, searching item descriptions, and ingesting records redacted outside of the system. NARA now refers to this as the “base” ERA system. Initial operating capability for this increment was delayed to June 2008.
● Second, NARA conducted parallel development of a separate increment dedicated initially to receiving electronic records from the outgoing Bush Administration in January 2009. This system, referred to as the Executive Office of the President (EOP) system, uses a different architecture from that of the ERA base: it was built on a commercial product that was to provide the basic requirements for processing presidential electronic records, such as rapid ingestion of records and the ability to search content. NARA believed that if it could not ingest the Bush records in a way that supported search and retrieval immediately after the transition, it risked not being able to effectively respond to requests from Congress, the new administration, and the courts for these records—a critical agency mission. As we reported earlier this year, NARA certified that it achieved initial operating capability for Increment 1 in June 2008, following its revised plan. According to NARA’s 2010 expenditure plan, this increment cost $80.45 million to deliver, compared to a planned cost of $60.62 million. NARA also reported that it completed Increment 2 on time in December 2008 at a cost of $10.4 million (compared to a planned cost of $11.1 million). However, it was not functioning as intended because of delays in ingesting records into the system. Specifically, before the transition, NARA had estimated that the Bush electronic records would be fully ingested into EOP, where they would be available for search and retrieval, by May 2009. However, as of April 27, 2009, only 2.3 terabytes of data were fully ingested into the EOP system. This constituted about 3 percent of all Bush Administration unclassified electronic records. NARA later estimated that ingest of all 78.4 terabytes of unclassified records would not be complete until October 2009. In its recently released 2010 expenditure plan, NARA reported that the Bush records were fully ingested into EOP by September 2009. NARA officials attributed EOP ingest delays, in part, to unexpected difficulties. For example, according to NARA officials, once they started using the EOP system, they discovered that records from certain White House systems were not being extracted in the expected format. As a result, the agency had to develop additional software tools to facilitate the full extraction of data from White House systems prior to ingest into EOP. In addition, in April 2009, NARA discovered that 31 terabytes of priority data that had been partially ingested between December 2008 and January 2009 were neither complete nor accurate because they were taken from an incomplete copy of the source system. Because the records had not been ingested into the EOP system, NARA had to use other systems to respond to requests for presidential records early in 2009. As of April 24, 2009, NARA had received 43 special access requests for information on the Bush Administration. Only one of these requests used EOP for search, and no responsive records were found. To respond to 24 of these requests, NARA used replicated systems based on the software and related hardware used by the White House for records and image management. NARA’s current expenditure plan reports that after completing ingest of the Bush electronic records in September 2009, it retired the replicated systems. In fiscal year 2010, NARA plans to complete Increment 3 and begin work on Increment 4.
According to its 2010 expenditure plan, Increment 3 will cost $42.2 million and be completed in the fourth quarter of fiscal year 2010. It is to provide new systems for congressional records and public access, as well as improvements to the existing base system and the incorporation of several deferred functions, such as the ability to delete records and search and view their descriptions. Fiscal year 2010 work on Increment 4 is to consist primarily of early planning, analysis, and design. Despite the recent completion of the first two ERA increments, NARA faces several risks that could limit its ability to successfully complete the remaining three increments by 2012. These risks include the lack of specific plans describing the functions to be delivered in future increments, inconsistent application of earned value management (a key management technique), and the lack of a tested contingency plan for the ERA system. First, NARA's plans for ERA have lacked sufficient detail. For several years, NARA's appropriations statute has required it to submit an expenditure plan to congressional appropriations committees before obligating multi-year funds for the ERA program, and to, among other conditions, have the plan reviewed by GAO. These plans are to include a sufficient level and scope of information for Congress to understand what system capabilities and benefits are to be delivered, by when and at what costs, and what progress is being made against the commitments that were made in prior expenditure plans. However, several of our reviews have found that NARA's plans lacked sufficient detail. Most recently, we reported in July that NARA's 2009 plan did not clearly show what functions had been delivered to date or what functions were to be included in future increments and at what cost. For example, the fiscal year 2009 plan did not specifically identify the functions provided in the two completed increments. In addition, while the plan discussed the functions deferred to later increments, it did not specify the cost of adding those functions at a later time. Additionally, NARA's 2009 plan lacked specifics about the scope of improvements planned for Increment 3. For example, it described one of the improvements as "extend storage capacity" but did not specify the amount of extended storage to be provided. Also, NARA's plan did not specify when these functions would be completed or how much they would cost. NARA officials attributed the plan's lack of specificity to ongoing negotiations with Lockheed Martin. Another risk is NARA's inconsistent use of earned value management (EVM). NARA's 2009 expenditure plan stated that, in managing ERA, the agency used EVM tools and required the same of its contractors. EVM, if implemented appropriately, can provide objective reports of project status, produce early warning signs of impending schedule delays and cost overruns, and provide unbiased estimates of a program's total costs. We recently published a set of best practices on cost estimation that addresses the use of EVM. Comparing NARA's EVM data to those practices, we determined that NARA fully addressed only 5 of the 13 practices. For example, we found weaknesses within the EVM performance reports, including contractor reports of funds spent without work scheduled or completed, and work completed and funds spent where no work was planned. In addition, the program had not recently performed an integrated cost-schedule risk analysis.
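To illustrate the kind of objective status information EVM is intended to produce, the following is a minimal sketch of the standard EVM calculations: cost and schedule variances, performance indices, and an estimate at completion. The function name and dollar figures are illustrative assumptions only; they are not ERA program data or a prescribed GAO methodology.

# Minimal illustrative sketch of standard earned value management (EVM) calculations.
# All figures are hypothetical examples; they are not ERA program data.

def evm_metrics(planned_value, earned_value, actual_cost, budget_at_completion):
    """Return basic EVM status indicators for a reporting period (values in millions)."""
    cost_variance = earned_value - actual_cost        # negative means over cost
    schedule_variance = earned_value - planned_value  # negative means behind schedule
    cpi = earned_value / actual_cost                  # cost performance index
    spi = earned_value / planned_value                # schedule performance index
    # A common estimate at completion assumes current cost efficiency continues.
    estimate_at_completion = budget_at_completion / cpi
    return {
        "cost variance": cost_variance,
        "schedule variance": schedule_variance,
        "CPI": cpi,
        "SPI": spi,
        "estimate at completion": estimate_at_completion,
    }

# Example: $60 million of work planned to date, $50 million earned,
# $70 million actually spent, against a $100 million total budget.
print(evm_metrics(planned_value=60.0, earned_value=50.0,
                  actual_cost=70.0, budget_at_completion=100.0))

In this hypothetical example, the cost performance index of about 0.71 signals a cost overrun, and the estimate at completion rises to roughly $140 million against a $100 million budget, which is the type of early warning EVM is intended to provide.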
An integrated cost-schedule risk analysis provides an estimate of how much the program will cost upon completion and can be compared to the estimate derived from EVM data to determine if it is likely to be sound. NARA officials attributed these weaknesses, in part, to documentation that did not accurately reflect the program's current status.

Another significant risk is the lack of a contingency plan for ERA. Contingency planning is a critical component of information protection. If normal operations are interrupted, network managers must be able to detect, mitigate, and recover from service disruptions while preserving access to vital information. Therefore, a contingency plan details emergency response, backup operations, and disaster recovery for information systems. Federal guidance recommends 10 security control activities related to contingency planning, including developing a formal contingency plan, training employees on their contingency roles and responsibilities, and identifying a geographically separate alternative processing site to support critical business functions in the event of a system failure or disruption. An internal NARA review found weaknesses in all 10 of the required contingency planning control activities for ERA. As of April 2009, NARA had plans to address each weakness, but had not yet addressed 10 of the 11 weaknesses. In addition, NARA reported that the backup and restore functions for the commercial off-the-shelf archiving product used at the ERA facility in West Virginia tested successfully, but there were concerns about the amount of time required to execute the process. In lab tests, the restore process took about 56 hours for 11 million files. This is significant because, while the backup is being performed, the replication of data must be stopped; otherwise it could bring the system to a halt. Subsequently, NARA officials stated that they had conducted two successful backups, but the restore process had not been fully tested to ensure that the combined backup and restore capability can be successfully implemented.

To help mitigate the risks facing the ERA program, we previously recommended that NARA, among other things:
● include more details in future ERA expenditure plans on the functions and costs of completed and planned increments;
● strengthen its earned value management process following best practices; and
● develop and implement a system contingency plan for ERA.

In its 2010 expenditure plan, NARA reported that it had taken action to address our recommendations. For example, NARA reported that a test of the ERA contingency plan was completed on August 5, 2009, and the plan itself finalized on September 16, 2009. We have not yet fully reviewed this plan or the results of the reported test. However, if NARA fully implements our recommendations, we believe the risks can be significantly reduced.

In summary, despite earlier delays, NARA has made progress in developing the ERA system, including the transfer of Bush administration electronic records. However, future progress could be at risk without more specific plans describing the functions to be delivered and the cost of developing those functions, which is critical for the effective monitoring of the cost, schedule, and performance of the ERA system. Similarly, inconsistent use of key project management disciplines like earned value management would limit NARA's ability to effectively manage this project and accurately report on its progress. Mr. Chairman, this concludes my testimony today.
I would be happy to answer any questions you or other members of the subcommittee may have. If you or your staff have any questions about matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or pownerd@gao.gov. The other key contributor to this testimony was James R. Sweetman, Jr., Assistant Director. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Since 2001, the National Archives and Records Administration (NARA) has been working to develop a modern Electronic Records Archive (ERA) system, a major information system that is intended to preserve and provide access to massive volumes of all types and formats of electronic records. The system is being developed incrementally over several years, with the first two pieces providing an initial set of functions and additional capabilities to be added in future increments. NARA plans to deploy full system functionality by 2012 at an estimated life-cycle cost of about $550 million. NARA originally planned to complete the first segment of ERA in September 2007. However, software and contracting problems led the agency and its contractor Lockheed Martin to revise the development approach. The revised plan called for parallel development of two different increments: a "base" ERA system with limited functionality and an Executive Office of the President (EOP) system to support the ingestion and search of records from the outgoing Bush Administration. GAO was asked to summarize NARA's progress in developing the ERA system and the ongoing risks the agency faces in completing it. In preparing this testimony, GAO relied on its prior work and conducted a preliminary review of NARA's fiscal year 2010 ERA expenditure plan. NARA has completed two of five planned increments of ERA, but has experienced schedule delays and cost overruns, and several functions planned for the system's initial release were deferred. Although NARA initially planned for the system to be capable of ingesting federal and presidential records in September 2007, the two system increments to support those records did not achieve initial operating capability until June 2008 and December 2008, respectively. In addition, NARA reportedly spent about $80 million on the base increment, compared to its planned cost of about $60 million. Finally, a number of functions originally planned for the base increment were deferred to later increments, including the ability to delete records and to ingest redacted records. In fiscal year 2010, NARA plans to complete the third increment, which is to include new systems for Congressional records and public access, and begin work on the fourth. GAO's previous work on ERA identified significant risks to the program and recommended actions to mitigate them. Specifically, GAO reported that NARA's plans for ERA lacked sufficient detail to, for example, clearly show what functions had been delivered to date or were to be included in future increments and at what cost. Second, NARA had been inconsistent in its use of earned value management (EVM), a project management approach that can provide objective reports of project status and early warning signs of cost and schedule overruns. 
Specifically, GAO found that NARA fully employed only 5 of 13 best practices for cost estimation that address EVM. Further, NARA lacked a contingency plan for ERA to ensure system continuity in the event that normal operations were disrupted. For example, NARA did not have a fully functional backup and restore process for the ERA system, a key component of contingency planning for system availability. To help mitigate these risks, GAO recommended that NARA: (1) include details in future ERA expenditure plans on the functions and costs of completed and planned increments; (2) strengthen its earned value management process following best practices; and (3) develop and implement a system contingency plan for ERA. NARA reported in its most recent expenditure plan that it had taken actions to address these recommendations. |
Of the four agencies that received over $40 billion in funding for science-related activities under the Recovery Act, DOE received the largest amount of funds. Table 1 shows Recovery Act funding, obligations, and expenditures for these agencies. Of the $35.2 billion it received under the Recovery Act for science-related projects and activities, DOE reported that it had obligated $34.6 billion (98 percent) and spent $18.9 billion (54 percent) as of September 30, 2011. This is an increase from March 10, 2011, when DOE reported that it had obligated $33.1 billion and spent $12.5 billion. Table 2 shows Recovery Act funding, obligations, and expenditures for DOE's program offices.

Our Recovery Act recommendations have focused primarily on the following four DOE programs and projects:
● The Energy Efficiency and Conservation Block Grant (EECBG) program, which provides grants to states, territories, tribes, and local communities for projects that improve energy efficiency, reduce energy use, and reduce fossil fuel emissions.
● The Office of Environmental Management, which cleans up contaminated sites across the country where decades of nuclear weapons research, development, and production left a legacy of dangerously radioactive, chemical, and other hazardous wastes.
● The Loan Guarantee Program (LGP), which guarantees loans for energy projects that (1) use either new or significantly improved technologies as compared with commercial technologies already in use in the United States and (2) avoid, reduce, or sequester emissions of air pollutants or man-made greenhouse gases.
● The Weatherization Assistance Program, which enables low-income families to reduce their utility bills by making long-term energy-efficiency improvements to their homes by, for example, installing insulation, sealing leaks, and modernizing heating or air conditioning equipment.

Table 3 shows Recovery Act funding, obligations, and expenditures for these DOE programs as of September 30, 2011.

The Recovery Act provided about $3.2 billion for DOE's EECBG program, funding the program for the first time since it was authorized in the Energy Independence and Security Act (EISA) of 2007. DOE awarded this funding as follows:
● About $1.94 billion as formula grants to more than 2,000 local communities—including cities, counties, and tribal communities.
● About $767 million as formula grants to the states, five territories, and the District of Columbia.
● About $40 million for Administrative and Training/Technical Assistance.
● About $453 million through competitive grants to local communities.

Our April 2011 report on the EECBG program focused on the approximately $2.7 billion awarded through formula grants. In that report, we found that more than 65 percent of EECBG funds had been obligated for three types of activities: (1) energy-efficiency retrofits (36.8 percent), which includes activities such as grants to nonprofit organizations and governmental agencies for retrofitting their existing facilities to improve energy efficiency; (2) financial incentive programs (18.5 percent), which includes activities such as rebates, subgrants, and revolving loans to promote recipients' energy-efficiency improvements; and (3) energy-efficiency and conservation programs for buildings and facilities (9.8 percent), which includes activities such as installing storm windows or solar hot water technology. We also found that DOE did not always collect information on the various methods that recipients use to monitor contractors and subrecipients.
As a result, DOE does not always know whether the monitoring activities of recipients are sufficiently rigorous to ensure compliance with federal requirements. In addition, DOE officials have experienced challenges in assessing the extent to which the EECBG program is reducing energy use and increasing energy savings. Most recipients report estimates to comply with program reporting requirements, and DOE takes steps to assess the reasonableness of these estimates but does not require recipients to report the methods or tools used to develop estimates. In addition, while DOE provides recipients with a software tool to estimate energy savings, DOE does not require that recipients use the most recent version. Based on these findings, we recommended that DOE (1) explore a means to capture information on recipients' monitoring activities and (2) solicit information on recipients' methods for estimating energy-related impact metrics and verify that recipients who use DOE's estimation tool use the most recent version. DOE generally agreed with our recommendations and has taken steps to implement them. DOE took action on our first recommendation by collecting additional information related to subrecipient monitoring, in order to help ensure that they comply with the terms and conditions of the award. These changes will help improve DOE's oversight of recipients. DOE implemented our second recommendation by making changes to the way it collects data to apply a unified methodology to the calculation of impact metrics. DOE officials also said the calculation of estimated impact metrics will now be performed centrally by DOE by applying known national standards to existing recipient-reported performance metrics.

The Recovery Act provided about $6 billion to expand and accelerate cleanup activities at numerous contaminated sites across the country. This funding substantially boosted the Office of Environmental Management's annual appropriation for cleanup, which has generally been between $6 billion and $7 billion. As of September 30, 2011, DOE had obligated all of the $6 billion in Recovery Act funding. DOE officials told us that they planned to have 92 percent of the funds spent by September 30, 2011, and DOE had expended about 88 percent (nearly $5.3 billion) by that time. As of May 2011, DOE had selected 109 projects for Recovery Act funding at 17 DOE sites in 12 states. DOE designated 80 percent of this funding to speed cleanup activities at four large sites: the Hanford Site in Washington State, Idaho National Laboratory, the Oak Ridge Reservation in Tennessee, and the Savannah River Site in South Carolina. DOE generally chose to use Recovery Act funds for cleanup projects that could be started and finished quickly. The majority of the projects selected also had existing contracts, which allowed the department to update and validate new cost and schedule targets within a short time frame. DOE generally funded four types of projects: (1) decontaminating or demolishing facilities, (2) removing contamination from soil and groundwater, (3) packaging and disposing of transuranic and other wastes, and (4) supporting the maintenance and treatment of liquid tank wastes. According to DOE officials, as of the end of May 2011, DOE had completed 28 Recovery Act projects. In July 2010, we reported that DOE has faced challenges in both managing Recovery Act projects and measuring how Recovery Act funding has affected cleanup and other goals.
Specifically, in that report (GAO-10-784), we found that one-third of Recovery Act-funded environmental cleanup projects did not meet cost and schedule targets, which DOE attributed to technical, regulatory, safety, and contracting issues. DOE took steps aimed at strengthening project management and oversight for Recovery Act projects, such as increasing project reporting requirements and placing tighter controls on when funds are disbursed to sites. By October 2010, DOE had made improvements in both cost and schedule performance.

We also identified challenges in measuring how Recovery Act spending has affected cleanup goals. First, DOE's measures of cleanup progress, such as footprint reduction, which DOE officials define as the "physical completion of activities with petition for regulatory approval to follow," could provide very different and potentially misleading information. Second, DOE had not yet developed a clear means of measuring how cleanup work funded by the act would affect environmental risk or the land and facilities requiring DOE cleanup. Third, it is unclear to what extent Recovery Act funding will reduce the costs of cleaning up the DOE sites over the long term. DOE's estimate of $4 billion in life-cycle cost savings resulting from Recovery Act funding was not calculated in accordance with Office of Management and Budget's guidance on benefit-cost analysis or DOE's guidance on life-cycle cost analysis. Our analysis indicated that those savings could be 80 percent less than DOE estimated. Without clear and consistent measures, it will be difficult to say whether or how Recovery Act funding has affected DOE's cleanup goals.

In February 2009, the Recovery Act amended the LGP, authorizing DOE to also guarantee loans for some projects using commercial technologies. Projects supported by the Recovery Act must employ renewable energy systems, electric power transmission systems, or leading-edge biofuels that meet certain criteria; begin construction by the end of fiscal year 2011; and pay wages at or above market rates. The Recovery Act originally provided nearly $6 billion to cover the credit subsidy costs for projects meeting those criteria, but Congress subsequently approved a reduction of $3.5 billion of this funding to be used for other purposes. According to our analysis of DOE data, as of September 30, 2011, DOE's LGP had obligated about 78 percent of the remaining $2.5 billion in Recovery Act funds, leaving $552 million unobligated. The Recovery Act required that borrowers begin construction of their projects by September 30, 2011, to receive funding, and the unobligated funds expired and are no longer available to DOE.
Consequently, we reported that DOE’s program management could improve its ability to evaluate and implement the LGP by implementing the following four recommendations: (1) develop relevant performance goals that reflect the full range of policy goals and activities for the program, and to the extent necessary, revise the performance measures to align with these goals; (2) revise the process for issuing loan guarantees to clearly establish what circumstances warrant disparate treatment of applicants; (3) develop an administrative appeal process for applicants who believe their applications were rejected in error and document the basis for conclusions regarding appeals; and (4) develop a mechanism to systematically obtain and address feedback from program applicants and, in so doing, ensure that applicants’ anonymity can be maintained. In response to our recommendations, DOE stated that it recognizes the need for continuous improvement to its LGP as those programs mature but neither explicitly agreed nor disagreed with our recommendations. In one instance, DOE specifically disagreed with our findings: the department maintained that applicants are treated consistently within solicitations. Nevertheless, the department stated that it is taking steps to address concerns identified in our report. For example, with regard to appeals, DOE indicated that its process for rejected applications should be made more transparent and stated that the LGP continues to implement new strategies intended to reduce the need for any kind of appeals, such as enhanced communication with applicants and allowing applicants an opportunity to provide additional data to address deficiencies DOE has identified in applications. DOE directly addressed our fourth recommendation by creating a mechanism in September 2010 for submitting feedback—including anonymous feedback—through its website. We tested the mechanism and were satisfied that it worked. We have an ongoing mandate under the 2007 Revised Continuing Appropriations Resolution to review DOE’s execution of the LGP and to report our findings to the House and Senate Committees on Appropriations. We are currently conducting ongoing work looking at the LGP, which will examine the status of the applications to the LGP’s nine solicitations and will assess the extent to which has DOE adhered to its process for reviewing loan guarantees for loans to which DOE has closed or committed. We expect to issue a report on LGP in early 2012. The Recovery Act provided $5 billion for the Weatherization Assistance Program, which DOE is distributing to each of the states, the District of Columbia, five territories, and two Indian tribes. The $5 billion in funding provided by the Recovery Act represents a significant increase for a program that has received about $225 million per year in recent years. During 2009, DOE obligated about $4.73 billion of the $5 billion in Recovery Act weatherization funding to recipients, while retaining the remaining funds to cover the department’s expenses. Initially, DOE provided each recipient with the first 10 percent of its allocated funds, which could be used for start-up activities, such as hiring and training staff, purchasing equipment, and performing energy audits of homes. Before a recipient could receive the next 40 percent, DOE required it to submit a plan for how it would use its Recovery Act weatherization funds. 
By the end of 2009, DOE had approved the weatherization plans of all 58 recipients and had provided all recipients with half of their funds. In our May 2010 report, we found that although weatherizing multifamily buildings can improve production numbers quickly, state and local officials have found that expertise with multifamily projects is limited and that they lack the technical expertise for weatherizing large multifamily buildings. We also found that state agencies are not consistently dividing weatherization costs for multifamily housing with landlords. In addition, we found that determination and documentation of client income eligibility varies between states and local agencies and that DOE allows applicants to self-certify their income. We also found that DOE has issued guidance requiring recipients of Recovery Act weatherization funds to implement a number of internal controls to mitigate the risk of fraud, waste, and abuse, but that the internal controls to ensure local weatherization agencies comply with program requirements are applied inconsistently.

In our May 2010 report, we made eight recommendations to DOE to clarify its weatherization guidance and production targets. DOE generally concurred with the recommendations, has fully implemented two of them, and has taken some steps to address a third. For example, we recommended that DOE develop and clarify weatherization program guidance that considers and addresses how the weatherization program guidance is impacted by the introduction of increased amounts of multifamily units. DOE has issued several guidance documents addressing multifamily buildings that, among other things, provide guidance on conducting energy audits on multifamily units. We also recommended that DOE develop and clarify weatherization program guidance that establishes best practices for how income eligibility should be determined and documented and that does not allow the self-certification of income by applicants to be the sole method of documenting income eligibility. In response to our recommendation, DOE issued guidance that clarified the definition of income and strengthened income eligibility requirements. For example, the guidance clarified that self-certification of income would only be allowed after all other avenues of documenting income eligibility are exhausted. Additionally, for individuals to self-certify income, a notarized statement indicating the lack of other proof of income is required. Finally, DOE agreed with our recommendation that it have a best practice guide for key internal controls, but DOE officials stated that there were sufficient documents in place to require internal controls, such as the grant terms and conditions and a training module, and that because the guidance is located on the website, a best practice guide would be redundant. Therefore, DOE officials stated that they do not intend to fully implement our recommendation. Nonetheless, DOE distributed a memorandum dated May 13, 2011, to grantees reminding them of their responsibilities to ensure compliance with internal controls and the consequences of failing to do so. We will continue to monitor DOE's progress in implementing the remaining recommendations.
We expect to issue a report by early 2012 on the use of Recovery Act funds for the Weatherization Assistance Program and the extent to which program recipients are meeting Recovery Act and program goals, such as job creation and energy and cost savings, as well as the status of DOE's response to our May 2010 recommendations.

Of the over $1.4 billion Commerce received under the Recovery Act for science-related projects and activities, Commerce reported that it had obligated nearly all of it (98 percent) and spent $894 million (62 percent) as of September 30, 2011. Table 6 shows Recovery Act funding, obligations, and expenditures for Commerce. As part of our February 2010 report, we found that some recipients of Recovery Act grants from Commerce's National Institute of Standards and Technology had to delay or recast certain scheduled engineering or construction-related activities to fully understand, assess, and comply with the Recovery Act reporting and other requirements. In contrast, Commerce's National Oceanic and Atmospheric Administration officials said federal requirements did not impact the processing of Recovery Act acquisitions.

Of the $1 billion NASA received under the Recovery Act for science-related projects and activities, NASA reported that it had obligated nearly $1 billion (100 percent) and spent $948 million (95 percent) as of September 30, 2011. Table 4 shows Recovery Act funding, obligations, and expenditures for NASA. In a March 2009 report, we found that NASA's large-scale projects had experienced significant cost and schedule growth, but the agency had undertaken an array of initiatives aimed at improving program management, cost estimating, and contractor oversight. However, we also noted that until these practices became integrated into NASA's culture, it was unclear whether funding would be well spent and whether the achievement of NASA's mission would be maximized. In our most recent update of that report, we found that, although cost and schedule growth remained an issue, Recovery Act funding enabled NASA to mitigate the impact of cost increases being experienced on some projects and to address problems being experienced by other projects. In several cases, NASA took advantage of the funding to build additional knowledge about technology or design before key milestones. In our July 2010 report, we reviewed NASA's, as well as other agencies', use and oversight of noncompetitive contracts awarded under the Recovery Act. We found that most of the funds that NASA had obligated under Recovery Act contract actions, about 89 percent, were obligated on existing contracts. We found that officials at several agencies said the use of existing contracts allowed them to obligate funds quickly. Of the funds NASA obligated for new actions, over 79 percent were obligated on contracts that were competed. We also found that NASA undertook efforts to provide oversight and transparency of Recovery Act-funded activities. For example, NASA issued guidance to the procurement community on the implementation of the Recovery Act, prohibited the commingling of funds, and increased reporting to senior management.

Of the $3 billion it received under the Recovery Act for projects and activities, NSF reported that it had obligated nearly all of the $3 billion (almost 100 percent) and spent $1.4 billion (46 percent) as of September 30, 2011. Table 5 shows Recovery Act funding, obligations, and expenditures for NSF.
In our October 2010 report, we reviewed the effectiveness of new and expanded activities authorized by the America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science Act of 2007 (America COMPETES Act). The act authorized NSF's Science Master's Program, later funded by the Recovery Act. This program, along with 24 new programs and 20 existing programs, was funded to increase federal investment in basic scientific research and science, technology, engineering, and mathematics (STEM) education in the United States. The Science Master's Program awarded 21 grants in fiscal year 2010, totaling $14.6 million. We found that evaluating the effectiveness of federal basic research and STEM education programs such as those authorized by the act can be inherently difficult. We also found that NSF was taking steps to evaluate the long-term effectiveness of its funded projects. As part of its broader initiative to pilot and review new approaches to the evaluation of its programs, NSF developed goals and metrics for activities in its education portfolio to reflect its increased expectations for evaluation of its funded projects.

Chairman Broun, Ranking Member Tonko, and Members of the Subcommittee, this completes my prepared statement. As noted, we are continuing to monitor agencies' use of Recovery Act funds and implementation of programs. I would be happy to respond to any questions you may have at this time. For further information regarding this testimony, please contact me at (202) 512-3841. Tanya Doriss, Kim Gianopoulos, Carol Kolarik, Holly Sasso, Ben Shouse, and Jeremy Williams made key contributions to this testimony.

Recovery Act Education Programs: Funding Retained Teachers, but Education Could More Consistently Communicate Stabilization Monitoring Issues. GAO-11-804. Washington, D.C.: September 2011.
Recovery Act: Status of Department of Energy's Obligations and Spending. GAO-11-483T. Washington, D.C.: March 17, 2011.
Recovery Act: Energy Efficiency and Conservation Block Grant Recipients Face Challenges Meeting Legislative and Program Goals and Requirements. GAO-11-379. Washington, D.C.: April 2011.
NASA: Assessments of Selected Large-Scale Projects. GAO-11-239SP. Washington, D.C.: March 3, 2011.
Recovery Act: Opportunities to Improve Management and Strengthen Accountability over States' and Localities' Uses of Funds. GAO-10-999. Washington, D.C.: September 2010.
Recovery Act: Contracting Approaches and Oversight Used by Selected Federal Agencies and States. GAO-10-809. Washington, D.C.: July 15, 2010.
Recovery Act: Most DOE Cleanup Projects Appear to Be Meeting Cost and Schedule Targets, but Assessing Impact of Spending Remains a Challenge. GAO-10-784. Washington, D.C.: July 2010.
Department of Energy: Further Actions Are Needed to Improve DOE's Ability to Evaluate and Implement the Loan Guarantee Program. GAO-10-627. Washington, D.C.: July 2010.
Recovery Act: States' and Localities' Uses of Funds and Actions Needed to Address Implementation Challenges and Bolster Accountability. GAO-10-604. Washington, D.C.: May 2010.
Recovery Act: Increasing the Public's Understanding of What Funds Are Being Spent on and What Outcomes Are Expected. GAO-10-581. Washington, D.C.: May 27, 2010.
Recovery Act: Factors Affecting the Department of Energy's Program Implementation. GAO-10-497T. Washington, D.C.: March 4, 2010.
Recovery Act: Project Selection and Starts Are Influenced by Certain Federal Requirements and Other Factors. GAO-10-383. Washington, D.C.: February 10, 2010.
Recovery Act: GAO's Efforts to Work with the Accountability Community to Help Ensure Effective and Efficient Oversight. GAO-09-672T. Washington, D.C.: May 5, 2009.
American Recovery and Reinvestment Act: GAO's Role in Helping to Ensure Accountability and Transparency for Science Funding. GAO-09-515T. Washington, D.C.: March 19, 2009.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The American Recovery and Reinvestment Act of 2009 (Recovery Act) is intended to preserve and create jobs and promote economic recovery, among other things. The Congressional Budget Office estimated in 2011 that the Recovery Act would cost $840 billion, including more than $40 billion in science-related activities at the Department of Energy (DOE), Department of Commerce, the National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF). These activities support fundamental research, demonstrate and deploy advanced energy technologies, purchase scientific instrumentation and equipment, and construct or modernize research facilities. The Recovery Act assigned GAO a range of responsibilities, such as bimonthly reviews of how selected states and localities used funds, including for science-related activities. This statement updates the status of science-related Recovery Act funding for DOE, Commerce, NASA, and NSF and provides the status of prior recommendations from GAO's Recovery Act reports. This testimony is based on prior GAO work updated with agency data as of September 30, 2011. As of September 30, 2011, DOE, Commerce, NSF, and NASA had obligated about 98 percent of the more than $40 billion appropriated for science-related activities identified at those agencies. They had spent $22 billion, or 54 percent of appropriated funds. DOE received the majority of this funding, and the four agencies vary in the amount of Recovery Act funds they have obligated and spent for their programs, as well as the challenges they have faced in implementing the Recovery Act. For example: 1) Loan Guarantee Program for Innovative Technologies. As of September 30, 2011, DOE had obligated about 78 percent of the nearly $2.5 billion provided for this program, which among other things guarantees loans for projects using new or significantly improved technologies as compared with commercial technologies already in use in the United States, and reported spending about 15 percent of those funds. In a July 2010 report (GAO-10-627), GAO made four recommendations for DOE to improve its evaluation and implementation of the program. DOE has begun to take steps to address the recommendations but has not fully addressed them, and GAO continues to believe DOE needs to make improvements to the program. 2) Weatherization Assistance Program. As of September 30, 2011, DOE had obligated the full $5 billion of Recovery Act funding provided for the Weatherization Assistance Program, which enables low-income families to reduce their utility bills by making long-term energy-efficiency improvements to their homes, and reported spending about 72 percent of those funds.
In a May 2010 report (GAO-10-604), GAO made eight recommendations to DOE to clarify guidance and production targets. To date, DOE has implemented two of those recommendations: (1) it issued guidance on multi-family buildings and (2) clarified the definition of income and strengthened income eligibility requirements. 3) Commerce, NASA, and NSF. As of September 30, 2011, Commerce, NASA, and NSF each had obligated nearly all of their science-related Recovery Act funding. Commerce spent about 62 percent, NASA spent about 95 percent, and NSF spent about 46 percent of this funding. GAO has reported several times on the use of these funds and the challenges agencies faced. In a February 2010 report (GAO-10-383), GAO found that some recipients of Commerce's Recovery Act grants faced challenges complying with Recovery Act reporting and other federal requirements and had to delay or recast certain scheduled activities as a result. In a March 2009 report (GAO-09-306SP), GAO found that NASA's large-scale projects, including those that received Recovery Act funds, had experienced significant cost and schedule delays. In a March 2011 report (GAO-11-239SP), GAO found that Recovery Act funds allowed NASA to reduce the impact of cost increases on some projects and to address problems being experienced by others. In an October 2010 report (GAO-11-127R), GAO found that NSF's program to increase investment in science, technology, engineering, and mathematics education took steps to evaluate the long-term effectiveness of its projects and developed goals and metrics for that evaluation.
To identify steps Chrysler and GM have taken since December 2008 to reorganize, we reviewed information on the companies’ finances and operations, including financial statements, select documents from their bankruptcy proceedings, and company-provided data, and interviewed representatives of the companies. To determine how Treasury will monitor its financial interests in Chrysler and GM, we reviewed transaction documents related to the restructuring of Chrysler and GM that Treasury was a party to, such as the secured credit agreements and shareholders’ agreements, which set forth Treasury’s rights with regard to the companies and certain requirements the companies must comply with. We also reviewed information on Treasury’s plans for overseeing its ownership interests in the companies, including White House and Treasury press releases, and testimony statements. In addition, we interviewed officials from Treasury’s Office of Financial Stability (OFS), which was established to administer TARP, about their plans to monitor the government’s financial interests, including Treasury’s enforcement of the reporting requirements that were established for Chrysler and GM. We did not, however, independently verify the processes and procedures Treasury has established to monitor and enforce the reporting requirements. To identify important considerations for Treasury in monitoring and determining how and when to sell its equity in Chrysler and GM, we conducted a review of the academic literature on government ownership of private entities, including both domestic and international cases of private equity investments, privatization, and nationalization, and reviewed analyses of the potential future value of Chrysler and GM and Treasury’s equity stake. We also interviewed individuals with expertise in the financial condition of domestic automakers, principles of corporate restructuring, and government ownership of private entities. The financial and business experts whose opinions are represented in this report were selected from a list of experts identified for us by the National Academy of Sciences (NAS) for our earlier report on challenges facing Chrysler and GM. Of the panel of experts we interviewed for that report, we contacted a subset whose expertise was particularly relevant to structuring an exit strategy. In addition to individuals identified by NAS, we spoke with individuals NAS experts themselves identified as being knowledgeable in this area. We also added two experts with investment experience specifically in the auto industry. We chose experts in government management of investments in private companies by identifying former federal government officials who were involved in well-known cases of government assistance to private entities, such as the federal assistance provided to Chrysler in 1979. We conducted individual semistructured interviews with these individuals, both in person and by telephone. Once this review was completed, we analyzed the content of the literature and the interviews for recurring themes and summarized these common results. A list of the individuals we spoke with is provided in appendix I. We conducted this performance audit from August 2009 to November 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Treasury's decision to provide substantial amounts of funding to the auto industry—more than 12 percent of the TARP funds authorized to date—and to accept equity in the companies as a form of repayment for a portion of the assistance reflects Treasury's view of the importance of the industry to the financial health of the United States as a whole. The auto industry—including automakers, dealerships, and automotive parts suppliers—contributes substantially to the U.S. economy by, for example, directly employing about 1.7 million people, according to industry and government data. To help stabilize this industry and avoid economic disruptions, Treasury authorized $81.1 billion through the Automotive Industry Financing Program (AIFP) from December 2008 through June 2009 for the following purposes.
● Funding to support automakers during restructuring. Treasury has provided financial assistance to Chrysler and GM to support their restructuring as they attempt to return to profitability. This assistance was provided in loans and equity investments in the companies.
● Auto Supplier Support Program. Under this program, Chrysler and GM received funding for the purpose of ensuring payment to suppliers. The program was designed to ensure that automakers receive the parts and components they need to manufacture vehicles and that suppliers have access to liquidity on their receivables.
● Warranty Commitment Program. This program was designed to mitigate consumer uncertainty about purchasing vehicles from the restructuring automakers by providing funding to guarantee the warranties on new vehicles purchased from them. Funds were provided to Chrysler and GM under this program but have been repaid in full because both were able to continue to honor consumer warranties.
● Funding to support automotive finance companies. Treasury has provided funding to support Chrysler Financial and GMAC LLC, financial services companies whose businesses include providing consumer financing for vehicle purchases and dealer financing for inventory. Chrysler Financial is following Treasury's directive to liquidate its business and is planning to wind down its operations by the end of 2011. GMAC has agreed to provide Chrysler customers and dealers with financing for retail and wholesale purchases.

Table 1 provides information on the funding levels Treasury authorized under AIFP, the amounts Chrysler, GM, and the finance companies have repaid, and Treasury's plans to be repaid or otherwise compensated for the outstanding funds. Treasury officials have said the agency does not intend to provide more funding to Chrysler or GM. As a condition of the initial federal financial assistance provided in December 2008 and January 2009, the Bush Administration required that Chrysler and GM develop restructuring plans that would, among other things, identify how the companies plan to achieve and sustain long-term financial viability. President Obama rejected the restructuring plans that Chrysler and GM submitted in February 2009, and required the companies to develop more aggressive plans. After reviewing the revised plans, the President announced in April 2009 and June 2009 that the government would provide additional financial assistance to support Chrysler's and GM's restructuring efforts, respectively. To effectuate the restructuring plans, both companies filed voluntary petitions for reorganization under Chapter 11 of the U.S. Bankruptcy Code.
Through the bankruptcy process, the newly organized Chrysler and GM purchased substantially all of the operating assets of the old companies. In June 2009 and July 2009, respectively, the new Chrysler and new GM emerged from the bankruptcy process with substantially less debt and with streamlined operations. The old companies, which retained very few assets but most of the liabilities, remain in bankruptcy, where their remaining liabilities are being dealt with. These liabilities include a portion of the loans Treasury provided to the companies prior to bankruptcy in the amounts of $5.4 billion for Chrysler and $986 million for GM. As noted, Treasury has stated that it has no plans to provide additional assistance to Chrysler and GM. Figure 1 describes other key events in the funding and restructuring of the auto companies. Since the condition of the domestic auto industry first came to the forefront of national attention in December 2008, Chrysler and GM have made changes to address key challenges to achieving viability, but the effect that these actions will have on the companies remains to be seen. As we previously reported, a number of operational and financial challenges stand in the way of Chrysler’s and GM’s return to profitability. Some of these challenges are beyond the companies’ control, such as current economic conditions and limited credit availability. However, other factors the companies can exert more control over include the companies’ debt levels, dealership networks, and production costs and capacity. Aided by substantial government assistance and bankruptcy reorganization, they have begun to address a number of these challenges. Although the companies’ restructuring efforts started before receiving government assistance under TARP, our analysis focuses on the period between first receiving TARP assistance (around the end of 2008) and after bankruptcy reorganization (June 2009 and July 2009 for Chrysler and GM, respectively). The following are some key challenges that Chrysler and GM have begun to address. Reducing debt. Through the bankruptcy process, Chrysler and GM eliminated a substantial amount of their long-term financial liabilities, including debt owed to bank lenders and bondholders. In our previous work, we discussed the importance of reducing debt for companies to achieve long-term viability. By reducing the amount the companies pay in interest expense, cash flow is improved, freeing up more money for research and development and other activities that can help the businesses prosper. The precise amount of the companies’ total debt reduction is not known because the value of some debts will not be determined until the companies’ post-reorganization accounting is complete. However, some reduced or eliminated debts whose values are known include $6.9 billion of secured bank debt owed by old Chrysler, of which $2 billion was repaid and none carried forward to new Chrysler; $5.9 billion of secured bank debt owed by old GM, substantially all of which was repaid by old GM, leaving new GM with none of this debt; substantial reductions of the companies’ monetary obligations to the trusts established to provide health care benefits to retirees of the International Union, United Automobile, Aerospace and Agricultural Implement Workers of America (UAW); and about $27 billion in unsecured GM bondholder debt and $2 billion in unsecured Chrysler obligations, which stayed as a liability of the old GM and old Chrysler, leaving new GM and new Chrysler with none of this debt. 
Reducing the number of brands and models. GM is reducing its North American brands from eight to four. In November 2007, Chrysler announced it would eliminate four models within its three primary brands—Chrysler, Dodge, and Jeep—and in October 2009 it announced that it would create a fourth brand by splitting the Ram brand out of the Dodge brand. As we have previously reported, advantages of reducing brands and models include eliminating costs such as factory tooling and product development, reducing intracompany competition for sales of similar models, and allowing more focus and resources on the remaining models' quality, image, and performance. Rationalizing dealership networks to align with sales volumes. Both Chrysler and GM have made cuts to their dealership networks since year-end 2008. As we reported in April, the companies' dealer networks were too large to be supported by recent sales levels. As of April 2009, Chrysler, Ford, and GM dealerships—most of which are independently owned and operated—were more numerous and, in general, sold half or fewer vehicles per dealership than dealerships selling vehicles from foreign automakers. Higher sales per store allow for a greater return on the dealer's fixed costs of running the business, allowing for more investment in facilities and advertising—which ultimately benefits the automaker by improving the price for which its cars are sold. As of June 30, 2009, shortly after the new Chrysler emerged from bankruptcy, Chrysler had reduced its U.S. dealerships to 2,382, a reduction of about 28 percent from the year-end 2008 level of 3,298. As of July 2009, when the new GM emerged from bankruptcy, its number of dealers had declined to 6,039 through normal attrition, down from 6,375 at year-end 2008. GM is executing "wind-down" agreements with another approximately 1,300 dealerships and expects another 600 Saturn, Saab, or Hummer dealerships to be transferred to another manufacturer or be phased out. With additional normal attrition, GM expects to have between 3,600 and 3,800 dealerships by the end of 2010, which will represent a 44 percent reduction from 2008 year-end numbers. Reducing production costs and capacity. Both companies have made reductions in their production costs and capacity since year-end 2008, according to company-provided information. In our April report, we noted such reductions are important because the companies' pre-reorganization cost structures were not sustainable given the decline in their sales and market shares in recent years. Table 2 shows the reductions the companies made between year-end 2008 and the dates they emerged from bankruptcy. In addition to the reductions made during these time periods, the companies implemented restructuring efforts prior to 2008 and plan additional reductions in the future. For instance, Chrysler closed two factories, reduced a number of shifts, and cut nearly 29,000 hourly, salaried, and supplemental employees between year-end 2006 and year-end 2008. GM announced in September 2009 that it will add a third shift at three U.S. assembly plants as part of a plan to close other plants to increase the efficiency of its manufacturing operations. Chrysler and GM have also reached agreements with the UAW, in accordance with the terms of the companies' prebankruptcy loans from Treasury, which will result in further reductions in production costs. Under these terms, the companies were required to use their best efforts to reduce total compensation paid to U.S.
employees, including wages and benefits, to be comparable with the total compensation Honda, Nissan, or Toyota pays to employees at their U.S. facilities. The companies were also required to use their best efforts to make changes to work rules to be comparable with the work rules of Honda's, Nissan's, or Toyota's U.S. facilities. Changes the UAW agreed to as part of restructuring included cancellation of cost-of-living adjustments for current workers and restructuring of skilled trade classifications, among other things. Whether and to what extent these changes will improve Chrysler's and GM's profitability and long-term viability remains to be seen. Many elements of a company's financial statements are also used in measures of financial health, but neither Chrysler nor GM has finalized new financial statements based on their reorganization. Chrysler and GM have agreed to provide certain financial information, as outlined in agreements between Chrysler and its shareholders, including Treasury, and between GM and the Securities and Exchange Commission (SEC). Consistent with the agreements, Chrysler and GM plan to complete the process of determining the fair value of the assets and liabilities transferred to the new companies for their audited 2009 year-end financial statements, which they expect to complete by April 2010 and March 2010, respectively. Chrysler will provide its 2009 audited annual financial statement to Treasury and its other shareholders, and GM will provide its 2009 audited annual financial statement to SEC, where it will also be available to the public. Chrysler will begin filing quarterly and annual financial reports with SEC starting with its 2010 audited annual financial statements, which will be publicly available through SEC. Before audited annual financial statements are filed with SEC, Chrysler and GM will make other select information publicly available. Moreover, it is unlikely that enough time has passed for the impact of the structural changes to be seen, especially given that the automakers have not completed restructuring, the economy is still recovering, and new vehicle purchases remain at low levels. For instance, although the federal Car Allowance Rebate System program resulted in a sales spike in August, September sales returned to historically low levels. These and other challenges are likely to delay the companies' recovery beyond what it would be under more favorable economic circumstances. Treasury, which has a sizable financial stake in Chrysler and GM, does not plan to be involved in the day-to-day management of the companies, but it has established certain requirements that will be in effect as long as it holds debt or equity in the companies. Treasury has distinct rights as both a creditor and an equity owner. Its rights as a creditor are documented in the secured credit agreements, which set forth the terms and provisions of the loans Treasury provided to new Chrysler and new GM. Its rights as an equity owner are documented in a number of transactional documents related to the formation of the new Chrysler and the new GM, including shareholders' agreements, equity registration rights agreements, and organizational documents. Treasury's role as an equity owner focuses on monitoring the financial health of the companies in order to protect the value of Treasury's equity stake.
Treasury developed several principles to guide its role as an equity owner, including the commitment that, although Treasury reserves the right to set up-front conditions to protect taxpayers and promote financial stability, Treasury plans to oversee its financial interests in a commercial manner, in which it will focus primarily on maximizing its return and take a hands-off approach to day-to-day management. Treasury plans to reserve its involvement for major transactions such as the sale of a controlling share of the companies. Treasury’s role as a creditor is not as clearly delineated, but much like in its role as equity owner, Treasury has said it will focus on monitoring the companies’ financial health. Conditions set by Treasury in the credit agreements include requiring that the companies comply with provisions applicable to companies receiving TARP assistance, in accordance with the Emergency Economic Stabilization Act (EESA), as well as other requirements that are specific to Chrysler and GM. According to the agreements, Chrysler and GM must do the following: Produce a portion of their vehicles in the United States. Chrysler must either manufacture 40 percent of its U.S. sales volume in the United States or its U.S. production volume must be at least 90 percent of its 2008 U.S. production volume. GM agrees to use its commercially reasonable best efforts to ensure that the volume of manufacturing conducted in the United States is consistent with at least 90 percent of the level envisioned in GM’s business plan. Comply with the executive compensation requirements of EESA. These requirements state that bonuses or incentive compensation paid to any of the senior executive officers or the next 20 most highly compensated employees based on materially inaccurate earnings must be repaid, no golden parachute payments may be made to a senior executive officer or any of the next five most highly compensated employees, compensation in excess of $500,000 per executive may not be deducted for tax purposes, and the companies must establish a compensation committee of independent directors to review employee compensation plans and the risks posed by these plans. Have an expense policy that is in compliance with TARP standards for compensation and corporate governance. The policy must govern hosting and sponsoring for conferences and events, travel accommodations and expenditures, office or facility renovations and relocations, and entertainment and holiday parties, among other things. Report to Treasury on the use of government funds. The companies are to provide Treasury with a report each quarter setting forth in reasonable detail the actual use of the TARP funding they received upon exiting from bankruptcy. Have internal controls to ensure compliance with the requirements. The companies are to promptly establish internal controls to provide reasonable assurance of compliance in all material respects with each of the credit agreement’s requirements for executive privileges and compensation, aircraft, expenses, and the Employ American Workers Act. The companies must also have documentation of these controls and the companies’ compliance with them. Report on events related to pension plans. The companies must report to Treasury if actions occur that could result in the companies failing to meet the minimum funding requirements for their pension plans, or if the companies plan to terminate any of their plans. 
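The domestic-production condition in the list above is an either/or test that can be expressed in a few lines. The sketch below, in Python, is a minimal illustration of that logic under stated assumptions; it is not Treasury's compliance methodology, and the volume figures in the example are hypothetical.

```python
# Illustrative check of the domestic-production condition described above: Chrysler
# satisfies it if U.S.-manufactured vehicles are at least 40 percent of its U.S. sales
# volume, OR if its U.S. production volume is at least 90 percent of its 2008 U.S.
# production volume. All volume figures below are hypothetical.

def chrysler_production_condition_met(us_manufactured: float,
                                      us_sales: float,
                                      us_production: float,
                                      us_production_2008: float) -> bool:
    meets_sales_test = us_manufactured >= 0.40 * us_sales
    meets_production_test = us_production >= 0.90 * us_production_2008
    return meets_sales_test or meets_production_test

# Hypothetical example: 400,000 U.S.-built vehicles against 1.1 million U.S. sales
# fails the 40 percent test, but production of 1.35 million against a 2008 level of
# 1.4 million passes the 90 percent test, so the condition is met overall.
print(chrysler_production_condition_met(400_000, 1_100_000, 1_350_000, 1_400_000))  # True
```

GM's corresponding commitment is phrased as a best-efforts standard tied to its business plan and does not reduce to a single numeric test in the same way.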
To protect the value of its equity share and the likelihood of loan repayment, Treasury has also established requirements under which the companies must report financial information, and it intends to use this information to closely monitor the financial condition of Chrysler and GM. The financial reporting requirements are set forth in Treasury’s credit agreements with the companies and other agreements that specify the rights of the companies and their shareholders, which include Treasury and other parties. GM is also subject to additional reporting requirements related to the reserve portion of its loan from Treasury that is being held in escrow. Treasury has agreed with the companies on additional financial, managerial, and operating information, which the companies will provide in monthly reporting packages, along with items specified in the agreements. Tables 3 and 4 provide details on Chrysler’s and GM’s reporting requirements. According to Treasury officials, they plan to review and analyze the reports they receive under creditor and equity owner requirements to identify areas of concern, such as actual market share lagging behind the projected market share, an excess of inventory, or other signs that business is foundering. Treasury does not have authority to direct the companies to take specific actions to address such findings, but Treasury said it plans to notify the companies’ management and the Secretary of the Treasury if it sees any cause for concern in the financial reports. In addition to reviewing financial information, Treasury’s team of staff responsible for overseeing AIFP (subsequently referred to as the auto team) plans to meet monthly via teleconference and quarterly in person with the companies’ top management to discuss the companies’ progress against their restructuring plans. Important findings that result from the review of financial reports or management meetings will also be conveyed to key staff in OFS and other Treasury offices with responsibilities for managing TARP investments. Treasury also intends to use financial reports as a basis for decisions on how and when to sell its equity in the companies, as discussed below. While Treasury has stated that it plans to manage its investments in Chrysler and GM in a hands-off manner and will not interfere in day-to-day operations of the companies, Chrysler and GM will be subject to requirements regarding compensation, expenses, and reporting that other auto companies are not. For example, as discussed above, each company is subject to certain requirements about the vehicles it is to produce, such as the requirement to produce a portion of its vehicles in the United States. In addition, Chrysler’s shareholders, including Treasury, have agreed that Fiat’s equity stake in Chrysler will increase if Chrysler meets certain benchmarks, such as producing a vehicle that achieves a fuel economy of 40 miles per gallon or producing a new engine in the United States. Treasury officials stated that they established such up-front conditions not solely to protect Treasury’s financial interests as a creditor and equity owner but also to reflect the Administration's views on responsibly utilizing taxpayer resources for these companies. While Treasury has stated it does not plan to manage its stake in Chrysler or GM to achieve social policy goals, these requirements and covenants to which the companies are subject indicate the challenges Treasury has faced and likely will face in balancing its roles. 
Treasury’s general goals of exiting as soon as practicable, maximizing return on investment, and improving the strength and viability of Chrysler and GM are reasonable but possibly competing, according to the group of financial and industry experts we spoke with. For example, if Treasury sells its stake as soon as practicable, it may not maximize its return because too little time may have elapsed to demonstrate to investors the companies’ potential for future profitability. Similarly, maximizing return on investment might require actions that do not contribute to making the companies strong and viable—for example, if Chrysler or GM does not return to profitability, Treasury may need to act to liquidate the companies, with the proceeds divided among its shareholders and creditors, to maximize its return on investment. Treasury will ultimately have to address these inherent trade-offs, decide which goal is most important, and then manage its interest in a way that prioritizes that goal over others. Treasury officials told us that they have considered these trade-offs and scenarios, including the worst-case scenario of Chrysler and GM not attaining long-term viability, and that they intend to balance these competing goals when deciding when and how to exit. Treasury’s current approach for monitoring its equity in Chrysler and GM does not fully address the considerations that our group of experts identified as important. In particular: Retain necessary expertise. Experts stressed that it is critical for Treasury to employ or contract with individuals with experience managing and selling equity in private companies. Individuals with investment, equity, and capital market backgrounds should be available to provide advice and expertise on the oversight and sale of Treasury’s equity. This is crucial because prior to TARP, Treasury did not typically buy and sell stakes in private companies, so it has needed to employ appropriate personnel and to retain consultants, such as investment bankers and private equity analysts and firms, who are knowledgeable about such investment decisions. One expert we interviewed noted that housing such individuals in a program office created specifically and solely to oversee the government’s investment in the companies could be beneficial. Program staff would be devoted solely to this purpose, and staff turnover would be low so that institutional knowledge would be preserved over the life of the program. The literature also stressed the importance of designating staff to oversee equity sales. In assessing Chrysler’s and GM’s financial condition and future prospects and putting together financing packages for the companies, Treasury hired or consulted with a number of individuals with experience in investment banking, equity analysis, and the auto industry, but it has not established a program office to oversee its investment in the auto companies. As with the rest of the TARP programs, OFS oversees the investment in the auto companies. Some OFS employees work exclusively on the automotive companies, while others divide their time among multiple TARP programs. 
While the auto team has experienced a significant decline in its number of staff, and presently has limited engagements with outside firms with specialty expertise such as investment banking or equity analysis to assist in its management of its investment in the auto companies, Treasury officials stated that the rest of OFS is available to "backfill" as necessary and acts as a program office for Treasury's investment in the auto industry. However, OFS is not a dedicated program office for overseeing Treasury's investment in Chrysler and GM, in that it has responsibilities for Treasury's investments in other companies. Treasury officials also stated that the reduction in the number of staff on the auto team has been a reflection of the team's reduced workload now that the intensive process of restructuring the companies is over and that the size of the team required for monitoring the government's investment is smaller than for a restructuring process.

Because of the particular needs of the auto companies and the unprecedented nature of providing such assistance, Treasury hired or contracted with a number of individuals with expertise in the auto industry, equity investment, and relevant areas of law throughout the first half of calendar year 2009 as Treasury assessed Chrysler's and GM's financial condition, assembled financing packages for the companies, and helped with restructuring efforts. When Treasury was heavily involved in the restructuring of the companies, Treasury's auto team consisted of 12 professional staff and 4 administrative staff, and it used the services of investment banking, consulting, and law firms. Since those agreements have been finalized and the workload has declined, two-thirds of the original professional staff has left, leaving Treasury with 4 of the original professional staff dedicated to auto issues, other OFS staff who have also helped monitor these investments, and limited use of investment or industry consultants. The leader of the auto team, who also serves as a senior adviser to the President on the auto industry, was recently appointed Senior Counselor for Manufacturing Policy, requiring him to split his time between the auto team and his new role. Moreover, Treasury officials told us that there will likely be additional staff reductions in the future because they plan to disband the auto team over time as other OFS staff assume the role of monitoring the financial condition of the companies. In commenting on a draft of this report, Treasury officials stated that in light of recent and expected staff turnover, they are prepared to hire personnel from within Treasury or externally to fill Treasury's monitoring function. Nonetheless, given the wind-down of the auto team—and the associated loss of dedicated staff with industry- and company-specific knowledge and expertise—we are concerned that Treasury may not have sufficient expertise to actively oversee and protect the government's ownership interests, including determining when and how to divest these interests. In general, Treasury has faced challenges hiring the full complement of staff necessary to administer the TARP programs, in part because qualified candidates can often find a more competitive salary with a financial regulator, which has the authority to establish its own compensation programs without regard to certain requirements applicable to executive branch agencies.
We have reported on the importance of Treasury documenting the skills and competencies it needs to administer the program and continuing to expeditiously hire personnel. The quality of human capital policies and practices including, but not limited to, hiring affects the control environment. A strong control environment will depend, in part, on the managerial and other staff hired. Treasury has made progress in hiring staff to administer TARP duties, but Treasury officials have not formally evaluated whether the staffing level to oversee AIFP is appropriate for their current and projected needs. Officials said that they had considered future needs and determined that Treasury’s monitoring role could be achieved with fewer staff. In response to a request for documentation of their evaluation of staffing needs, Treasury provided us with a document showing the current and projected number of staff working on AIFP, but this document did not show how Treasury determined the appropriate number of staff or areas of expertise that would be needed for future workloads. In commenting on a draft of this report, Treasury officials stated that they had not had difficulty hiring qualified professionals to work on the auto team and did not anticipate having difficulties finding qualified staff in the future should the need arise for additional hiring. Monitor and communicate company, industry, and economic indicators. All of the experts we spoke with emphasized the importance of monitoring company indicators such as financial and operating performance, automotive industry-wide indicators such as vehicle sales, and broader economic indicators such as interest rates and consumer spending. Monitoring these indicators allows investors, including Treasury, to determine how well the companies, and in turn the investment, are performing in relation to the rest of the industry. It also allows an investor to determine how receptive the market would be to an equity sale, something that contributes to the price at which the investor can sell. Some experts also noted that Treasury should assign an individual with expertise in investment banking or private equity to be in charge of monitoring these metrics, which Treasury officials told us they had done. In addition to monitoring the investment, communicating a clearly articulated vision for TARP programs is important, as we have previously reported. Understanding the different TARP programs and the distinct rationale for each can be difficult for Congress, the markets, and the public, because many of the programs address specific developments and have similar guidelines and terms. Specifically for AIFP, what Treasury’s goals are for its investment in Chrysler and GM, and in turn, which indicators and metrics are necessary to determine progress in achieving these goals, is important information for Congress and the public to have. Although Treasury provides public information on activities in the TARP programs, including AIFP, through its legally mandated monthly reports to Congress, transaction reports, and others, these reports do not provide information on the indicators Treasury plans to use in assessing its goals for its auto investments. Identifying these indicators for Congress, and sharing as much of this information as possible, while still respecting the need for certain business sensitive information not to be released, could help Congress and the public better understand whether the investment in the auto companies has been successful. 
Treasury’s auto team plans to closely monitor the performance of Chrysler and GM by way of financial reports from the companies such as balance sheets and liquidity statements, which, in general, measure the financial health of a company at the time of the statement. It also plans to monitor industry and broader economic indicators. The auto team plans to use this information to alert Chrysler and GM management to any problematic areas in the companies, and to help determine the best time and strategy for divesting the government’s interest. Finally, Treasury officials have not informed Congress which components of the reporting package will be shared or how they plan to use the information contained in these packages to assess and monitor the companies’ performance. In commenting on a draft of this report, Treasury noted that it will not make the components of these reports public because the release of certain information could put Chrysler and GM at a competitive disadvantage, thereby harming the potential recovery of taxpayer funds. Treasury further noted that the companies will publicly report on certain financial information—similar to what publicly traded companies report—in the future. To the extent possible, determine the optimal time and method to divest. One of the key components of an exit strategy is determining how and when to sell the investment. Given the many different ways to dispose of equity—through public sales, private negotiated sales, all at once, or in batches—experts noted that the seller’s needs should inform decisions on which approach is most appropriate. For example, if an investor is interested in selling quickly but the company has not demonstrated the level of performance necessary for a successful initial public offering (IPO), in which the company first sells stock to the public, the investor should consider other sale options, such as a private sale. According to experts, a successful IPO requires that the companies show signs of earnings growth and future profitability, something that will take a considerable amount of time for Chrysler and GM, as they only recently emerged from bankruptcy. Attracting investors to the market is essential because lack of sufficient investor interest may result in depressed value of shares. Experts noted that a convergence of factors related both to financial markets and to the company itself create an ideal window for an IPO; this window can quickly open and close and cannot easily be predicted. This requires constant monitoring of up-to-date company, industry, and economic indicators when an investor is considering when and how to sell. As Treasury evaluates these indicators, considering all possible sale strategies is important. Members of the auto team said that they plan to consider indicators such as profitability and prospects, cash flow, market share, and market conditions to determine the optimal time and method of sale. The ultimate decision on when and how to sell will be made by the Secretary of the Treasury, but auto team staff will be in charge of monitoring these indicators and recommending a strategy to the Secretary and Assistant Secretary for Financial Stability. Although Treasury officials said they plan to consider all options for selling the government’s ownership stakes in Chrysler and GM, they noted that they believe the most likely scenario for GM is to dispose of Treasury’s equity in the company through a series of public offerings. 
Treasury has publicly discussed the possibility of selling part of its equity in the company through an IPO that would occur sometime in 2010. However, by publicly discussing a method and a time for a sale of GM shares now, the extent to which Treasury is using the indicators to inform method and timing decisions is unclear. Moreover, two of the experts we spoke with said GM might not be ready for a successful IPO by 2010, because it may be too early for the company to have demonstrated sufficient progress to attract investor interest, and two other experts noted that 2010 would be the earliest possible time for an IPO. For Chrysler, Treasury officials noted that the department is more likely to consider a private sale because its equity stake is smaller, and several of the experts we interviewed noted that non-IPO options could be possible for Chrysler, given the relatively smaller stake Treasury has in the company (9.85 percent, versus its 60.8 percent stake in GM) and the relative affordability of the company. In commenting on a draft of this report, Treasury officials stated that they were aware of the diversity of opinions on divesting the government’s interest in the auto companies and would make an appropriate determination to maximize the taxpayers’ return. To achieve the maximum return for taxpayers, Treasury also said it plans not to disclose more information about its strategy to divest its ownership interests than is necessary. Treasury officials said that on the basis of their analysis of the companies’ future profitability, they believe that Chrysler and GM will be able to attract sufficient investor interest for Treasury to sell its equity. With regard to the possibility that there may not be sufficient investor interest, Treasury officials said they would monitor the financial markets and the companies’ operations in order to identify any issues that could affect profitability, and work with the companies’ boards of directors and management to address them. In the event that the companies do not return to profitability in the time frame Treasury has projected, Treasury officials said that they will consider all commercial options for disposing of Treasury’s equity, including liquidation. Manage investments in a commercial manner. Experts emphasized the importance of Treasury resisting external pressures to focus on public policy goals over focusing on its role as a commercial investor. For example, some experts said that Treasury should not let public policy goals such as job retention interfere with its goals of maximizing its return on investment and making Chrysler and GM strong and viable companies. They said that this is especially important because making the companies financially strong and competitive may require reducing the number of employees. Nevertheless, one expert suggested that Treasury should consider public policy goals and include the value of jobs saved and other economic benefits from its investment when calculating its return, since these goals, though not important to a private investor, are critical to the economy. As long as Treasury maintains ownership interests in Chrysler and GM, it will likely be pressured to influence the companies’ business decisions. Treasury has said that it plans to manage its investment in Chrysler and GM in a commercial way. Yet Treasury faces external pressures, such as to prioritize jobs over maximizing its return. 
For example, Congress is currently considering a number of bills to restore automotive dealers’ contracts terminated in restructuring, and Treasury officials noted that they receive frequent calls from Members of Congress expressing concern about dealership closings. To protect Treasury’s investment from these external pressures, a recent Congressional Oversight Panel report recommended that Treasury hold its equity interests in the auto companies in a trust managed by an independent trustee. Treasury officials told us they cannot currently establish a trust managed by independent trustees because of a requirement in EESA that states that troubled assets are subject to the supervision of the Secretary of the Treasury. The officials stated that if Treasury created a trust with the assets managed by independent trustees, the Secretary would not be able to exercise his authority over the assets as required by law. Congress is considering legislation that would authorize and require the Secretary to transfer to a limited liability company all equity in TARP recipients in which the government has a certain equity interest as a result of TARP assistance. The bills further provide that the equity is to be managed in trust for the benefit of taxpayers. Treasury officials told us they believe their planned approach for managing Treasury’s equity in Chrysler and GM is sufficient for now. Regardless of the sales strategies used, the companies will have to grow substantially in order to reach values at which Treasury would recover the entirety of its equity investment upon sale of its equity, which Treasury and others consider to be unlikely. On the basis of our analysis, shown in table 5, we estimate that Chrysler and GM would need to have a market capitalization of $54.8 billion and $66.9 billion, respectively, for Treasury to earn enough on the sale of its equity to break even. A recent Congressional Oversight Panel report reached similar conclusions on what the companies would have to be worth. As a point of reference for these values, in 1997, the last year Chrysler was a publicly traded company, its market capitalization value ranged between $23.1 billion and $31.7 billion, and in 1998, when it merged with Daimler, it was valued at an estimated $37 billion. GM, at its peak in 2000, had a market capitalization of $57 billion. In commenting on a draft of this report, Treasury officials noted that the companies’ past equity values are not comparable to today’s equity values because the companies have substantially restructured their balance sheets through bankruptcy. Although we recognize the changes the companies have experienced in recent years, we believe this information provides a sense of the magnitude of growth that will be required of the companies. Treasury’s own analysis suggests that the circumstances necessary for the companies to reach market capitalizations high enough for Treasury to fully recover its equity investment are unlikely. Treasury officials also noted that considering the companies’ enterprise values—a measure of a business’s total value, including the value of equity and debt—in addition to equity value is important, because enterprise value takes into account the likelihood of repayment of loans and other obligations extended to the companies as well as the value of equity stakes. 
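As a rough illustration of the break-even arithmetic discussed above, the required market capitalization is approximately the cost of Treasury's equity stake divided by its ownership share. The sketch below uses the ownership percentages cited in this report (9.85 percent of Chrysler and 60.8 percent of GM); the equity-cost inputs are illustrative values chosen to be roughly consistent with the break-even figures cited above, not Treasury's reported investment amounts.

```python
# Minimal sketch of the break-even arithmetic: the market capitalization at which
# selling Treasury's equity stake would recover the cost of that stake.
# The ownership shares (9.85 percent of Chrysler, 60.8 percent of GM) come from this
# report; the equity-cost inputs below are illustrative placeholders, in billions.

def breakeven_market_cap(equity_cost_billions: float, ownership_share: float) -> float:
    """Market capitalization needed for the stake's value to equal its cost."""
    return equity_cost_billions / ownership_share

print(round(breakeven_market_cap(5.4, 0.0985), 1))   # ~54.8 (Chrysler, hypothetical cost)
print(round(breakeven_market_cap(40.7, 0.608), 1))   # ~66.9 (GM, hypothetical cost)
```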
However, these estimates do not take into account other benefits and costs that are more difficult to measure, such as the impact of Treasury's investment on jobs and local and national economies and the opportunity costs Treasury incurred in providing financial assistance. The impact on the economy is difficult to measure because, according to the Council of Economic Advisors, it involves predicting what employment and economic performance would have been without government investment. Nevertheless, a more comprehensive analysis that takes these effects into account would yield a richer picture of the value of Treasury's net investment and net return, especially given that the government's goal upon first providing assistance to the auto industry was to prevent economic disruption.

Treasury's substantial investment and other assistance, including loans from the Canadian government and concessions from the UAW, have contributed to the current stability of Chrysler and GM. However, because of the challenges continuing to face the auto industry—including the still recovering economy and weak demand for new vehicles—the ultimate impact that the assistance will have on the companies' profitability and long-term viability is uncertain. Although the immediate crisis of helping Chrysler and GM maintain solvency has passed for now and Treasury has no plans for further financial assistance to the companies, the significant sums of taxpayer dollars that are invested in these companies warrant continued oversight. It is critical that Treasury remain focused on protecting the government's interest in the coming months as Chrysler and GM work to become profitable. However, most of the original staff on Treasury's auto team either have left Treasury or may do so in the future. Treasury officials told us that OFS personnel will continue to provide oversight. Given the substantial decline in the number of staff and lack of dedicated staff for this oversight moving forward, however, we are concerned whether Treasury will continue to have the needed expertise to provide oversight of the use of government funds, assess the financial condition of the auto companies, and develop strategies to divest the government's interests. Monitoring industry conditions and determining when to divest will require a certain expertise, including a robust monitoring function through which detailed financial data from Chrysler and GM are reviewed on a regular basis. Transparency as to how the companies are being monitored also will be important to ensuring accountability and providing assurances that the taxpayers' investment—including both the loans to and equity in the companies—is being appropriately safeguarded. While we recognize that not all information that the companies report to Treasury should be made public because of concerns about disclosing proprietary information in a competitive market, Treasury's approach for evaluating the success of the AIFP should be as transparent as possible, given the large taxpayer investment. While Treasury has stated that it plans to review all possible options for divesting itself of its ownership interest in Chrysler and GM, Treasury officials have focused primarily on an IPO for GM, both in our discussions with them and in their public statements. However, given the complexity of the economy and the financial markets, considering all of the options in the context of the companies' financial progress and current financial conditions will be important for Treasury.
The past year has indicated the extent to which a company's financial situation can change within a period as short as a few months. Given the fluidity of conditions and the number of factors that will need to be considered when determining how and when to divest, it is important that Treasury identify the criteria it will use to evaluate the optimal method and timing for selling the government's ownership stake. Determining when and how to divest the government's ownership stake will be one of the most important decisions Treasury will have to make regarding the federal assistance provided to the domestic automakers, as this decision will affect the overall return on investment that taxpayers will realize from aiding these companies. Currently, the value of the companies would have to grow tremendously for Treasury to approach breaking even on its investment, requiring that Treasury temper any desire to exit as quickly as possible with the need to maintain its ownership interest long enough for the companies to demonstrate sufficient financial progress. Therefore, it is important that Treasury be able to explain why and how it decided to divest when the time arrives, and clearly established criteria will help Treasury communicate this decision to Congress and the public at the appropriate time to prevent this disclosure from negatively affecting the full recovery for taxpayers.

To improve the stewardship of the federal government's substantial financial investment in the auto industry, we recommend that the Secretary of the Treasury take the following three actions:

Ensure that the department has the expertise needed to adequately monitor and divest the government's investment in Chrysler and GM, and obtain needed expertise in areas where gaps are identified. In addressing any existing or future expertise gaps, Treasury should consider both in-house and external expertise.

Report to Congress on how it plans to assess and monitor the companies' performance to help ensure the companies are on track to repay their loans and to return to profitability. In reporting to Congress, Treasury should balance the need for transparency with the need to protect certain proprietary information that would put the companies at a competitive disadvantage or negatively affect Treasury's ability to recover the taxpayers' investments.

Develop criteria for evaluating the optimal method and timing for divesting the government's ownership stake in Chrysler and GM. In applying these criteria, Treasury should evaluate the full range of available options, such as IPOs or private sales.

We provided a draft of this report to the Department of the Treasury for review and comment. Treasury generally agreed with the report's findings, conclusions, and recommendations, and provided written comments, which are reprinted in appendix II. Treasury also provided technical comments and clarifications via e-mail, which we incorporated as appropriate. In their technical comments, Treasury officials emphasized that they believe they have individuals within OFS who can provide the needed oversight of the government's investments in Chrysler and GM. We added Treasury's views on its current staffing and expertise levels to the final report.
While we recognize that OFS employs a number of qualified individuals who have worked on the government's efforts to stabilize the auto industry, we nevertheless remain concerned about the loss of industry- and company-specific knowledge and expertise that Treasury has experienced and will continue to experience with the wind-down of the auto team. Such knowledge and expertise will be critical as Treasury monitors the financial health of Chrysler and GM and develops plans to divest its ownership interests in these companies. We are pleased that Treasury—in both its written and technical comments—commits to continue to take steps to assess and maintain the expertise required to monitor and manage Treasury's investments in these companies. In their written and technical comments, Treasury officials also stressed the need to strike a balance between the goal of transparency and the need to avoid compromising the competitive positions of Chrysler and GM or the government's ability to recover its investments. We recognize the need to strike this balance and added language to the final report, including one of our recommendations, to acknowledge this difficult trade-off. We believe our revised recommendation that Treasury report to Congress on its plans to monitor the performance of the companies provides Treasury with sufficient flexibility to strike the appropriate balance.

We also provided relevant portions of a draft of this report to SEC, Chrysler, and GM for their review and comment. SEC, Chrysler, and GM provided technical comments and clarifications that we incorporated as appropriate.

We are sending copies of this report to other interested congressional committees and members, the Department of the Treasury, and others. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Katherine Siggerud at (202) 512-2834 or siggerudk@gao.gov or A. Nicole Clowers at (202) 512-2843 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

In addition to the contacts named above, Raymond Sendejas, Assistant Director; Orice Williams Brown; Sarah Farkas; Timothy Guinane; Heather Halliwell; Terence Lam; Matthew McDonald; Susan Michal-Smith; Joshua Ormond; and Susan Sawtelle made important contributions to this report.
Chrysler and GM have made changes since December 2008 to address key challenges to achieving viability, but the ultimate effect of these changes remains to be seen. The companies have eliminated a substantial amount of their long-term debt, reduced the number of brands and models of vehicles they sell, rationalized their dealership networks, and lowered production costs and capacities by reducing the number of factories and employees. It is difficult to fully assess the impact of these changes because of the short amount of time that has passed since reorganization and the low level of new vehicle sales. Moreover, Chrysler and GM are revaluing their assets and liabilities based on their reorganizations in 2009 and expect to prepare financial statements based on this effort in the coming months. Treasury does not plan to be involved in the day-to-day management of Chrysler and GM, but it plans to monitor the companies' performance. Treasury developed several principles to guide its role as a shareholder, including the commitment that although Treasury reserves the right to set up-front conditions to protect taxpayers and promote financial stability, Treasury will oversee its financial interests in a hands-off, commercial manner. The conditions that Treasury set for the companies include requiring that a portion of their vehicles be manufactured in the United States and that they report to Treasury on the use of the TARP funding provided. Treasury officials told us that they are also requiring that Chrysler and GM submit financial information on a regular basis and that they plan to meet with the companies' top management on a regular basis to discuss the companies' financial condition. Treasury should make certain that its current approach for monitoring and selling its equity in Chrysler and GM fully addresses all important considerations financial and industry experts identified. For example, Treasury initially hired or consulted with a number of individuals with experience in investment banking or equity analysis to help assess Chrysler's and GM's financial condition and develop financing packages for the companies. Many of these individuals have recently left as the restructuring phase of Treasury's work has been completed. Treasury will need to ensure these staff and any staff that depart in the future are replaced as needed with similarly qualified personnel. Also, Treasury does not currently contract with or employ outside firms with specialty expertise for its work with the auto industry but may need to do so in the future, to make sure sufficient expertise is available to oversee the government's significant financial interests in Chrysler and GM. In addition, although Treasury officials told us they are considering all options for divesting the government's ownership interests, including an initial public offering or private sale, they have focused primarily on a series of public offerings for GM and have not identified criteria for determining the optimal time and method to sell. Regardless of the option pursued, however, Treasury is unlikely to recover the entirety of its investment in Chrysler or GM, given that the companies' values would have to grow substantially above what they have been in the past. |
The GS pay system covered 69 percent of federal civilian workers in 2011, with compensation costing about $147 billion (about 67 percent of total federal civilian compensation of about $220 billion). The GS workforce is divided into 15 pay grades, with 10 rates of pay (referred to as steps) within each grade. Agencies use a uniform set of classification standards to determine grade levels for their positions organized within five occupational categories—Professional, Administrative, Technical, Clerical, and Other White-Collar (PATCO). The GS system of classification was established by the Classification Act of 1949 in response to calls for a modernized system to ensure equity in pay setting. Until the late 1960s, general pay adjustments for federal employees were made through acts of Congress. The Federal Pay Comparability Act of 1970 permanently authorized the President to adjust GS pay rates annually, and established a system for recommending adjustments with the goal of increasing federal pay to be comparable with the private sector; however, we previously found that the gap between average federal and private sector salaries for similar jobs continued after implementation of the act because the recommended adjustments were not always made. The Federal Employees Pay Comparability Act of 1990 (FEPCA) created annual locality-based pay adjustments for GS employees to reduce reported gaps between federal and nonfederal pay in metropolitan areas. In addition, FEPCA maintained an annual across-the-board pay adjustment that is the same for each employee to keep the GS base pay schedule in line with salary growth in the general labor market, similar to what had already existed under the 1970 act. Before FEPCA, federal employees doing the same job at the same level anywhere in the country were paid the same amount. However, there was a growing concern that it was difficult to recruit and retain skilled federal employees in areas with higher nonfederal wages. We concluded that locality-based pay adjustments were necessary. FEPCA established locality pay, and the President’s Pay Agent designated pay localities based on Office of Management and Budget (OMB) Metropolitan Statistical Areas. FEPCA’s goal was to reduce the gap between federal and nonfederal pay in each locality, as measured by BLS data and reported by the President’s Pay Agent, to 5 percent over the course of 9 years. This goal was not met, but locality pay increases have been provided every year since locality pay was implemented in 1994, except during the pay freeze in 2011 and 2012. According to OPM, locality pay is now a broadly accepted practice in federal pay administration. See app. II for more information on the implementation of locality pay. Figure 1 illustrates the extent to which locality pay has been implemented for a representative employee. The annual pay (base plus locality) in 2012 for an employee at GS-11 (approximately the midpoint grade level), step 1 is shown for selected pay localities. Examples of positions that a GS-11 employee might hold are Administrative Officer, Scientist, Paralegal Specialist, Accountant, Engineer, Medical Records Administrator, Nurse Specialist, and Information Technology Specialist. There were 34 pay localities in the United States in 2012, composed of the states of Alaska and Hawaii, 31 metropolitan areas, and a residual locality called “Rest of U.S.” that includes all other areas in the United States and its territories and possessions. Rest of U.S. 
was the lowest-paying locality in 2012, with a GS-11, step 1 earning $57,408, and San Francisco was the highest, with a GS-11, step 1 earning $67,963. We selected additional localities with various pay rates and population sizes and from various regions to create figure 1.

Across-the-board adjustments are designed to keep the GS base pay schedule in line with salary growth in the general labor market. FEPCA specifies that unless the President provides for alternative pay adjustments, across-the-board pay adjustments are to be determined using a simple formula: Pay rates are to be increased by the 12-month percentage increase in the wage and salary component of the Employment Cost Index (ECI) for private sector workers, minus one-half of one percentage point. For example, the ECI reference period for the January 2013 increase is the 12-month period ending September 2011. The ECI shows that during that period, pay for private sector workers rose by 1.7 percent. Therefore, the across-the-board increase for 2013 would be 1.2 percent. The ECI, an index compiled by the BLS and published quarterly, measures percentage changes in wages and salaries for private sector employees. As specified in FEPCA, the President may decide to either provide across-the-board pay adjustments based on this calculation, or provide alternative pay adjustments based on national emergency or serious economic conditions affecting the general welfare. In evaluating economic conditions, the President is to consider a range of economic measures, including (but not limited to) Gross National Product, the unemployment rate, the budget deficit, and the Consumer Price Index. Congress may legislate an increase that is different from the formula result or the President's alternative plan; this is not part of the process specified by FEPCA. The FEPCA formula increase has gone into effect in 12 of the 19 years since 1994; the largest increase was 3.8 percent. An amount lower than the formula amount went into effect in the other 7 years due to Presidents' alternative pay plans and laws passed by Congress. The smallest increase was 0 percent, during the freeze on annual pay adjustments in 2011 and 2012.

Locality adjustments are designed to reduce the gap between federal and nonfederal pay in each locality to no more than 5 percent based on surveys to be conducted by BLS. FEPCA specifies that locality pay adjustments are to be recommended by a Pay Agent designated by the President, which is to consider the views of employee organizations: The President's Pay Agent recommends annual comparability payment amounts, establishes and modifies pay localities as it considers appropriate, and submits an annual report to the President on these items. The Secretary of Labor and the Directors of OMB and OPM serve as the Pay Agent. In making its recommendations, the President's Pay Agent considers the views and recommendations of a Federal Salary Council and other employee organizations. The Federal Salary Council makes annual recommendations to the President's Pay Agent on locality pay adjustments, including the establishment or modification of pay localities, the coverage of salary surveys used to set locality pay, the process for making pay comparisons, and the level of comparability payments that should be made. The Council is to be comprised of three experts in labor relations and pay policy and six representatives of employee organizations representing large numbers of GS employees.
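Both adjustments described above reduce to simple arithmetic. The sketch below reproduces the ECI-minus-0.5-percentage-point formula using the 1.7 percent example from the text, and shows how a locality percentage is applied to base pay; the GS-11, step 1 base rate and the locality percentages in the example are assumptions chosen to be consistent with the 2012 figures cited above, not values taken from this report.

```python
# Sketch of the two GS pay adjustments discussed above.

def across_the_board_increase(eci_12_month_pct: float) -> float:
    """FEPCA formula: ECI wage-and-salary increase minus one-half of one percentage point."""
    return eci_12_month_pct - 0.5

# Example from the text: a 1.7 percent ECI increase yields a 1.2 percent adjustment.
print(f"{across_the_board_increase(1.7):.1f}")  # 1.2

def locality_adjusted_pay(base_pay: float, locality_rate_pct: float) -> int:
    """Annual pay: base pay increased by the locality percentage, rounded to whole dollars."""
    return round(base_pay * (1 + locality_rate_pct / 100))

# Assumed inputs for illustration: a 2012 GS-11, step 1 base rate of $50,287 and
# locality rates of about 14.16 percent (Rest of U.S.) and 35.15 percent (San
# Francisco) reproduce the figures cited above ($57,408 and $67,963).
print(locality_adjusted_pay(50_287, 14.16))  # 57408
print(locality_adjusted_pay(50_287, 35.15))  # 67963
```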
To recommend locality pay adjustments, the President’s Pay Agent compares the annual GS base pay rates of federal workers in each area to the annual pay rates of nonfederal workers in the same area for the same levels and types of work. The sidebar provides details on this process. The target locality pay is the amount that reduces these differences to 5 percent. The surveys and models used for making these pay comparisons have changed somewhat between the passage of FEPCA in 1990 and the 2011 President’s Pay Agent Report (which recommends pay adjustments for 2013 and is the most current report available). Some changes were initiated by BLS, and some changes were made in response to concerns expressed by the Federal Salary Council or President’s Pay Agent. For example, BLS changed the survey used to measure nonfederal pay in 1996; the Federal Salary Council and President’s Pay Agent expressed concerns, and BLS worked together with OPM and OMB to improve the suitability of the new survey for recommending locality payments. Improvements were phased in from 2002 to 2011. Changes are summarized in app. II. As specified in FEPCA and similar to the process for across-the-board adjustments, the President may decide to either provide locality pay adjustments based on the Pay Agent’s recommendation, or provide for alternative pay adjustments based on national emergency or serious economic conditions affecting the general welfare. Additionally, Congress may legislate an average percent increase that is different from the Pay Agent’s recommendation or the President’s alternative plan; this is not part of the process specified by FEPCA. For 1994, the first year that locality payments were made, FEPCA specified that the locality increase should be not less than one fifth of the amount needed to reduce the pay disparity to 5 percent. This amount, providing a 3.95 percent average locality pay rate for the average GS employee as recommended by the Pay Agent, went into effect. In subsequent years through 2012, the effective increase has usually been far less than the one recommended by the Pay Agent, either due to a President’s alternative pay adjustment or to a law passed by Congress. Nonetheless, some locality pay increase has been provided every year since locality pay was implemented in 1994 (except during the pay freeze in 2011 and 2012), and reported disparities between federal and nonfederal pay by locality have been reduced. The President’s Pay Agent reported that pay disparities were lower in 2011 than in 1994 in 16 of the 21 pay localities that existed in both of those years. Federal Salary Council members and OPM officials we spoke with said that FEPCA was successful in its goal of improving federal pay setting for large metropolitan areas by more closely aligning pay to local labor markets. Figure 2 summarizes pay adjustments during the past 6 years, illustrating the differences between the President’s Pay Agent recommendations and the final effective amounts. These differences were driven primarily by locality pay, since the across-the-board adjustments required under the FEPCA formula were smaller and were provided in some years, while the recommended locality adjustments were larger and were not provided. For example, for 2007, the President’s Pay Agent recommended a 1.7 percent across-the-board increase to comply with the formula in FEPCA, and a 7 percent average locality increase based on BLS salary survey data. 
The President provided for the 1.7 percent across-the-board increase but limited the average locality increase to the alternative amount of 0.5 percent. As another example, for 2012, the FEPCA process specified a 1.1 percent across-the-board increase and an average 18.5 percent locality increase, but annual pay adjustments were frozen instead. The Pay Agent had reported that in 2010 (the reference year for setting 2012 pay), taking both across-the-board and locality pay into account, the average federal-nonfederal pay gap was 24 percent. The approximately 20 percent overall average increase recommended by the Pay Agent for 2012 would have lowered the pay disparity to FEPCA's target of 5 percent.

The pay increases and awards available to GS employees are designed to recognize individual performance to varying degrees. Across-the-board and locality pay increases, which are given to all covered employees nearly every year, are not linked to performance at all. Awards such as suggestion/invention awards and superior accomplishment awards are designed to recognize performance without being linked specifically to performance ratings. Three pay increases and monetary awards available to GS employees are linked to performance ratings as determined by agencies' performance appraisal systems:

Within-grade increases are periodic increases in a permanent employee's rate of basic pay from one step of a grade to the next higher step within the grade.

Ratings-based cash awards are lump sum cash payments that are designed to recognize performance.

Quality step increases are faster-than-normal step increases that are designed to recognize excellence in performance. Agencies are permitted to provide quality step increases to eligible employees under 5 U.S.C. § 5336.

Factors that are to affect GS employee eligibility for these pay increases and awards are specified in legislation and regulations and clarified in OPM guidance. Agencies' performance appraisal systems range from pass/fail systems with two summary rating levels to systems with five summary rating levels. All the systems used by the agencies include level 3, "fully successful," which is the pass level for a pass/fail system. In practice, based on our analysis of CPDF data from fiscal year 2011, the degree to which individual performance drove receipt of these pay increases and awards for employees in the GS pay plan varied. Of the three pay increases and awards we analyzed, within-grade increases were the least strongly linked to performance. Ratings-based cash awards were more strongly linked to performance depending on the rating system the agency used, and quality step increases were also more strongly linked to performance.

Within-grade increases were the least strongly linked to performance of the three pay increases and awards we analyzed, in accordance with their design. As noted in table 1, agencies are required to provide within-grade increases to employees whose performance is at least "fully successful" and who have finished their waiting period. Over 99 percent of employees in the GS pay plan received performance ratings at or above "fully successful" in fiscal year 2011. Thirty-nine percent received within-grade increases, comprising nearly all the employees who completed their waiting period.

Ratings-based cash awards were more strongly linked than within-grade increases to performance. All GS pay plan employees may receive ratings-based cash awards every year (unlike within-grade increases), so frequency limits are not a primary determinant of who receives them.
In fiscal year 2011, the degree of linkage of awards with performance ratings varied by the type of appraisal system used by the agency. In fiscal year 2011, 81 percent of employees in the GS pay plan were covered by 5-level rating systems and other systems that allowed for distinctions between "fully successful" and higher levels of performance. Ratings-based cash awards for these employees were given at higher rates to employees with better performance. For example, for the 5-level system, which covered 63 percent of GS employees, awards were given to 65 percent of employees with "outstanding" ratings, 58 percent of employees with ratings "between outstanding and fully successful," and 24 percent of employees with "fully successful" ratings. Along with the performance rating received, agency criteria were used to determine who received awards. As noted in table 1, an agency should identify any other criteria to be considered when making award recommendations and decisions, including any other awards or personnel actions that should be taken into consideration such as time off, a quality step increase, or a recent promotion. In accordance with OPM regulations, employees with higher ratings are to receive larger ratings-based awards, and award patterns reflected this distinction in 2011, as shown in figure 3. Employees who received "outstanding" ratings within the 5-level system received the largest awards. In fiscal year 2011, about 19 percent of employees in the GS pay plan were covered by a pass/fail rating system or another system that did not allow for distinctions in performance above the "fully successful" level. Over 99 percent of employees in these systems received a "fully successful" rating in fiscal year 2011, while only 31 percent received a ratings-based award, meaning that most decisions not to provide awards were made based on criteria other than ratings. Performance ratings and agency criteria, including performance-related criteria, were used to determine who received awards.

Quality step increases were also more strongly linked to performance than within-grade increases. As shown in table 1, GS employees must perform at their agency's highest possible level to be eligible to receive a quality step increase. About 49 percent of employees received the highest possible rating their agency's system allowed in fiscal year 2011. Of those employees, about 7 percent received a quality step increase. Unlike within-grade increases, the waiting period for quality step increases is 1 year for all employees, eliminating the waiting period as a primary determinant for receiving quality step increases; rather, decisions were made based on performance rating and agency criteria, including performance-related criteria.

Figure 4 illustrates the percentage of employees receiving each type of increase and award, the average amounts of the increases and awards in dollars and as a percent of the recipient's pay, and the cost to the government of ratings-based pay increases and awards for GS employees for fiscal year 2011.

OPM's role with respect to awards and increases includes providing policy direction to agencies, including regulations, reporting on agencies' use of awards and increases, and evaluating agencies' linkage of awards and increases with results. Agencies, in turn, must ensure they have met statutory and regulatory requirements and may develop agency-specific criteria for providing quality step increases and cash awards.
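The eligibility rules described above for the three ratings-linked increases and awards can be restated as simple conditions. The sketch below is an illustrative restatement, not any agency's actual decision logic; the agency_criteria_met flag is a placeholder for whatever additional agency-specific criteria apply.

```python
# Illustrative restatement of the eligibility conditions described above. "Fully
# successful" is the minimum passing rating; "highest rating" means the top level the
# agency's appraisal system allows. agency_criteria_met is a placeholder for any
# additional agency-specific criteria.

def eligible_for_within_grade_increase(rating_at_least_fully_successful: bool,
                                       waiting_period_complete: bool) -> bool:
    return rating_at_least_fully_successful and waiting_period_complete

def eligible_for_ratings_based_award(rating_at_least_fully_successful: bool,
                                     agency_criteria_met: bool) -> bool:
    # Awards may be given every year, so no multiyear waiting period applies.
    return rating_at_least_fully_successful and agency_criteria_met

def eligible_for_quality_step_increase(has_highest_rating: bool,
                                       one_year_since_last_qsi: bool,
                                       agency_criteria_met: bool) -> bool:
    return has_highest_rating and one_year_since_last_qsi and agency_criteria_met

# Example: an employee with the agency's highest rating who completed the 1-year wait
# and meets agency criteria would be eligible for a quality step increase.
print(eligible_for_quality_step_increase(True, True, True))  # True
```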
According to OPM officials, awards regulations are highly decentralized because the statutes provide agency heads, not OPM, with the authority to grant awards. Agency heads may grant quality step increases within the limits of available appropriations and regulatory requirements. Policy direction. To help agencies understand how to administer pay increases and awards, OPM issues regulations and supporting memoranda and posts fact sheets, frequently asked questions, and other resource documents on its website. Topics have included approaches to calculating ratings-based cash awards, tax issues for awards, how the timing of quality step increases affects within-grade increases, and recent limitations on awards given budgetary constraints. According to OPM officials, OPM responds to agency questions about guidance as needed. Reporting. OPM provides agencies with an annual Federal Award Statistics report on cash awards, time-off awards, quality step increases, and other awards received by GS and other employees. According to OPM officials, OPM uses the report to show trends and compare usage of awards between agencies and across the government. OPM also uses the report data to help inform its decisions about awards policy and monitor agency compliance with the policy, such as limitations on awards usage. Evaluation. OPM evaluates selected agencies’ human capital management systems as part of its broader strategy for maintaining human capital accountability. As part of these evaluations, OPM determines whether an agency’s human capital system provides and clearly communicates linkages between employee performance expectations, performance recognition through increases and awards, and the agency’s mission. OPM also reviews a sample of case files to check that the awards granted meet the requirements of the law and regulations, and assists agencies in leading their own evaluations. OPM officials said that they have identified the following issues in regard to pay increases and awards: Some agencies tried to circumvent limitations on award amounts by issuing several incremental awards within a short time period. Some agencies granted quality step increases to compensate for low award budgets. Some agencies’ human capital management systems did not link individual performance expectations and recognition through pay increases and awards to the accomplishment of specific mission-related goals or milestones. When OPM determines that an agency violated the law or regulations, such as circumventing award limitations by issuing several incremental awards within short periods of time, it requires the agency to take corrective action and respond to OPM with evidence of how it addressed or plans to address the violation within 60 days. For example, according to an OPM official, corrective action may result in the agency recovering the award from the recipient and correcting the documentation for the award. When OPM observes an issue with an agency’s award implementation that does not violate regulations, OPM may recommend to the agency improvements that could be made. For example, when OPM determines that an agency has granted quality step increases to compensate for a low award budget, it recommends that the agency review its policies for granting pay increases and awards to ensure the policies comply with the intent of the laws and regulations. According to an OPM official, OPM requires an agency to respond to the recommendations made, but the agency is not required to take action on addressing the issue.
The different study designs used by the authors of six studies resulted in varying conclusions on how federal pay differed from private sector or nonfederal pay. As shown in table 2, conclusions varied on which sector had the higher pay (which does not include benefits) and the size of pay disparities. All but one of the studies estimated the difference in pay after controlling for some personal and job-related attributes that can affect pay levels such as education and locality. This remaining difference is sometimes called the unexplained difference because it persists after controlling for attributes that can affect pay. However, the overall pay disparity number does not tell the whole story; each of the studies that examined whether differences in pay varied among categories of workers found such variations (see table 2). For example, CBO found that federal workers with graduate and professional degrees were paid less in comparison to the private sector, while workers without college degrees were paid more. Importantly, all of the study authors acknowledged that the data they used in their analyses had limitations which could affect their findings. Any comparison of the studies needs to take these data limitations into account. For example, studies that used the Census Bureau’s Current Population Survey (CPS) were unable to directly control for years of work experience given this measure is not available in the CPS; some of the authors said that work experience is an attribute that affects how much a person is paid. Also, it was acknowledged that many federal jobs may not have equivalents in the private sector. The studies used three basic approaches to analyze differences in pay, as shown in table 3. Each author chose the approach they thought would best describe differences in pay. The Pay Agent is mandated by law to compare the rates of pay under the GS system with the rates of pay generally paid to nonfederal workers for the same levels of work within each pay locality, as determined on the basis of appropriate BLS surveys. The studies’ differing conclusions on the overall pay disparity between federal and private or nonfederal workers were affected by their basic approaches—human capital, job-to-job, and trend analysis. Across these approaches, data sources and types of attributes controlled for differed. Within each approach, conclusions differed due to studies’ specific methodologies—specific attributes controlled for and statistical methods used. Basic approaches: Across the three basic approaches, the differences in the data sources and types of attributes controlled for (personal or job-related) contributed to the differing conclusions. Data sources: The type of approach the study authors chose influenced the data sources they used. Studies using the human capital approach used data from the CPS to determine the pay for federal and private sector workers. Studies using the job-to-job approach used data from BLS’s National Compensation Survey (NCS) to determine pay for nonfederal (Pay Agent) and private sector (POGO) workers and data from OPM to determine pay for federal workers. For the trend analysis approach, Edwards used data from BEA’s national income and product accounts (NIPA) tables to determine pay for federal and private sector workers. Types of attributes: Most of the studies estimated the unexplained difference in pay, accounting for the fact that employees earn different amounts based on education, locality, and other personal and job-related attributes. However, studies using different basic approaches controlled for different types of attributes. Studies using the human capital approach controlled for attributes related to both the individual worker and the job the person occupied. Studies using the job-to-job approach controlled for only job-related attributes. The trend analysis approach did not control for attributes. Specific methodologies: Within the human capital and job-to-job approaches, the studies controlled for different specific attributes and used different statistical methods, as shown in table 4. These differences led to differing conclusions. The study authors and people with expertise in compensation issues that we interviewed differed in their views on which type of approach is most informative in comparing pay of workers across sectors. According to study authors who used the human capital approach, this approach is the standard method in the field of economics to compare workers’ pay across sectors. The overall unexplained difference between federal and private sector pay is a way to measure the extent to which the federal government may be paying more or less for the services it receives from its workers relative to what those workers could earn in the private sector. These findings could help inform policy decisions regarding the pay of federal workers. However, study authors (including those who used the human capital model) and people with expertise in compensation issues did not suggest that the human capital approach be used for setting an individual’s rate of pay. They explained that some of the personal attributes that are associated with analyzing differences in pay using a human capital approach are demographic in nature (e.g., race, gender) and not work-related. OPM officials added that they are not aware of any employers that use the human capital approach to set pay for their employees. The President’s Pay Agent and POGO used the job-to-job approach in their analyses of pay differences, not the human capital approach. According to OPM officials who serve as staff to the President’s Pay Agent, employees with the same human capital characteristics can choose to work in markedly different jobs with large variations in pay. POGO and some people with expertise in compensation issues said that the fundamental concept of setting pay based on the job, without taking account of the personal characteristics of individuals in similar jobs, is the most appropriate approach. They said it is not appropriate to pay individuals differently according to personal attributes, such as education or job experience, if they hold the same job. However, others said that matching individuals by occupation and level of work involves some subjective judgment and lacks transparency, which makes it difficult for other interested parties to understand the analysis. The President’s Pay Agent has stated that it has serious concerns about a process that requires a single percentage adjustment in the pay of all white-collar civilian federal employees in each locality pay area without regard to the differing labor markets for major occupational groups, and it believes that reforms of the GS system should be considered. Specifically, the Pay Agent stated that the underlying model and methodology for estimating pay gaps should be reexamined to ensure that private sector and federal sector pay comparisons are as accurate as possible.
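As a rough illustration of the human capital approach described above, the sketch below regresses log pay on a federal-sector indicator and a few worker attributes; the coefficient on the indicator corresponds to the unexplained pay difference. The data, variable names, and specification are hypothetical simplifications, not the models estimated in the selected studies, which used CPS microdata, many more controls, and different statistical methods.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical worker-level records; the selected studies used CPS microdata.
workers = pd.DataFrame({
    "pay":        [52000, 61000, 48000, 75000, 58000, 83000, 45000, 69000],
    "federal":    [1, 1, 0, 0, 1, 0, 0, 1],   # 1 = federal, 0 = private sector
    "educ_years": [16, 18, 14, 18, 16, 20, 12, 18],
    "experience": [10, 15, 8, 20, 12, 18, 5, 22],
})

# The coefficient on "federal" approximates the pay difference that remains
# unexplained after controlling for the listed attributes (in log points).
model = smf.ols("np.log(pay) ~ federal + educ_years + experience", data=workers).fit()
print(model.params["federal"])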
Five studies found a wide range of disparities in benefits as part of total compensation (pay and benefits) between the federal and private sector workforces, as shown in table 5. (The President’s Pay Agent Report did not include an analysis of benefits as part of total compensation.) Most studies presented the disparity in terms of total compensation, not just the benefits portion, because the levels of some benefits—for example, most retirement benefits—are a function of pay rates, years of service, and type of plan. The five studies included benefit comparisons in an effort to capture the cost of benefits to the federal government. As with their analyses of pay, the study authors acknowledged that limitations in data affected their analyses of total compensation and could affect their findings, as discussed following table 5. These limitations need to be taken into account when comparing the studies. Additionally, the studies do not all analyze the same group of federal workers; for example, POGO analyzed workers in 35 selected occupations. The wide range of estimates between the studies is due to the different data sources, types of benefits analyzed, and specific methodologies used. Data sources. Study authors agreed that available data were less adequate for comparing federal to private sector benefits than pay. Benefits data at the individual level are not available from a single source, so the studies used multiple sources. This makes it challenging to compare across the sectors. For example, some data sources, such as the CPS, ask workers questions about their pay, but do not ask about the cost of their benefits because workers generally do not know the monetary value of their benefits. As a result, study authors used data sources such as the NCS that ask employers questions about the cost of their workers’ benefits. Additionally, different studies drew from different data sources, contributing to the range of different results. Biggs/Richwine used BLS’s NCS data, specifically the Employer Costs for Employee Compensation portion for private sector worker data. For federal workers, they used the OPM/OMB civilian position full fringe benefit cost factor—a percent factor describing the cost of benefits relative to salaries. To capture benefits the OPM/OMB source did not cover, Biggs/Richwine used OPM’s Federal Civilian Work Force Statistics: Work Years and Personnel Costs Report to determine paid leave, and the Annual Social and Economic Supplement of the CPS to estimate job security. POGO used NCS data on private sector workers. For federal workers, it used the OPM/OMB civilian position full fringe benefit cost factor as Biggs/Richwine did. CBO used more detailed data from the NCS and OPM for private sector and federal workers, respectively. These data were not publicly available. Edwards and Sherk both used BEA’s NIPA data. According to BEA, this data source includes annual intra-governmental payments to amortize the accumulated unfunded liability of the Civil Service Retirement and Disability trust fund. This reduces the data’s accuracy for measuring compensation for current workers, according to the study authors that used the data. Sherk used OPM data to correct for this issue of federal retiree benefits. Benefits analyzed. The studies included different types of benefits in their analyses, contributing to the range of different results. In addition, the study authors made assumptions in determining the value of benefits.
All of the studies included health insurance, retirement benefits, and the employer portion of mandatory government benefits such as Social Security. Biggs/Richwine, CBO, and POGO (for private sector workers only) included paid leave, while Sherk did not. Biggs/Richwine included job security, asserting that federal workers are less likely to experience periods of unemployment than private sector workers and so can expect a higher income for a given salary over the course of a year. All of the studies relied on estimates of future benefits, which requires assumptions to be made about the present value of the benefit, which may introduce uncertainty in the estimates. According to BEA, estimates of the present value of future benefits are inherently dependent on assumptions about the discount rate, participant separation rates, retirement ages, mortality, and even future pay increases and future inflation. As a result, the amount of money that has to be set aside today to pay for tomorrow’s benefits could be different. Specific methodologies. It was not possible to estimate the cost of benefits directly while controlling for differences between the federal and private workforces, so most authors used various indirect methodologies. The indirectness increased uncertainty, and the wide range of methodologies led to different results. CBO developed a model to estimate the relationship between federal workers’ pay and the cost of the benefits they received, and an analogous model for private sector workers. CBO imputed employee benefits using those models, then compared benefits for federal and private sector workers controlling for personal and job-related attributes, just as they did for pay, to estimate the portion of the difference in total compensation unexplained by attributes. CBO was the only study to use a model that allowed for varying benefits-to-pay ratios for different pay levels. Sherk calculated the difference in average total compensation for federal and private sector workers. He used his estimates of the unexplained difference in pay from the human capital model and applied this to the difference in average total compensation. He assumed the unexplained difference in total compensation was the same as the unexplained difference in pay. Biggs/Richwine used different benefits-to-pay ratios for federal workers and private sector workers. They applied these ratios to the unexplained differences in pay from their human capital model to obtain the unexplained difference in total compensation. Biggs/Richwine assumed the unexplained difference in total compensation was the same as the unexplained difference in pay. POGO used different benefits-to-pay ratios for federal workers and private sector workers. It applied these ratios to differences in pay for the selected occupations in each sector to obtain the percent difference in total compensation for these occupations. Edwards calculated the difference in average total compensation for federal and private sector workers. He did not control for attributes between the workers. The findings of the selected studies comparing federal and private sector pay and total compensation varied because they used different approaches, methods, and data. When looking within and across the studies, it is important to understand these differences because they impact how the studies can be interpreted. On the one hand, the human capital approach compares pay for individuals taking into account personal attributes such as education and job experience. 
Study authors who used this approach said that analyzing federal and private sector workers’ pay was a way to measure the extent to which the federal government may be overpaying or underpaying its employees compared to what they could earn in the private sector. On the other hand, the job-to-job approach compares pay for similar jobs on such job-related attributes as occupation and level of work rather than personal attributes. The President’s Pay Agent, which used this approach, examined how pay for GS and nonfederal jobs compared for the same occupations and levels of work within the same locality pay areas with the goal of reducing existing pay disparities. Simply put, the differences among the selected studies are such that comparing their results to help inform pay decisions is potentially problematic. Given the different approaches of the selected studies, their findings should not be taken in isolation as the answer to how federal pay and total compensation compare with other sectors. As stated earlier, we have reported on the importance of considering the skills, knowledge, and performance of federal employees as well as the local labor market in making pay decisions. The President’s Pay Agent has recommended that the underlying model and methodology for estimating the pay gaps be reexamined to ensure that private sector and federal sector pay comparisons are as accurate as possible. As a step in this direction, the administration recommended in its September 2011 deficit reduction proposal that Congress establish a Commission on Federal Public Service Reform composed of members of Congress, representatives from the President’s Labor-Management Council, members of the private sector, and academic experts to identify fundamental reforms for the federal government’s human capital systems including compensation reform. As of June 2012, such a commission has not been established. We provided a draft of this report to the Secretary of Commerce (for Census), the Commissioner of BLS, and the Directors of BEA and OPM for their review and comment. The Census Bureau had a technical comment on the draft report, which we incorporated into the final report. BEA and BLS had no comments on the draft report. OPM provided technical comments on the draft report, which we incorporated as appropriate. We provided applicable sections of the draft report to the authors of the selected compensation comparison studies for their review and comment. Biggs/Richwine and CBO provided technical comments, which we incorporated as appropriate. Edwards and Sherk did not have any comments on the draft section. POGO provided written comments (see app. IV). In its letter, POGO stated it concurred with our draft finding that many factors hinder public and private sector pay comparisons, such as a lack of detailed data. POGO also suggested that we analyze OPM federal-nonfederal salary comparisons as part of our final report. We believe this information is already addressed in other sections of the report, which POGO did not receive for comment. In these sections, we discuss in detail how annual pay adjustments are determined including the President’s Pay Agent process, which uses the comparisons referred to by POGO. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter.
At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Commerce; the Commissioner of BLS; the Directors of BEA, Census, and OPM; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in app. V. This report examines (1) how annual pay adjustments for the General Schedule (GS) system are determined; (2) the extent to which the pay increases and awards available to GS employees recognize individual performance, and how the Office of Personnel Management (OPM) provides oversight of pay increases and awards; and (3) how selected studies compare federal and private sector pay and total compensation and the factors that may account for the different findings. To examine how the GS annual across-the-board and locality pay adjustments are determined, we reviewed legislation, OPM regulations, executive orders, Presidents’ alternative pay plans, President’s Pay Agent Reports, Federal Salary Council recommendations, OPM and Bureau of Labor Statistics (BLS) documents and reports, and reports by the Congressional Budget Office (CBO) and Congressional Research Service. We also examined how the methodology for determining locality pay has changed from the start of locality pay to the present. We interviewed selected members of the Federal Salary Council and its working group; the Council is to be made up of six representatives of federal employee groups and three experts in labor relations, and makes annual recommendations to the President’s Pay Agent. We interviewed BLS officials, OPM officials who are knowledgeable about federal pay policy and serve as staff to the President’s Pay Agent, and people with expertise in compensation issues including former federal officials experienced with pay and benefits issues. To provide background information illustrating a range of pay areas, we selected localities including the lowest paid locality, highest paid locality, and other localities to include a range of pay rates, population sizes, and geographic regions. To determine the extent to which pay increases and awards recognize individual performance, we analyzed legislation and OPM regulations on pay increases and awards available to employees in the GS pay system and identified those pay increases and awards that are determined in part by an individual’s performance rating as measured by the agency’s performance appraisal system. These pay increases and awards are: within-grade increases, quality step increases, and ratings-based cash awards. We recognize that there are other types of pay increases and awards that reflect an individual’s contributions, such as suggestion/invention and superior accomplishment awards, and pay increases that do not reflect an individual’s performance at all, including across-the-board and locality pay adjustments. We identified eligibility requirements outlined in the legislation and regulations and clarified in OPM guidance that can affect a GS employee’s eligibility for the increase or award, such as a waiting period given the individual’s position in the pay grade, frequency of receiving an increase or award, and agency-specific criteria.
To provide statistics on how the pay increases and awards were distributed among GS employees, we analyzed data from OPM’s Central Personnel Data File (CPDF) for fiscal year 2011. The data we examined included only federal employees in the GS pay plan. The GS classification and pay system includes several pay plan codes: GS (covered by pay system established under 5 U.S.C. chapter 53, subchapter III); GM (covers employees covered by the Performance Management and Recognition System termination provisions of Pub. L. No. 103-89); GL (covers law enforcement officers who receive special base rates at grades 3-10 under section 403 of FEPCA); GP (covers GS physicians and dentists paid market pay under 38 U.S.C. § 7431(c)); and GR (covers physicians and dentists covered by the Performance Management and Recognition System termination provisions who are paid market pay under 38 U.S.C. § 7431(c)). In addition to the GS pay plan, the GM and GL pay plans are used in federal-nonfederal pay comparisons to set locality pay. For the purposes of this analysis, we excluded the GM and GL pay plans because the GS pay plan covers the majority of the individuals in the GS, GM, and GL pay plans. We also excluded the GP and GR pay plans since individuals in these pay plans are no longer limited to GS rates of pay and they receive market pay under a different pay system. We analyzed CPDF data for employees in the GS pay plan in the aggregate on the number, percentage, and dollar amount of quality step increases, within-grade increases, and ratings-based cash awards; the amount of these increases and awards as a portion of the GS payroll (total adjusted basic pay for all employees in the GS pay plan); and the distribution of these increases and awards by rating pattern and rating levels. For the award/increase amounts as percentages of recipients’ pay, we excluded employees whose adjusted basic pay amount was missing. For the calculations based on ratings, we excluded employees who were coded in CPDF as “not rated”. The not rated code applies to an employee who has not yet received a rating of record under the agency performance appraisal system (e.g., someone newly hired). We also excluded employees whose ratings were missing due to data errors. For calculations based on rating levels or patterns (e.g., 5-level system), we excluded employees who were coded as not being covered by a performance appraisal system and generally do not have their performance appraised. We also excluded employees whose rating patterns were missing from the data due to data errors. To help determine the reliability and accuracy of the CPDF data elements used, we checked the data for reasonableness and the presence of any obvious or potential errors in accuracy and completeness. For example, we excluded employees who were coded as receiving an increase or award in error (e.g., individuals who received a level 1 or 2 rating and a within-grade increase or ratings-based cash award) from our data. We also reviewed past GAO analyses of the reliability of CPDF data and interviewed OPM officials knowledgeable about the data to discuss the data’s accuracy and steps OPM takes to ensure they are reliable. For example, in its checks of the data, OPM excludes data where the dollar value is zero for ratings-based cash awards and within-grade and quality step increases. Also, for within-grade and quality step increases, OPM checks to make sure values for current and prior adjusted basic pay exist and the difference is greater than zero.
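The exclusions described above can be thought of as a sequence of record-level filters. The following pandas sketch illustrates the general pattern with hypothetical records and field names; it is not the actual CPDF layout or the exact edit checks we or OPM applied.

import pandas as pd

# Hypothetical records; actual CPDF data element names and codes differ.
cpdf = pd.DataFrame({
    "adjusted_basic_pay": [65000.0, None, 72000.0, 58000.0],
    "rating_level":       ["5", "3", "not rated", "1"],
    "within_grade_incr":  [True, False, False, True],
})

analysis = cpdf[cpdf["adjusted_basic_pay"].notna()]            # drop records with missing pay
analysis = analysis[analysis["rating_level"] != "not rated"]   # drop employees not yet rated
# Drop apparent coding errors, e.g., a level 1 or 2 rating paired with a within-grade increase.
coding_error = analysis["rating_level"].isin(["1", "2"]) & analysis["within_grade_incr"]
analysis = analysis[~coding_error]
print(len(analysis))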
On the basis of these procedures, we believe the data we used from the CPDF are sufficiently reliable for the purpose of this report. To describe how OPM provides oversight of pay increases and awards, we collected and analyzed OPM guidance to agencies on administering relevant pay increases and awards including regulations, memoranda, reports, fact sheets, and frequently asked questions. We interviewed OPM officials responsible for federal pay policies to discuss the implementation of the guidance and monitoring of agencies’ use of increases and awards through reports and other means, and we interviewed OPM officials responsible for conducting human capital management evaluations at agencies on pay increases and awards to determine how they evaluate agencies’ linkage of pay increases and awards with organizational results and monitor the overall GS system, among other things. To review selected studies that compare federal and private sector pay and total compensation and describe factors that help account for the different study findings, we reviewed the studies, summarized each study’s methodologies and key findings, and confirmed the accuracy of our summaries with the authors. We compared and contrasted the differences between the approaches, methodologies, and data sources of the selected studies. We interviewed the selected study authors to obtain their views on the various methodologies and data sources available, why they chose the ones they used, and their conclusions based on their work. From July through December 2011, we conducted a detailed literature review of academic journals, agency and organization publications, and grey literature to identify the selected studies. We applied three criteria for study selection to the results—(1) studies that were published/issued since 2005; (2) studies that include original analysis; and (3) studies that have the explicit and primary purpose of comparing federal and private sector pay and total compensation. Using these criteria, we identified at that time the following five studies as our proposed set to review (with the option to add other studies that may be issued during the course of our engagement and meet our criteria), as listed below: Comparing Federal and Private Sector Compensation, Andrew Biggs and Jason Richwine, American Enterprise Institute for Public Policy Research, June 2011. (Co-author Richwine is from The Heritage Foundation.) Federal Pay Continues Rapid Ascent, Chris Edwards, The Cato Institute, August 2009. Report on Locality-Based Comparability Payments for the General Schedule, Annual Report of the President’s Pay Agent 2010, The President’s Pay Agent, March 2011. Bad Business: Billions of Taxpayer Dollars Wasted on Hiring Contractors, The Project On Government Oversight, September 2011. Inflated Federal Pay: How Americans Are Overtaxed to Overpay the Civil Service, James Sherk, The Heritage Foundation, July 2010. All of the selected studies except for the President’s Pay Agent compared federal to private sector pay and total compensation. The President’s Pay Agent compared federal to nonfederal pay (not benefits) and defined nonfederal as private sector, state government, and local government. We decided to include the President’s Pay Agent Report as one of our selected studies given that it plays a major role in the overall discussion of federal pay comparability. The President’s Pay Agent encompasses the Secretary of Labor and Directors of OPM and the Office of Management and Budget (OMB).
To inform our understanding of the Pay Agent’s report and process, we interviewed OPM officials who are staff to the Pay Agent, members of the Federal Salary Council and its working group including officials from the National Treasury Employees Union and the American Federation of Government Employees, and officials at BLS, which provides the nonfederal data used for the Pay Agent’s analysis. Through our literature review, we also identified articles and papers that compare compensation in other sectors (state and local government to private sector, or industry to industry). Additionally, we identified discussions of the selected studies’ findings and methodologies and of the issues of federal and private sector pay and total compensation comparison in general to further inform our review of the studies. We interviewed a number of individuals chosen for their expertise in compensation issues to obtain their views on the data sources for analyzing compensation and to provide a general context for the issues involved in comparing federal and private or nonfederal pay and total compensation. The findings regarding the selected studies are not based on input from these individuals. We identified these individuals, who represent a wide range of perspectives and experiences related to compensation issues, through our literature review, background research on the topic, and recommendations from the study authors and other individuals knowledgeable about compensation issues. The selected individuals, some of whom were selected authors of the discussions noted above, included a university professor who has done research on compensation issues across sectors, a private sector compensation consultant, a staff member who researches compensation at an organization with a policy focus, and former senior federal officials who are experienced in federal pay and benefits issues. We interviewed officials from the Bureau of Economic Analysis (BEA), BLS, and Census Bureau to discuss how these agencies’ data are used to measure federal and private or nonfederal pay, compensation, or benefits, and limitations of their data or surveys. We also interviewed officials from OPM involved in federal pay policies. We asked everyone we interviewed about their views on the strengths and limitations of the data sources used in the studies. We also asked everyone we interviewed, as applicable, to identify any additional studies that address our criteria for study selection. They did not identify any additional studies that met our criteria, but provided additional information, such as background articles. However, in January 2012, after our literature review was concluded, CBO issued a report: Comparing the Compensation of Federal and Private-Sector Employees, Congressional Budget Office, January 2012. We included this study in our review because it met our criteria. This brought the total number of studies up to six. We interviewed the authors of the CBO study to obtain their views on the various methodologies and data sources available, why they chose the ones they used, their conclusions based on their work, and our understanding of their work. We did not examine the reliability or the appropriateness of the approaches, methods, and data used by the six selected studies in our scope, and we did not exclude any study on the basis of methodological quality. We conducted this performance audit from July 2011 to June 2012 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Even though the full locality payments recommended by the President’s Pay Agent have not been provided after locality pay was implemented in 1994, some locality increase has been provided each year since that time except during the pay freeze in 2011 and 2012. The President’s Pay Agent reported that pay disparities were lower in 2011 than in 1994 in 16 of the 21 pay localities that existed in both of those years. Figure 5 shows the relative pay rates for a GS-11 employee (approximately the midpoint grade level) in San Francisco and in the Rest of U.S. (the residual locality for areas not included in one of the other pay localities) and nonfederal equivalents based on the President’s Pay Agent Reports. In 1994, the pay disparity between federal and nonfederal workers in San Francisco at the GS-11 level was 30 percent, which decreased to 26 percent by 2011 (the most recent year for which disparity data is available). In 1994, the pay disparity between federal and nonfederal workers in the Rest of U.S. locality at the GS-11 level was 19 percent, which increased to 22 percent by 2011. There have been several changes to the surveys and models used for locality pay setting between the passage of the Federal Employees Pay Comparability Act (FEPCA) in 1990 and the Pay Agent process and report for 2011, the most current report available. Changes are illustrated in figure 6 and additional information is below. From 1991 to 1996, BLS conducted the Occupational Compensation Survey Program (OCSP) to collect data on pay of nonfederal workers. OCSP used a fixed list of 3 to 8 positions in each of the five PATCO categories (Professional, Administrative, Technical, Clerical, and Other White-Collar) to represent the range of different white collar jobs. In 1996, there were 26 different positions - for example, Scientist (a professional position) and Key Entry Operator (a clerical position). Each position had one or more levels - for example, Scientist I to Scientist VIII; Key Entry Operator I and Key Entry Operator II. BLS referred to a particular position at a particular level (e.g., Scientist I) as a “job.” BLS asked surveyed establishments to identify positions they had that corresponded to one of the representative jobs. BLS and OPM worked together to write, test, and maintain survey job descriptions tied to a single GS grade level. In 1996, BLS stopped conducting the OCSP and started conducting the National Compensation Survey (NCS), which uses probability sampling of jobs. BLS randomly selected positions at surveyed nonfederal establishments and determined which Standard Occupational Classification System job, PATCO category, and GS grade corresponded to the selected jobs. The Employment Cost Index (ECI) and a benefits survey were also merged into NCS. These changes were made to reduce costs and respondent burden and expand occupational coverage. The President’s Pay Agent began reviewing the NCS in 1996, with input from the Federal Salary Council. During the time of their review, they used OCSP data, aged to a common reference date based on the ECI, to calculate pay disparities and recommend locality pay. 
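Because the OCSP data described above were aged to a common reference date based on the ECI, a minimal sketch of that kind of adjustment follows; the survey value and growth factor are hypothetical, and the Pay Agent’s actual calculations are more involved.

# Illustrative only: hypothetical survey value and ECI-based growth factor.
surveyed_pay = 48000.0            # nonfederal pay observed at an older survey reference date
eci_growth_to_reference = 0.032   # cumulative ECI growth from the survey date to the common reference date
aged_pay = surveyed_pay * (1 + eci_growth_to_reference)
print(round(aged_pay))            # pay expressed as of the common reference date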
In 1998, they determined that the NCS was not suitable for use without improvements, and a working group with representatives of OPM, BLS, and OMB was formed to recommend improvements. The working group made recommendations in 1999 that led to five improvements in the NCS data. The improvements were implemented starting in 2002, at which point the Pay Agent began to phase in use of NCS data. The recommendations are outlined in figure 6 above. In 2008, the Federal Salary Council asked BLS to explore the use of additional sources of pay data so the Council could better evaluate the need for establishing additional locality pay areas, especially in areas where the NCS could not provide estimates of nonfederal pay. BLS developed a model to combine data from the Occupational Employment Statistics (OES) survey, another BLS survey, with NCS data in order to increase locality coverage. In 2010, due to budget cuts, BLS announced a reduction in the size of the NCS sample, and said that the model results from the combined surveys could still be used to calculate pay gaps. According to BLS officials, only the size of the NCS sample has changed, not the substance of what is collected, and the reduction should not affect the ability to determine levels of work. The Federal Salary Council wrote in its 2011 memo to the President’s Pay Agent that it had concerns about the reduction. For 2011, the final year when the larger NCS data set was available, the Federal Salary Council reviewed modeled results both with and without the reduction, and found concerning discrepancies (about a 5 point average difference in computed pay gaps). In its memo, the Council recommended that the Pay Agent use only NCS data for setting pay until the new model is better understood, and that the full NCS survey be reinstated. The Council wrote that it plans to continue working with OPM and BLS to study the NCS/OES model. The President’s Pay Agent wrote in its 2011 report dated March 2012 that it does not consider more funding for NCS to be feasible before exploring other options. The Pay Agent supported the Council’s plan to continue its review of the new model and to focus on the impact of dropping roughly half of the NCS sample on the volatility of the model. The Pay Agent also noted that the administration recommended Congress establish a Commission on Federal Public Service Reform composed of members of Congress, representatives from the President’s Labor-Management Council, members of the private sector, and academic experts to identify fundamental reforms for the federal government’s human capital systems including compensation reform. As of June 2012, such a commission has not been established. The six selected studies used different data sources and methodologies to analyze differences in pay between the federal and private sector or nonfederal workforces, as shown in table 6. They also varied slightly in how they defined the federal workforce and restricted their analysis of workers. Studies could control for many attributes—personal or job-related—to help explain the differences between federal and private sector pay, as shown in the previous table. The types of attributes the selected study authors controlled for depended on the type of approach used to analyze pay—human capital or job-to-job. For example, the human capital approach controls for personal attributes (e.g., education, job experience) and job-related attributes (e.g., occupation, firm size). 
The job-to-job approach involves controlling for job-related attributes (e.g., occupation, level of work) without considering the personal attributes of the workers. The trend analysis approach does not control for any attributes. Attributes such as occupation, level of work, firm size, locality, education, and job experience were considered relevant by several of the studies’ authors and people with expertise in compensation issues that we interviewed. Occupation: Controlling for occupation allows a study to account for different pay rates for different types of jobs. The distribution of occupations in the federal government is different from the private or nonfederal sector, which may be a factor that explains differences in pay. For example, according to the CBO study, 33 percent of the federal workforce compared with 18 percent of the private sector workforce was in a professional occupation. A job-to-job approach, as demonstrated by the study authors who used the approach, involves matching federal workers to equivalent positions in another sector. POGO limited its comparison to 35 selected occupations, while the Pay Agent used over 200 occupations. According to one of the people with expertise that we interviewed, one challenge with this approach is the difficulty of finding nonfederal equivalents for certain positions, such as Federal Bureau of Investigation agents, that exist only in the government. Another person with expertise said matching occupations across sectors is a subjective process. In contrast, study authors using the human capital approach used fewer and much broader occupational groups. For example, Biggs/Richwine used 10 categories, while Sherk and CBO used 22 and 24, respectively. In addition to his analysis of the overall pay disparity, Sherk analyzed pay data with and without occupation controls, and reported that less was explained when occupation was not included in the analysis. Level of Work: Controlling for level of work (or grade level) allows a study to account for different pay rates for different levels of job complexity and responsibility (e.g., entry-level, mid-level, senior level, or finer distinctions by level). Level of work encompasses types of duties performed, the scope and effect of the work, the level of difficulty and responsibility, and the level of supervision received. It can be difficult to measure level of work since levels are defined differently in different workplace settings. Of the studies we examined, only the President’s Pay Agent Report controls for level of work. For federal employment in the GS pay system, there are 15 grade levels. To compare these with levels in nonfederal workplaces, BLS economists ranked nonfederal positions based on four factors: knowledge, job controls and complexity, contacts (nature and purpose), and physical environment. People with expertise that we interviewed observed that the human capital approach does not recognize that there are many different levels within an occupation such as accountant or lawyer. Firm size: Controlling for firm size allows a study to account for the effect of the number of workers in a firm. Some of the study authors asserted that large firms tend to offer higher salaries and greater benefits than smaller firms, but they differed on the decision to control for this attribute. CBO and Biggs/Richwine felt that federal workers should be compared to private sector workers at similarly sized institutions (e.g., firms with at least 1,000 workers) and included a measure of firm size in their analyses.
The reasons the authors cited included large firms requiring more occupational specialization or higher levels of skill than smaller firms. Sherk said he chose not to control for firm size because he views it as a proxy for individual ability in the private sector—the larger firms pay a premium to hire more capable individuals and the associated pay reflects that. He said this is not the case in the federal government; the federal government does not selectively hire employees from large corporations but competes for hiring with all sizes of firms in the private sector. Sherk felt that including firm size could bias results if more productive workers tend to work in larger firms in the private sector but not the federal sector. A person with expertise we interviewed agreed that a larger firm would pay more and have better benefits and noted that large firms are in head-to-head hiring competition with the federal government. In 2008, the President’s Pay Agent decided to include data from all establishments in its locality pay recommendations to increase the amount of data available for jobs. Since locality pay began in 1994, the Pay Agent had used only data from large establishments in its calculations. According to a person with expertise that we interviewed, the larger sample of data helps improve the quality of the job matching. See app. II for more information on the implementation of locality pay since 1994. Education: Controlling for education allows a study to account for differences in pay associated with workers’ educational attainment. An individual with a master’s degree and a PhD may be paid the same pay rate in the market if they are producing the same output. Job experience: Controlling for job experience allows a study to account for the length of time an individual has spent working. Experience both at a specific job and in general can affect pay, presumably because it can affect productivity, which can be accounted for in the human capital approach. Biggs/Richwine, CBO, and Sherk considered job experience a relevant attribute. However, the CPS does not include a direct measure of job experience. As a result, the studies use proxies to measure experience. For example, Biggs/Richwine and CBO used a common approach for measuring experience, “age minus years of education minus 6,” while Sherk included age in his model. According to a university professor we interviewed who has done research on compensation issues across sectors, there is no data set that measures how long a private sector worker has been out of the workforce or how long a worker has been working for a given employer. Age can be used as a proxy, but age does not reflect time out of the workforce for child-rearing or other reasons. The selected studies varied in the data sources used, benefits included, and methodologies chosen in analyzing benefits as a part of total compensation, as shown in table 7. The President’s Pay Agent is mandated to analyze pay, not benefits, so its study is not included in the following table. The study authors had a variety of data sources to choose from in analyzing pay and total compensation. They chose the data sources for their studies based on their overall approach and data needs. The study authors and people with expertise in compensation issues that we interviewed identified strengths and limitations of two common data sources the studies used in analyzing pay or total compensation—the CPS and NCS. Agency officials who oversee these data sources also weighed in on the use of the data in analyzing compensation. Current Population Survey.
The CPS—and in particular, the monthly CPS—has a large sample size relative to other data sources, enabling analyses that would not have been possible in data sets with a smaller sample size. According to Sherk, he used the monthly CPS because he needed at least 30 valid observations of occupations in both the public and private sectors for his analysis comparing detailed occupations. The Annual Social and Economic Supplement of the CPS has questions that are more in-depth than those in the monthly CPS, and it contains measures of job tenure, educational degree, and firm size. Individuals interviewed for the monthly or Annual Social and Economic Supplement of the CPS are self-reporting in their responses, which can result in reporting errors. As an example of an error that could occur, individuals who work for a contractor employed by the federal government could identify themselves as federal employees, which would be incorrect. Census officials said that there are CPS interviewer manuals to assist interviewers in helping respondents with their answers. National Compensation Survey. BLS conducts the NCS by interviewing employers, which allows for cost data on pay and benefits to be directly collected from employers as opposed to individuals self-reporting the information. While the survey covers all sectors, it does not collect data on federal workers, which—according to the study authors who used the NCS—results in the need to piece together different sources of benefits information in order to get comparable data. The NCS also provides detailed pay information by occupational work level that is based on the duties and responsibilities of a job, which is a key source of information for the President’s Pay Agent when determining locality pay adjustment amounts. Recently, the sample size for the NCS was reduced, and BLS has developed a model to determine locality pay using a combination of the NCS and the Occupational Employment Statistics (OES) survey. The OES is a larger survey with broader coverage of locality areas than the NCS, but it does not contain information on levels of work. (See app. II for additional information on locality pay and the use of these surveys.) The following table provides additional details on the data sources relevant for analyzing compensation across sectors including a description of the data source and supporting methodology. In addition to the contact named above, Trina Lewis (Assistant Director), Laurel Beedon, Benjamin Bolitzer, Sara Daleski, Karin Fangman, Robert Gebhart, Janice Latimer, Rebecca Shea, and Meredith Trauner made key contributions to this report.
GAO reviewed legislation, OPM regulations, executive orders, and federal agency documents; analyzed OPM data; and interviewed agency officials. GAO reviewed six studies that met three criteria: issuance since 2005, original analysis, and focus on federal and private sector compensation. GAO compared and contrasted the differences between their approaches, methodologies, and data sources, and interviewed the studies’ authors, people with expertise in compensation issues, and agency officials responsible for the data. GAO provided drafts to agencies and study authors for review and comment and made technical changes as appropriate in response to comments received. One study author provided written comments concurring with the findings. GAO is not making any recommendations in this report. Annual pay adjustments for the General Schedule (GS), the pay system covering the majority of federal workers, are either determined through the process specified in the Federal Employees Pay Comparability Act of 1990 (FEPCA) or set based on percent increases authorized directly by Congress. GS employees receive an across-the-board increase (ranging from 0 to 3.8 percent since FEPCA was implemented) that has usually been made in accordance with a FEPCA formula linking increases to national private sector salary growth. This increase is the same for each employee. GS employees also receive a locality increase that varies based on their location; there were 34 pay localities in 2012. While FEPCA specifies a process designed to reduce federal-nonfederal pay gaps in each locality, in practice locality increases have usually been far less than the recommended amount, which has been over 15 percent in recent years. For 2012, when there was a freeze on annual pay adjustments, the FEPCA process had recommended a 1.1 percent across-the-board increase and an average 18.5 percent locality increase. GS employees are eligible to receive three types of pay increases and monetary awards that are linked to individual performance appraisals: within-grade increases, ratings-based cash awards, and quality step increases. Within-grade increases are the least strongly linked to performance, ratings-based cash awards are more strongly linked to performance depending on the rating system the agency uses, and quality step increases are also more strongly linked to performance. Findings of selected pay and total compensation (pay and benefit) comparison studies varied due to different approaches, methods, and data. Regarding their pay analysis, the studies’ conclusions varied on which sector had the higher pay and the size of pay disparities. However, the overall pay disparity number does not tell the whole story; each of the studies that examined whether differences in pay varied among categories of workers, such as highly or less educated workers or workers in different occupations, found such variations. Three approaches were used to compare pay: human capital approach (3 studies) compares pay for individuals with various personal attributes (e.g., education, experience) and other attributes (e.g., occupation, firm size); job-to-job approach (2 studies) compares pay for similar jobs of various types based on job-related attributes such as occupation and does not take into account the personal attributes of the workers currently filling them; and trend analysis approach (1 study) illustrates broad trends in pay over time without controlling for attributes of the workers or jobs.
When looking within and across the studies, it is important to understand the studies’ differences in approach, methods, and data because they impact how the studies can be interpreted. The differences among the selected studies are such that comparing their results to help inform pay decisions is potentially problematic. Given the different approaches of the selected studies, their findings should not be taken in isolation as the answer to how federal pay and total compensation compare with other sectors.
DOD has undergone five BRAC rounds with the most recent occurring in 2005. Under the first four rounds in 1988, 1991, 1993, and 1995, DOD closed 97 major bases and had 55 major base realignments and hundreds of minor closures and realignments. DOD has reported that under the prior BRAC rounds it had reduced the size of its domestic infrastructure by about 20 percent and had generated about $6.6 billion in net annual recurring savings for those years following the completion of the 1995 round in 2001. As a result of the 2005 BRAC decisions, DOD was slated to close an additional 25 major bases, complete 32 major realignments, and complete 755 minor base closures and realignments. At the time the BRAC decisions were finalized in November 2005, the BRAC Commission projected that implementation of these decisions would generate over $4 billion in annual recurring net savings following the completion of implementing those decisions in 2011. In accordance with BRAC statutory authority, DOD must complete closure and realignment actions by September 15, 2011—6 years following the date the President transmits his report on the BRAC recommendations to Congress. Environmental cleanup and property transfer actions can exceed the 6-year time limit, having no deadline for completion. In addition to reducing unneeded infrastructure and generating savings, DOD envisioned the 2005 BRAC round to be one that emphasized transformation by aligning the infrastructure with the defense strategy and fostered jointness by examining and implementing opportunities for greater jointness across DOD. As such, there are a considerably higher number of realignments to take place than in any of the four prior rounds, which has resulted in far more individual BRAC actions, many of which affect multiple bases. While the number of major closures and realignments is somewhat similar to those of previous rounds (see table 1), the number of minor closures and realignments is significantly greater than those in all previous rounds combined. Available data indicate that despite the larger number of actions associated with the 2005 BRAC round compared with previous rounds, the amount of property potentially available for transfer is likely to be much less than in prior BRAC rounds. Although the total amount of acres available for transfer resulting from the 2005 BRAC round is yet to be fully determined, the preliminary number of potentially transferable acres for the 25 major bases is about 102,000 acres compared with a total of about 502,500 acres from the prior BRAC rounds combined. The extent of additional transferable acreage arising from the hundreds of minor base closures and realignments was not available at the time of our review, but is likely to be limited given the smaller size of many of those locations. A critical component to the process of transferring unneeded property arising from BRAC actions is the need to address the environmental contamination that has occurred over time due to military operations being conducted when the bases were active installations. Types of environmental contaminants found at military installations include solvents and corrosives; fuels; paint strippers and thinners; metals, such as lead, cadmium, and chromium; and unique military substances, such as nerve agents and unexploded ordnance.
According to DOD officials, while environmental cleanup of these contaminants is an ongoing process on active military bases, the cleanups often receive greater attention once a base has been selected for closure. Environmental cleanup is necessary for the transfer of unneeded contaminated property, which becomes available as a result of base closures and realignments. While addressing the environmental cleanup of contaminated property is a requirement for property transfer to other users, the sometimes decades-long cleanup process is not bound by the 6-year limitation for implementing required BRAC actions. As we have reported in the past, addressing the cleanup of contaminated properties has been a key factor in delays in transferring unneeded BRAC property to other parties for reuse. DOD officials told us that they expect environmental cleanup to be less of an impediment during the 2005 round because the department now has a more mature cleanup program in place to address environmental contamination on its bases.

In conducting assessments of potential contamination and determining the degree of cleanup required (on both active and closed bases), DOD must comply with cleanup standards and processes under all applicable environmental laws, regulations, and executive orders. The Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), as amended, authorizes cleanup actions at federal facilities where there is a release of hazardous substances, or the threat of such a release, that can present a threat to public health and the environment. To clean up potentially contaminated sites on both active and closed bases, DOD generally follows the process required under CERCLA, which generally includes the following phases and activities: preliminary assessment, site investigation, remedial investigation and feasibility study, remedial design and remedial action, and long-term monitoring. (An explanation of these phases is provided in app. II.)

The Superfund Amendments and Reauthorization Act of 1986 (SARA) added provisions to CERCLA specifically governing the cleanup of federal facilities, including active military bases and those slated for closure under BRAC, and, among other things, required the Secretary of Defense to carry out the Defense Environmental Restoration Program (DERP). Following SARA's enactment, DOD established DERP, which now consists of two subprograms: (1) the Installation Restoration Program, which addresses the cleanup of hazardous substances that are primarily controlled under CERCLA and were released into the environment prior to October 17, 1986; and (2) the Military Munitions Response Program, which addresses the cleanup of munitions, including UXO, and the contaminants and metals related to munitions that were released into the environment prior to September 30, 2002. Cleanups of hazardous substances released after 1986 and munitions released after 2002 are not eligible for DERP funds. These cleanups are generally referred to as non-DERP or "compliance" cleanups and often include activities regulated by the Resource Conservation and Recovery Act. They involve the closure and cleanup of operations associated with landfills, training ranges, and underground storage tanks and are generally funded under base operations and maintenance accounts for active bases. Once the property is determined to be unneeded and transferable to other users under BRAC, the cleanups are funded under the BRAC account.
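The distinction between DERP-eligible and non-DERP ("compliance") cleanups turns on the type of contamination and when it was released. The following minimal Python sketch simply restates that eligibility rule as described above; the function name, labels, and example are ours and are illustrative only, not a DOD tool.

    from datetime import date

    IRP_CUTOFF = date(1986, 10, 17)    # hazardous substances released before this date
    MMRP_CUTOFF = date(2002, 9, 30)    # munitions-related releases before this date

    def cleanup_category(contaminant: str, released: date) -> str:
        """Classify a cleanup under the DERP eligibility rules described in the report."""
        if contaminant == "hazardous substance" and released < IRP_CUTOFF:
            return "DERP: Installation Restoration Program"
        if contaminant == "munitions" and released < MMRP_CUTOFF:
            return "DERP: Military Munitions Response Program"
        # Later releases are not eligible for DERP funds; they are handled as
        # non-DERP ("compliance") cleanups, often under the Resource
        # Conservation and Recovery Act.
        return "non-DERP (compliance)"

    print(cleanup_category("munitions", date(1995, 6, 1)))
    # DERP: Military Munitions Response Program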
While SARA had originally required the government to warrant that all necessary cleanup actions had been taken before transferring property to nonfederal ownership, the act was amended in 1996 to expedite transfers of contaminated property. Now such property, under some circumstances, can be transferred to nonfederal users before all remedial action has been taken. However, certain conditions must exist before the department can exercise this early transfer authority; for example, the property must be suitable for the intended reuse and the governor of the state must concur with the transfer.

In addition to investigating potential hazards, DOD must comply with National Environmental Policy Act requirements. Although the decision to close or realign installations is not subject to the act, DOD is required to follow its requirements, and to consult with local redevelopment authorities, during the process of disposing of property and relocating functions from one installation to another. The National Environmental Policy Act requires federal agencies, including DOD, to consult with and obtain the comments of other federal agencies that have jurisdiction by law or special expertise with respect to any environmental impact involved with the action.

DOD's March 2006 Base Redevelopment and Realignment Manual requires the military services to prepare an Environmental Condition of Property report for closing BRAC bases. The report is used to evaluate the environmental condition of all transferable property based on already available information on contamination, and it can be used to identify "gaps" in information regarding environmental conditions and areas where more study is required. Environmental Condition of Property reports have replaced the former baseline surveys that were required when SARA was enacted in 1986. According to Army officials, the Army plans to have a total of 183 Environmental Condition of Property reports completed for all of its 2005 major and minor base closures by January 31, 2007. With respect to Army National Guard properties, the states will be responsible for their Environmental Condition of Property reports, except for the five bases located on federal lands, for which the Army will prepare the reports, if required. According to Navy officials, the Navy has completed all reports for lands affected by 2005 closures. Air Force officials reported that they will have the reports completed for all of their bases that require one by April 2007.

DOD has had a long-standing policy of not considering environmental cleanup costs in its BRAC decision making. Accordingly, the estimates using the Cost of Base Realignment Actions model, which is used to compare alternative actions during BRAC decision making, do not include the cost of environmental cleanup for BRAC-affected bases. Historically, we have agreed with DOD's position that such costs are a liability to DOD regardless of its base closure recommendations. While such costs are not included in the Cost of Base Realignment Actions model, they are included in developing BRAC implementation budgets and recorded as a BRAC cost. Expected environmental cleanup costs for the 2005 BRAC round are not yet fully known, but they are likely to increase from current estimates.
DOD's available data indicate that at least $950 million will be needed to complete the cleanups now underway for known hazards on the major and minor bases scheduled for closure in the 2005 BRAC round. However, our prior work has indicated that as closures are implemented, more intensive environmental investigations occur and additional hazardous contamination may be uncovered, resulting in higher cleanup costs. Also, the services' estimates were based on cleanup standards applicable to the current use of the property, but reuse plans developed by communities sometimes lead to more stringent, and thus more expensive, cleanups. In addition, DOD is in the early phases of identifying and analyzing munitions hazards that may require additional cleanup at both active and BRAC bases. Furthermore, the manner in which DOD is required to report all of these costs to Congress is fragmented: of the four reports DOD annually provides to Congress on environmental cleanup costs and estimates for its bases, none gives the entire cost picture by service or base.

Although DOD data indicate that at least $950 million will be needed for cleanup of the major and minor base closures resulting from the 2005 BRAC round, this figure reflects preliminary amounts that are likely to increase as more information is collected during BRAC implementation on the extent of cleanup required to safely reuse property in communities where future land use decisions have yet to be made. DOD's best available data suggest that at least $590 million will be needed to complete the cleanup of the 25 major base closures and about $360 million will be needed for the minor closures. These amounts were developed from information contained in the Defense Environmental Programs Fiscal Year 2005 Annual Report to Congress, and they do not include all costs, such as program management costs and non-DERP costs. In addition, the 2005 BRAC round includes the closure of more than 100 reserve centers; the extent to which cleanups will be required at these centers, and at what cost, is largely unknown. Only 2 of these centers reported cleanup estimates in the Defense Environmental Programs Fiscal Year 2005 Annual Report to Congress. Our experience with prior BRAC round bases has shown that estimates tend to increase significantly once more detailed studies and investigations are completed.

The following table provides DOD's estimated cost to complete the environmental cleanup beyond fiscal year 2006 for the 25 major DOD base closures resulting from the 2005 BRAC round, as reported in the Defense Environmental Programs Fiscal Year 2005 Annual Report to Congress. For certain bases, the cost estimates in this report conflict with those reported in the 2005 Defense Base Closure and Realignment Commission Report to the President. According to DOD officials, the data provided to the BRAC Commission are now outdated, and the estimates contained in the Defense Environmental Programs Fiscal Year 2005 Annual Report to Congress are more current.

Table 2 shows that DOD estimates it will spend at least $590 million to clean up the 25 major bases recommended for closure in 2005. However, we believe that this figure is low for several reasons. First, the amounts in table 2 include only the cost estimates for DERP-eligible cleanups, that is, those associated with contamination occurring prior to 1986 for hazardous waste and prior to 2002 for munitions. The costs of non-DERP cleanups and program management are not included.
These additional costs could add millions to the overall cost estimate. Second, no cleanup cost estimates were available in the Defense Environmental Programs Fiscal Year 2005 Annual Report to Congress for 5 of the 25 major base closures, either because the cleanups were not eligible for DERP funding or because the bases had not been thoroughly assessed for environmental damage. As the bases undergo more complete and in-depth environmental assessments, a clearer picture of environmental cleanup costs will likely emerge. Finally, these cost estimates will likely increase as a result of the more in-depth investigations that are expected to address all environmental cleanup issues now that the bases have been scheduled for a BRAC closure.

For example, during our visit to the Mississippi Army Ammunition Plant in June 2006, we noted that Army and contract officials were preparing an environmental condition of property assessment to pull together all known environmental issues. Army officials told us that the ammunition plant had been closed and placed in standby status since 1990 and that no aggressive environmental cleanup had taken place. When the plant was recommended for closure in 2005, the Army estimated that $8.4 million would be required to address environmental contamination caused by 2 inactive range munitions sites. Since that time, according to Army plant officials, as many as 46 more sites have been identified as having environmental concerns that will require further investigation and possible cleanup. Therefore, the total eventual cleanup costs are likely to be much higher than the current estimate of $8.4 million.

DOD officials told us that the projected environmental cleanup cost estimates for the 2005 BRAC bases are lower than those of the prior rounds because the environmental conditions on today's bases are much better than those of bases closed in previous rounds. These officials told us that this is primarily due to ongoing actions associated with DOD's Installation Restoration Program (cleanup program) and the Military Munitions Response Program at active and BRAC bases. The restoration program addresses hazardous substances, pollutants, and other contaminants, and the munitions program addresses UXO and discarded munitions. The officials stated that contaminated sites identified under the Installation Restoration Program are much farther along in the cleanup process than sites identified under the munitions program, primarily because the restoration program has been in existence since 1985, while the munitions program was initiated only in 2001. Our analysis of DOD-provided cleanup-phase data for the identified contaminated sites at 20 of the 25 major BRAC 2005 closures supports this assertion. For example, DOD's data show that, as of September 30, 2005, 89 percent of the 571 installation restoration sites (508 sites) either had their cleanup remedy in place or had the remedy complete, and 91 percent (521 sites) had completed investigation studies. Comparatively, of the 50 identified munitions sites at the 20 bases, only 8 percent (4 sites) reported cleanup action complete and only 10 percent (5 sites) had completed investigation studies. However, federal cleanup officials as well as military environmental specialists told us that many of these sites may require further investigation and cleanup, at greater cost, if, as expected, the future control and use of the property shifts from the military to the private sector.
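The dollar and site figures cited above can be recapped with simple arithmetic. The short Python sketch below uses only the rounded amounts and site counts reported in the Defense Environmental Programs Fiscal Year 2005 Annual Report to Congress; the variable names are ours, and the totals are best read as lower bounds for the reasons discussed above.

    # Preliminary DERP cost-to-complete estimates for 2005 BRAC closures (millions of dollars)
    major_base_closures = 590      # 25 major base closures
    minor_closures = 360           # minor closures and realignments
    print(f"Reported lower bound: at least ${major_base_closures + minor_closures} million")
    # Excludes non-DERP (compliance) cleanups, program management costs, most
    # reserve centers, and contamination not yet investigated.

    # Cleanup-phase status at 20 of the 25 major closures, as of September 30, 2005
    restoration_sites, restoration_remedy_in_place_or_complete = 571, 508
    munitions_sites, munitions_cleanup_complete = 50, 4
    print(f"Restoration sites with remedy in place or complete: "
          f"{restoration_remedy_in_place_or_complete / restoration_sites:.0%}")
    print(f"Munitions sites with cleanup complete: "
          f"{munitions_cleanup_complete / munitions_sites:.0%}")
    # Prints roughly 89% and 8%, matching the percentages cited in the text.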
DOD officials also stated that many munitions sites were not required to be cleaned when they were operational ranges on active bases but will require cleanup now that the bases have been closed. The Army estimates that the cost to address active ranges on its 2005 BRAC properties ranges from $37 million to $335 million; this cost is not included in the $950 million estimate for cleanup of the 2005 major and minor bases.

Congress does not have complete visibility over the expected total cost of DOD's cleanup efforts for the 2005 BRAC round or for the prior BRAC rounds because of a variety of reports that individually are incomplete and that collectively may present a confusing picture of costs. Although DOD prepares multiple reports for Congress on various environmental cleanup costs, none of them presents an overall total cost estimate per base, nor is DOD required to present this information. In addition, DOD does not fully explain the scope and limitations of the cost information presented. Transparency and complete accountability in financial reporting and budgetary backup documents are essential for providing Congress with a more complete picture of total cleanup costs so it can make appropriate budgetary trade-off decisions to ensure the expeditious cleanup and transfer of properties and ultimately realize savings for the U.S. government. To provide a complete picture of the total cleanup costs at BRAC bases, specific information must be extracted from various reports, which we have done in order to present the total costs to clean up properties resulting from prior BRAC round decisions.

Congress annually receives the following four required reports from DOD that contain environmental cleanup costs and estimates for BRAC bases, two of which also include costs for active bases:

Annual BRAC Budget Appropriations Request
Annual Government's Consolidated Financial Statement Report
Annual Defense Environmental Programs Report
Annual Section "2907" Report

A detailed description of the environmental cleanup costs and estimates included in these reports is presented in appendix III. Our review showed that none of these reports provides information in one place on the total environmental costs (amounts spent plus estimated future costs) expected for all environmental cost categories (DERP, non-DERP, and program management) by base. DOD officials told us that Congress will often mistakenly assume that the cost data presented in the annual Defense Environmental Programs reports to Congress represent the total expected cost of the program. While these costs typically make up the majority of the overall total, the report excludes the cost of cleanups by base that do not qualify for DERP funding. Although these non-DERP costs are presented elsewhere in the report, they are presented only in aggregate terms by service.

From information contained in two of the reports, we determined that the expected environmental costs for the first four BRAC rounds will total $13.2 billion, as shown in table 3. The $9.0 billion of funding made available for the four prior BRAC rounds for all cost categories was obtained from DOD's BRAC Budget Appropriations Request for fiscal year 2005. The budget request did not provide data on the total cost to complete the environmental cleanup at the bases.
The $3.8 billion cost from fiscal year 2006 through completion for the DERP-eligible cleanups (Installation Restoration Program and Military Munitions Response Program) came from one section (Appendix E, Restoration Budget Summary) of the Defense Environmental Programs Fiscal Year 2005 Annual Report to Congress. On the basis of information in this report, the cleanup at some bases will take decades to complete. For example, the estimated date to complete cleanup at the former Mather Air Force Base, California, is reported as 2074, and the estimated date to complete cleanup at the former Tooele Army Depot, Utah, is reported as 2032. The $0.4 billion estimated cost from fiscal year 2006 through completion for compliance (non-DERP) and program management and planning was extracted from another section of the same report (Appendix J, Installation Restoration Program and Military Munitions Response Program Status Tables) for each of the services.

None of the environmental reports DOD submits to Congress provides information in one place on the total costs and future cost estimates for each of the environmental cost categories by service and by base. Further, the environmental cleanup costs and estimates DOD reports to Congress vary in their scope and limitations, but DOD does not fully explain their differences. As a result, the cost of cleaning up BRAC property lacks transparency, and Congress does not have full visibility over this multibillion dollar BRAC environmental cleanup effort.

DOD has continued to make progress in transferring unneeded BRAC property since our last report on this subject. However, environmental cleanup of contamination continues to be a key impediment to transferring the remaining properties. Environmental cleanup issues are unique to each site but usually result from a variety of interrelated factors, such as technological constraints, lengthy negotiations on regulatory compliance, and the discovery of previously unknown, and therefore unaddressed, environmental hazards.

Since our last report on this subject in January 2005, DOD has made some progress in transferring remaining unneeded property, having transferred 78 percent (about 390,300 acres) of the 502,500 total unneeded acres from prior BRAC rounds to federal and nonfederal entities, up from the 72 percent (about 364,000 acres of an estimated 504,000 acres) that DOD reported at the end of fiscal year 2004. This represents an increase of about 26,300 acres from what we reported in January 2005. A breakdown of the current status of unneeded BRAC property shows that 63 percent had been transferred to nonfederal entities, 15 percent had been transferred to other federal agencies, 15 percent had been leased but not transferred, and 7 percent remained untransferred and awaiting future disposition (see fig. 1). Nearly 22 percent (112,300 acres) of the total acreage from prior BRAC rounds has not been transferred: 7 percent (35,700 acres) of untransferred property plus 15 percent (76,600 acres) of untransferred but leased property. In other words, over 68 percent (76,600 acres) of the approximately 112,300 acres of untransferred property is being leased, leaving only 32 percent (35,700 acres) that is not in reuse.
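The acreage figures in this discussion follow directly from the reported acre counts. The brief Python sketch below reproduces that arithmetic using the rounded figures from this report; the variable names are ours.

    total_unneeded_acres = 502_500     # unneeded acres identified in the four prior BRAC rounds
    transferred_acres = 390_300        # transferred to federal and nonfederal entities, Sept. 30, 2006
    leased_not_transferred = 76_600
    awaiting_disposition = 35_700
    untransferred = leased_not_transferred + awaiting_disposition   # about 112,300 acres

    print(f"Transferred: {transferred_acres / total_unneeded_acres:.0%}")                        # ~78%
    print(f"Leased share of untransferred acres: {leased_not_transferred / untransferred:.0%}")  # ~68%
    in_reuse = transferred_acres + leased_not_transferred
    print(f"Transferred or leased (in reuse): {in_reuse / total_unneeded_acres:.0%}")            # ~93%, discussed below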
Leased property, while not transferred to the user, can afford both the user and DOD some benefits. Communities, for example, can choose leasing while awaiting final environmental cleanup as an interim measure to promote property reuse and job creation. DOD also benefits in some cases, as the communities assume responsibility for the costs of protecting and maintaining these leased properties. Adding leased acres to the number of transferred acres raises the amount of unneeded BRAC property in reuse to 93 percent. However, while leasing can provide short-term reuse benefits in terms of economic development opportunities, it may delay DOD's larger goal of expediting property transfers.

As we have reported in the past, environmental cleanup issues have delayed, and continue to delay, the services from rapidly transferring unneeded BRAC property. As of September 30, 2006, about 81 percent of the approximately 112,300 acres remaining to be transferred from the prior BRAC rounds (about 91,200 acres), located on 44 installations, had environmental contamination issues. Environmental cleanup issues are unique to each site but usually result from interrelated issues such as technological constraints, cleanup negotiations, and previously unknown environmental hazards, as described in the following examples.

Sometimes the available technology needed to detect and clean up UXO is limited and not fully effective. For example, at the former Naval Air Facility in Adak, Alaska, over 5,500 acres of UXO-contaminated property have not been transferred because the technology for economically cleaning up the UXO on this remote Aleutian island does not currently exist. At the former Fort Ord Army Base in Marina, California, about 11,800 acres contaminated with UXO still require cleanup, and this effort is currently expected to take until 2021 because of the labor-intensive nature of current cleanup technology (see fig. 2). DOD officials told us that the detection of UXO is not only labor intensive but also difficult because the technology often used for this purpose cannot easily distinguish between UXO and waste scrap metal.

Prolonged negotiations between environmental regulators and DOD about compliance with environmental regulations and laws can delay property transfers. For example, at the former Fort Wingate, New Mexico, which was closed by the 1988 BRAC Commission and has about 8,800 acres of transferable property with environmental impediments, it took years of active negotiation between the Army and regulators to reach agreement on the closure requirements permitted under the Resource Conservation and Recovery Act. At the former Fort Ord, California, open burning of the coastal chaparral is necessary before the discovery and removal of UXO and other munitions can begin. However, according to Army officials, the number of acres that can be burned annually must be negotiated with the state and is controlled by California's clean air standards.

Additional environmental contamination can be detected after a base is recommended for closure. For example, the former McClellan Air Force Base in Sacramento, California, was recommended for closure in 1995; traces of plutonium were found during a routine cleanup in September 2000, causing a cost increase of $21 million and extending the completion schedule beyond 2030.

Table 4 shows the most expensive "cost to complete" environmental cleanups on prior BRAC round bases.
The estimated costs to complete cleanups at these 10 BRAC installations ($2.1 billion) account for more than half (55 percent) of DOD's $3.8 billion estimate for future BRAC environmental restoration and munitions cleanup at all unneeded properties on bases from the previous BRAC rounds.

Although opportunities exist to expedite the cleanup and transfer of unneeded BRAC 2005 properties, as well as untransferred properties from prior BRAC rounds, it is not clear to what extent each of these opportunities is being considered for BRAC properties, or what successes or challenges have been seen in their application, because the services are not required to report their strategies for addressing uncleaned and untransferred properties to the Office of the Secretary of Defense (OSD). Over the years, Congress has provided DOD with a wide range of property transfer authorities to expedite the cleanup and transfer of unneeded BRAC property, including public sales and the so-called early transfer authority, which allows property to be transferred before all necessary cleanup actions have been completed. In prior BRAC rounds, some tools were used much more extensively than others, and, as we previously reported, DOD could have given greater attention to early transfer authority. Each of the military services has processes in place to monitor its progress in cleaning and transferring BRAC properties. Also, DOD's March 2006 Base Redevelopment and Realignment Manual, which provides cleanup and disposal guidance for BRAC 2005 properties as well as untransferred properties from prior BRAC rounds, encourages the services to make wide use of all available property transfer tools. However, the services are not required to report to OSD on the status of their progress, their strategies for transferring BRAC properties, lessons learned, or whether they are taking advantage of all available property cleanup and transfer tools.

Congress has, over time, provided DOD with a wide range of property transfer mechanisms and tools to expedite the cleanup and transfer of unneeded BRAC property, including public sales, early transfer authority, and privatization. The closure and realignment of individual installations creates opportunities for unneeded properties to be made available to others for reuse. When an installation becomes a BRAC action, the unneeded property is reported as excess. Federal property disposal laws require DOD to first screen excess property for possible reuse by defense and other federal agencies. If no federal agency needs the property, it is declared surplus and made available to nonfederal parties, including state and local agencies, local redevelopment authorities, and the public, using the various transfer tools shown in table 5.

Although prior DOD guidance to the military services promoted creativity within applicable laws and regulations to successfully close and reuse installations, DOD used some property transfer tools to a much greater extent than others. In some cases, DOD's deference to community plans for economic development led it to use low- or no-cost transfer tools more often than property sales. As BRAC has evolved, differing emphases have been placed on the approaches used to transfer unneeded property. For example, following the 1988 round, DOD emphasized revenue generation through the sale of unneeded properties. Following the BRAC rounds in the 1990s, however, DOD underscored economic development through direct, no-cost transfers of property to the public sector.
The emphasis during the 2005 BRAC round appears to have shifted toward a renewed focus on achieving fair market value through various transfer authorities and on considering all available transfer tools to quickly transfer unneeded property to others for reuse. The services have taken some steps to expand their use of the wide array of transfer tools in recent years, most notably the Navy, which realized over $850 million in revenues from the sale of unneeded BRAC properties at two former Marine Corps air stations in California. Figure 3 illustrates the alternatives used to transfer unneeded BRAC property from the prior BRAC rounds to nonfederal entities as of September 30, 2006.

As shown in figure 3, low- and no-cost property conveyance mechanisms accounted for 65 percent (205,400 acres) of all acres transferred (public benefit, conservation, and economic development conveyances accounted for 17 percent, 19 percent, and 29 percent, respectively), whereas public and negotiated sales accounted for 5 percent (13,300 acres) of all acres transferred. According to DOD officials, this trend reflected deference to local community organizations and their preference for low- and no-cost conveyances. It also reflected the difficulty of using public and negotiated sales at that time, because more time was often needed to determine the nature and extent of environmental contamination, and its potential cleanup cost, before private property developers could be attracted. However, as more information is developed at these sites and as local economic conditions change, an approach to transferring property that would not have worked in the past may now succeed. For example, while an agreement was reached in 2000 on a no-cost economic development conveyance at the former Alameda Naval Air Station, California, the local redevelopment authority could not follow through on the terms of this conveyance to create jobs because of a decline in the local economy. Therefore, both the local redevelopment authority and the Navy were reassessing other property transfer options, including public sales, at the time of our review.

Another tool for facilitating property transfers is the so-called "early transfer authority," which is not actually a property transfer mechanism but rather an amendment to SARA that allows the services to transfer property that has not been entirely cleaned up under an authorized transfer conveyance. Recognizing that environmental cleanup has often delayed the transfer of BRAC property, Congress enacted the early transfer authority provision in 1996, which allows, under certain conditions, property to be transferred before all necessary cleanup actions have been completed. The transfer agreement identifies who will complete the cleanup and what funding, if any, the service will provide. In addition, the entity assuming cleanup responsibilities will often purchase environmental insurance to protect itself against possible cost overruns. We previously reported that this tool should receive greater DOD attention, and DOD has since increased its use of this authority, transferring a total of about 23,700 acres using this method as of July 2006. There are typically two scenarios under which an early transfer is requested. In the first scenario, the deed to the property is provided to the new owner, such as a local redevelopment authority, and DOD continues the cleanup.
In the second scenario, the user takes the deed to the property and, as the new owner, agrees to complete cleanup activities or to control the implementation of an ongoing cleanup at the time of transfer. Although this tool is officially called the "Transfer Authority in Connection with Payment of Environmental Remediation Costs," it is commonly referred to as "privatization." DOD's March 2006 Base Redevelopment and Realignment Manual states that if the fair market value of the property is more than the cleanup cost, the purchaser must pay the military department the difference; however, if the fair market value is less than the cleanup cost, the military department may pay the purchaser the difference. Because the purchaser will be responsible for completing the cleanup, the services must confirm that the purchaser has the technical expertise and financial capability to do so before considering this approach. In terms of cost, DOD retains responsibility for funding the environmental cleanup, regardless of whether it is performed by DOD or the user.

A primary advantage of using the early transfer authority is that it makes property available to the future user as soon as possible, allowing environmental cleanup and redevelopment activities to proceed concurrently. This can save time and costs and provide users with greater control over both activities. Furthermore, it provides communities with the means to quickly put property into productive use, create jobs, and possibly generate tax revenue. DOD reported that the services were not taking full advantage of this authority in part because communities lacked information about the early transfer authority, how to use it, and how the process ensures the protection of public health, safety, and the environment. In addition, DOD cited a lack of support from state and local regulators as a reason for the previously limited use of this authority. However, a local redevelopment authority can purchase environmental insurance to transfer the risk of potential cost overruns from the property owner to the contractor and the insurance provider. By shifting the risk, contractors may be strongly motivated to complete the environmental cleanups in a timely and cost-efficient manner. According to one local redevelopment authority official, privatization of environmental cleanup (one scenario for achieving an early transfer) is now seen as a way to significantly expedite the cleanup and transfer process, because DOD's approach can be too methodical, while the private sector can remediate the hazards more economically and in less time.

As of July 2006, the number of completed early property transfers had increased from 12 (about 8,200 acres) as of September 30, 2001, to 23 (about 23,700 acres). According to DOD officials, 8 early transfer authority actions are currently pending (in the process of being transferred), and 5 are being considered for the future. Table 6 provides a list of locations where early transfer authority has been completed, that is, where a deeded transfer has been completed, as of July 2006.

Although each of the military services has processes and procedures in place to monitor environmental cleanup and property transfer progress, DOD has not required the services to prepare and provide a BRAC property cleanup and transfer strategy to OSD, which has overall responsibility for overseeing the services' implementation of environmental cleanup on unneeded BRAC properties.
Without such a requirement, OSD cannot readily monitor and track the transfer tools the services are using to expedite the cleanup and transfer of BRAC properties. Further, there is less likelihood that lessons learned will be shared among the services, and communities could be denied the full economic benefits that may be possible through expedited reuse of the property.

In March 2006 guidance, DOD encouraged the military services to use all appropriate means to transfer unneeded property from the 2005 and prior BRAC rounds and to dispose of property at its "highest and best use." As the disposing agency, each military department has the authority to select the methods of disposing of unneeded properties. The guidance states that DOD recognizes that federal law provides an array of legal authorities by which to transfer property, but it also recognizes that the variety of installation types and the unique circumstances of the surrounding communities do not lend themselves to a single approach.

We found that each of the services monitors BRAC property cleanup and disposal progress as part of its responsibility to dispose of unneeded BRAC property. According to the Army, discussions within the Army Conveyance Team can focus on progress and problems being encountered with a current property disposal method at an installation; the Army then attempts to resolve the problem with the local redevelopment authority. In addition, the Army has developed a system to track ongoing transfer conveyances for BRAC properties so it can identify slippage and track progress. Approximately every 6 months, Army environmental personnel meet to discuss funding requirements and property transfer issues. Within the Air Force Real Property Agency, environmental program reviews are performed at least twice a year to determine the extent of cleanup progress at Air Force BRAC installations. In addition, the Air Force conducts bimonthly reviews to identify potential problems and to confirm that the transfer schedule is being maintained. A Navy official told us that each Program Management Office regional director meets monthly with each of the office's BRAC teams to discuss cleanup and property disposal progress at BRAC properties and, if needed, any potential alternative approaches that could expedite cleanup and disposal.

According to a key OSD official responsible for monitoring the services' progress, the military services are not required to formally report their strategy for cleaning up and transferring BRAC properties, to share the challenges and successes they have experienced in using various property disposal tools, or to demonstrate that they have fully considered using all the tools available to them. According to OSD and service officials in charge of monitoring the services' progress in the cleanup and transfer of unneeded properties, the services currently provide OSD with only informal, ad hoc progress reports. Furthermore, these officials believe that a more regular and formal process for periodically reporting and sharing experiences with various transfer tools would be helpful both to OSD, in tracking the use of these tools, and to the services, in learning from others' successes and failures. One service official went on to state that more is actually learned from failures than from successes and that those experiences should be shared.
We believe that sharing information, possibly via the Internet, among the services, communities, and the private sector could facilitate the exchange of ideas and lessons learned, which may in turn expedite the cleanup and transfer of BRAC properties. Without such a requirement, OSD is hampered in tracking the services' use of these tools and in assuring Congress that the services are taking full advantage of all opportunities to expedite the cleanup and transfer of unneeded properties so that communities can realize the full economic benefits of expeditious property reuse.

The incomplete picture of environmental cleanup costs at the beginning of BRAC 2005 implementation stems from the piecemeal reporting of environmental cleanup costs for bases while they are in active status, coupled with the fact that environmental cleanup information evolves over time. DOD can ensure that Congress has the most complete information available by providing more clarification and explanation of what is included in, and excluded from, the environmental cleanup costs it presents to Congress and by including the total expected cost (both incurred costs and the most current estimate of expected future costs) for the cleanup at BRAC bases. Without this information, Congress cannot ensure that scarce federal resources are used in the most efficient manner to address environmental cleanup issues at unneeded DOD properties so that productive new uses for these properties can be more quickly realized.

Numerous tools have been made available to DOD to help expedite the transfer of unneeded BRAC property to other users. As DOD seeks to use these tools for 2005 BRAC round bases, OSD could conduct its oversight responsibilities more effectively by requiring the services to periodically report on their progress in transferring properties and their plans to take full advantage of the tools available to them. In addition, each of the services may find it useful to learn from the property transfer experience gained with these tools within and among the services. Delays in transferring unneeded properties result in additional expense to DOD to care for and maintain these properties, while the affected community receives no benefit, economic or otherwise, as it waits for the property to be redeveloped for productive use.

In order to provide more complete and transparent cost information for the environmental cleanup of properties from all BRAC rounds, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to report all costs (DERP and non-DERP), past and future, required to complete environmental cleanup at each BRAC installation and to fully explain the scope and limitations of all the environmental cleanup costs DOD reports to Congress. We suggest including this information in the annual BRAC budget justification documentation, since it would accompany information Congress considers when making resource allocation decisions.

In order to help ensure that the military services are taking full advantage of all tools available to clean up and transfer unneeded BRAC properties from the 2005 round, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to require the military services to periodically report to OSD on the status of and proposed strategy for transferring these properties and to include an assessment of the usefulness of all tools at their disposal.
We suggest placing this information in an easily shared location, such as a Web site, so that each service, as well as local communities and the private sector, can share and benefit from lessons learned.

In written comments on a draft of this report, DOD concurred with the fundamental aspects of both of our recommendations: to take actions to improve its reporting of BRAC environmental cleanup costs to Congress and to require the military services to periodically report to the Office of the Secretary of Defense on the status of and proposed strategy for transferring unneeded BRAC properties. DOD's comments are reprinted in appendix IV and addressed as appropriate in the body of the report. DOD also provided technical comments, which we incorporated into this report as appropriate.

In order to provide more complete and transparent information on the entire cost of environmental cleanup, DOD concurred with our basic recommendation to report all costs, past and future, required to complete environmental cleanup at each BRAC installation and to fully explain the scope and limitations of all the environmental cleanup costs it reports to Congress. However, DOD's comments reflect only a partial concurrence, because DOD did not agree with our suggestion to include this information in the annual BRAC budget justification documentation. DOD stated its belief that this would be counterproductive and that Congress has prescribed the types of environmental information it wants presented in the budget documentation, with which DOD complies. In making our suggestion, it was not our intent that it be considered part of the recommendation. However, we continue to believe that the annual BRAC budget justification documentation would be the most useful place for this cost-reporting information, since this documentation is referred to by Congress when deliberating BRAC environmental cleanup funding. Nonetheless, if the department can meet the intent of our recommendation by submitting this information in another report, we defer to the department on how best to report this information to Congress.

In order to help ensure that the military services are taking full advantage of all tools available to clean up and transfer unneeded BRAC properties from the 2005 round, DOD concurred with our recommendation to require the military services to periodically report to the Office of the Secretary of Defense on the status of and proposed strategy for transferring BRAC properties and to include an assessment of the usefulness of all tools at their disposal. Although DOD did not comment on our suggestion to accomplish this through a shared Web site in order to maximize lessons learned, DOD officials embraced the idea as easily doable in comments made during our exit interview with the agency.

We are sending copies of this report to interested congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-4523 or leporeb@gao.gov, or my Assistant Director, Jim Reifsnyder, at (202) 512-4166 or reifsnyderj@gao.gov, if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix VI.

To address our first objective to examine potential cleanup costs associated with the Base Realignment and Closure (BRAC) process, we collected and analyzed relevant documentation generated by the Office of the Secretary of Defense and the military departments, and we interviewed key officials with knowledge of BRAC cost reports and estimates. We collected and analyzed environmental cleanup cost estimates for the 25 major base closures and similar estimates for the minor closures and realignments for the 2005 BRAC round, as well as costs for the prior BRAC rounds. To gain a sense of the models used to estimate cleanup costs, we viewed a demonstration of the Remedial Action Cost Engineering Requirements System cost estimating tool used by the Army and the Air Force and the Normalized Data cost estimating tool used by the Navy. We interviewed knowledgeable officials about BRAC environmental cleanup costs from the Army Environmental Center, the Air Force Real Property Agency, and the Navy's Northeast BRAC Program Management Office. In addition, we visited four BRAC 2005 locations—Fort Monroe, Hampton, Virginia; Umatilla Chemical Depot, Hermiston, Oregon; Brunswick Naval Air Station, Brunswick, Maine; and the Mississippi Army Ammunition Plant, Picayune, Mississippi—to gain a better understanding of the environmental cleanup requirements facing these installations and the processes that base officials are following to estimate cleanup costs. We also interviewed Office of the Secretary of Defense and service officials to gain an understanding of how the estimates derived from the services' environmental cost estimating models are reported in various Department of Defense (DOD) environmental reports to Congress. In so doing, we analyzed the cost information contained in each report in order to derive estimated cleanup costs for the prior BRAC rounds. We also compared the cost estimates projected at the installation level with estimates that were reported to Congress to verify that the data were consistent. Although we found some discrepancies, we concluded that, overall, the DOD data were sufficiently reliable for the purposes of this report.

To address our second objective to examine DOD's progress in transferring unneeded properties from the four prior BRAC rounds, we reviewed our prior BRAC reports and reports prepared by the Congressional Research Service and DOD on this subject. Using property transfer information on the four prior BRAC rounds provided by the Office of the Secretary of Defense and the services, we updated the transfer acreage data reported in our January 2005 report in order to determine the extent of progress made in the transfer of unneeded property. We assessed the reliability of the reported transferred property acreage by interviewing knowledgeable officials and comparing acreage totals to GAO reports from prior years. Although the acreage totals change as property is transferred and more accurate land surveys are completed, we determined that the data were sufficiently reliable to provide overall comparisons. We interviewed officials from the Environmental Protection Agency's Office of Federal Facilities and consulted with them about their concerns regarding environmental cleanup at prior BRAC round bases.
We interviewed DOD and military service officials responsible for environmental cleanup at BRAC and active bases, at both the headquarters and field levels, to clarify the reasons for property transfer delays, such as technology limitations and regulatory requirements. We visited the three BRAC bases from the four prior BRAC rounds with the most expensive estimated costs to complete cleanup: the former McClellan Air Force Base, Sacramento, California; the former Fort Ord, Marina, California; and the former Alameda Naval Air Station, Alameda, California. During these visits, we spoke not only with military officials but also with officials from the local redevelopment authorities at these installations, as well as officials from the California State Environmental Protection Agency, to determine the major impediments to property transfers. To supplement these discussions, we collected data from the services on the extent to which environmental issues were impeding property transfer.

To address our third objective to assess possible opportunities for DOD to expedite the cleanup and transfer of unneeded BRAC properties, we reviewed relevant laws, regulations, and policies governing the cleanup and transfer of properties, and we also reviewed prior GAO and DOD reports on this subject. We also reviewed DOD's 2006 Base Redevelopment and Realignment Manual for an assessment of the tools available to the services for expediting cleanup and property transfer. We analyzed the use of these tools to date at selected BRAC installations and compiled overall statistics on the use of these authorities in the prior BRAC rounds. We interviewed officials representing federal and state environmental regulatory agencies for their perspectives on DOD cleanup activities and any opportunities for DOD to expedite the cleanup process while adhering to legal cleanup standards. In addition, during our visits to the seven installations mentioned earlier, we interviewed community officials for their perspectives on the speed and quality of environmental cleanups and property transfers and on opportunities for speeding up the process. We spoke with cognizant officials from the Office of the Secretary of Defense (OSD) and the services to ascertain their views on the extent of oversight of the services' use of existing transfer tools and the sharing of lessons learned from the property transfer process.

During the course of our review, we contacted the following offices with responsibility for oversight, management, and implementation of the environmental cleanup of military and, specifically, BRAC bases:

Office of the Secretary of Defense
Office of the Deputy Under Secretary of Defense for Acquisition, Technology and Logistics, Installations and Environment, Washington, D.C.
Office of the Secretary of Defense (Comptroller), Washington, D.C.
Army Office of the Assistant Chief of Staff of Installation Management, Base Realignment and Closure Division, Arlington, Virginia
Office of the Deputy Assistant Secretary of the Army, Environmental Safety and Occupational Health, Washington, D.C.
Army Installation Management Agency, Arlington, Virginia
Army Materiel Command, Fort Belvoir, Virginia
Army Environmental Center, Aberdeen, Maryland
Army Corps of Engineers, Environmental Office for Formerly Used Defense Sites, Washington, D.C.
Army National Guard, Arlington, Virginia
Navy BRAC Program Management Office Northeast, Philadelphia, Pennsylvania
Navy BRAC Program Management Office West, San Diego, California
Navy BRAC Environmental Office, Arlington, Virginia
Air Force Real Property Agency, Arlington, Virginia
Air Force Audit Agency, Washington, D.C.
Air National Guard, Arlington, Virginia
Air Force Office of the Civil Engineer, Environmental Division, Arlington, Virginia
Federal Environmental Protection Agency, Federal Facilities Branch
Association of State and Territorial Solid Waste Management Officials, Washington, D.C.
State of California Environmental Protection Agency, Sacramento, California
Fort Ord Reuse Authority, Marina, California
McClellan Local Reuse Authority, Sacramento, California
Alameda Reuse and Redevelopment Authority, Alameda, California
Umatilla Reuse Authority, Hermiston, Oregon
Brunswick Local Redevelopment Authority, Brunswick, Maine
Fort Monroe Reuse Authority, Hampton, Virginia

We visited three bases closed during the prior BRAC rounds, chosen because they represent each of the three services and also have the three most expensive estimated costs to complete cleanups for sites currently undergoing cleanup:

Fort Ord, Marina, California
McClellan Air Force Base, Sacramento, California
Alameda Naval Air Station, Alameda, California

We also visited four bases scheduled for closure under the 2005 BRAC round, chosen to represent a variety of missions as well as geographic diversity:

Fort Monroe, Hampton, Virginia
Umatilla Chemical Depot, Hermiston, Oregon
Brunswick Naval Air Station, Brunswick, Maine
Mississippi Army Ammunition Plant, Picayune, Mississippi

We conducted our work from January 2006 through November 2006 in accordance with generally accepted government auditing standards.

The Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), as amended, authorizes cleanup actions at federal facilities where there is a release of hazardous substances or the threat of such a release. CERCLA section 120(h) contains provisions that establish requirements for the transfer or lease of federally owned property based on the storage, disposal, or known release of hazardous substances. All contracts for transfer or lease must include notice of this storage, disposal, or release. Except as noted below, CERCLA section 120(h)(3) requires that transfers of federal real property by deed also include (a) a covenant by the United States that all remedial action necessary to protect human health and the environment has been taken prior to transfer, (b) a covenant by the United States to undertake any further remedial action found to be necessary after transfer, and (c) a clause granting access to the transferred property in case remedial action or corrective action is found to be necessary after transfer.

To clean up potentially contaminated sites on both active and closed bases, the Department of Defense (DOD) generally follows the process required under CERCLA, which generally includes the following phases and activities:

Preliminary Assessment—Available information is collected regarding contamination, including a search of historical records, to confirm whether a potential environmental contamination or military munitions hazard could be present and to determine whether further action is needed.
Site Investigation—This step usually involves a walk-through of the site by an environmental engineer and may involve limited soil and water sampling, including an analysis to determine the extent and source(s) of the hazards.

Remedial Investigation/Feasibility Study—More rigorous statistical sampling and analysis are conducted to determine the exact nature and extent of contamination, whether cleanup action is needed, and, if so, which of the alternative cleanup approaches to select. These approaches could include removal, limiting public contact, determining that no further action is warranted, or cleaning the hazardous media (soil, air, or water) on site.

Remedial Design/Remedial Action—This phase involves designing and constructing the actual cleanup remedy, such as a pump-and-treat system for underground water or the removal of munitions.

Long-term Monitoring—In this phase, the parties responsible for the cleanup periodically review the remedy in place to ensure its continued effectiveness, including checking for unexploded ordnance and conducting public education.

While the Superfund Amendments and Reauthorization Act of 1986 had originally required the government to warrant that all necessary cleanup action had been taken before transferring property to nonfederal ownership, the act was amended in 1996 to expedite transfers of contaminated property. Now such property, under some circumstances, can be transferred to nonfederal users before all remedial action has been taken. However, certain conditions must be met before the department can exercise this "early transfer authority." For example, the property must be suitable for transfer for the intended use, the transfer must not delay any cleanup actions, and the governor of the state where the property is located must approve the transfer. The advantage of an early transfer is that the property is made available under a transfer authority to the future user as soon as possible, allowing concurrent environmental cleanup and redevelopment. The law still requires that contaminated sites be cleaned up to ensure that past environmental hazards resulting from former DOD activity on transferred BRAC property are not harmful to human health or the environment and that the property can support its new use; the early transfer authority simply allows cleanup and reuse of the property to proceed concurrently.

The Department of Defense (DOD) annually provides Congress with four required reports that include information on environmental cleanup costs and estimates at active and Base Realignment and Closure (BRAC) installations. Each report is prepared for a different purpose, such as budgetary, financial, or program oversight, resulting in varying presentations of estimated and actual cleanup costs. None of the reports, however, provides the total environmental program costs and estimates for each service and its bases. The types of environmental program costs include restoration and munitions cleanup, compliance, and program management and planning. The four annual reports are the (1) Annual BRAC Budget Appropriations Request, (2) Annual Defense Environmental Programs Report to Congress, (3) Annual Government's Consolidated Financial Statement Report, and (4) Annual Section 2907 Report. The following provides a description of each report's mandate, when it is issued, and the information it contains.
Annual BRAC Budget Appropriations Request: Section 206 of the Defense Authorization Amendments and Base Closure and Realignment Act, Public Law 100-526, specifies the type of information required in DOD's annual budget appropriation request for BRAC funding. DOD and the services prepare separate budget justification books that provide details for each BRAC round on funds made available for environmental cleanup and the budget request estimate for the fiscal year for which the request is being made. The environmental funded amounts and the estimate include information on all environmental costs, including restoration and munitions cleanup, compliance, and program management and planning. The information in DOD's fiscal year 2006 budget request indicates that $9.0 billion had been made available for DERP (environmental restoration and munitions) cleanup and non-DERP (compliance and program management and planning) through fiscal year 2005 for the prior four BRAC rounds. The fiscal year 2006 budget request estimate for the environmental cleanup costs was about $378 million. DOD also presented Congress with information on the 2005 BRAC closures and realignments, which shows that DOD and the services plan to spend about $426 million on the environmental cleanup cost categories between fiscal years 2006 and 2011. The estimated amounts were presented in current or inflated dollars. Although the Annual BRAC Budget Appropriations Request report includes all categories of costs, it does not include—nor is DOD required to report—the total estimated cost to complete the environmental cleanup (past and future costs) for the BRAC bases.
Annual Government's Consolidated Financial Statement Report: As required by the Chief Financial Officers Act of 1990 and the Government Management Reform Act of 1994, DOD must report its estimated environmental liabilities in the federal government's annual fiscal year consolidated financial statements, and it does so each year in its performance and accountability report to Congress. The environmental liability information for active and BRAC bases is contained in note 14 of the financial statements for fiscal year 2005, and the information contains separate line item amounts for the restoration and compliance categories. The environmental program management and planning cost amounts were included in the restoration amount. DOD uses the installations' defense environmental programs data to compile a large portion of its environmental liabilities for financial statement reporting. The November 15, 2005, report for fiscal year 2005 activity indicates that the total BRAC restoration liability amount, or future cost to complete, was $3.5 billion. The BRAC environmental liability for compliance and program management and planning was reported as $206.5 million. The data are not inflated and are stated in current dollars. The government's annual consolidated financial statement report presents the most complete information on the environmental cost categories for the cost to complete the cleanup. The information is reported in total for DOD and summarized for each service. However, the report does not provide information on how much has been made available for BRAC environmental cleanup, and there is no detailed information presented for individual bases.
Annual Defense Environmental Programs Report to Congress: As required by section 2706 of Title 10 of the United States Code, DOD annually submits this report to Congress.
The latest report, which covered fiscal year 2005, was issued to Congress in March 2006. Different sections of the report discuss and provide planning and funding costs and cost estimate information for the various DOD environmental programs at active and BRAC bases. These sections have information on active and BRAC bases' restoration and munitions cleanup expenditures for fiscal years 2004 and 2005 and the cost to complete the environmental cleanup from 2006 to completion. The report also presents information on non-DERP and program management and planning costs and estimates for BRAC activities in the aggregate (but not by base). The information on the expected cost to complete the restoration and munitions environmental cleanup at BRAC bases for the first four rounds shows that DOD estimates this cost at about $3.8 billion from 2006 to completion. From the section of the report that reconciles the services' cost to complete with the reported environmental liability, we were able to sum the services' compliance and management and support costs and determine that the total cost to complete from fiscal year 2006 for these categories totaled about $0.4 billion. The dollar amounts for cost to complete from 2006 through 2011 were inflated, and the dollar amounts from fiscal year 2012 to completion were in constant 2011 dollars. While the defense environmental programs report provides ample information on environmental cleanup costs and estimates, it does not consolidate the information to obtain an overall or total environmental cleanup cost amount for each service and base.
Annual Section 2907 Report: This report addresses reporting requirements specified in section 2907 of Public Law 101-510, commonly referred to as the BRAC Act, for all BRAC 2005 installations. Among other things, the 2907 report includes details on the known environmental remediation restoration and munitions cleanup issues at each base affected by the 2005 BRAC recommendation. The information provides details on the estimate to complete the cleanup at each identified site, and plans and time lines to address the cleanup. According to DOD officials, the first report for the 2005 BRAC round was issued in March 2006, and the estimates are based on the restoration and munitions cleanup data contained in the defense environmental programs report.
Environmental Liabilities: Long-Term Planning Hampered by Control Weaknesses and Uncertainties in the Federal Government's Estimates. GAO-06-427. Washington, D.C.: March 31, 2006. Military Bases: Analysis of DOD's 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005. Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005. Military Base Closures: Updated Status of Prior Base Realignments and Closures. GAO-05-138. Washington, D.C.: January 13, 2005. DOD Operational Ranges: More Reliable Cleanup Cost Estimates and a Proactive Approach to Identifying Contamination Are Needed. GAO-04-601. Washington, D.C.: May 28, 2004. Military Munitions: DOD Needs to Develop a Comprehensive Approach for Cleaning Up Contaminated Sites. GAO-04-147. Washington, D.C.: December 19, 2003. Environmental Compliance: Better DOD Guidance Needed to Ensure That the Most Important Activities Are Funded. GAO-03-639. Washington, D.C.: June 17, 2003.
Environmental Contamination: DOD Has Taken Steps to Improve Cleanup Coordination at Former Defense Sites but Clearer Guidance Is Needed to Ensure Consistency. GAO-03-146. Washington, D.C.: March 28, 2003. Military Base Closures: Progress Completing Actions from Prior Realignments and Closures. GAO-02-433. Washington, D.C.: April 5, 2002. Military Bases: Status of Prior Base Realignment and Closure Rounds. GAO/NSIAD-99-36. Washington, D.C.: December 11, 1998. Military Base Closures: Reducing High Costs of Environmental Cleanup Requires Difficult Choices. GAO/NSIAD-96-172. Washington, D.C.: September 5, 1996. In addition to the individuals named above, Barry Holman, Karen Kemper, Andy Marek, Bob Poetta, and Angie Zeidan made significant contributions to this report. Other individuals also contributing to this report include Susan Ditto, Ron La Due Lake, Steve Lipscomb, Ken Patton, Charles Perdue, and Ed Zadjura. | The cleanup of environmental contamination on unneeded property resulting from prior defense base realignment and closure (BRAC) rounds has been a key impediment to the transfer of these properties and could be an issue in the transfer and reuse of unneeded property resulting from the 2005 BRAC round. GAO's analysis of available data indicates that, when completed, the cleanup for the four prior BRAC rounds is expected to cost about $13.2 billion and additional costs will be needed for BRAC 2005 property. These costs reduce BRAC savings, especially in the short term. Because of broad congressional interest in BRAC, GAO prepared this report under the Comptroller General's authority to conduct evaluations on his own initiative. GAO's objectives were to examine costs to clean up 2005 BRAC properties, progress in transferring prior BRAC rounds properties to other users, and opportunities to expedite cleanups and transfers. To address these issues, GAO analyzed cleanup cost estimates, interviewed environmental officials and visited seven bases. While expected environmental cleanup costs for unneeded property arising from the 2005 BRAC round are not yet fully known, Department of Defense (DOD) data indicate that about $950 million will be needed to clean up these bases, adding to the estimated $13.2 billion total cleanup cost for the prior rounds. Although DOD's cleanup program has matured compared to prior BRAC rounds, there are still many unknowns and the cleanup estimate for the 2005 round should be considered preliminary. In fact, environmental cleanup costs are likely to increase as more intensive environmental investigations are undertaken, additional hazardous conditions are discovered, and future reuse plans are finalized. Furthermore, Congress does not have full visibility over the total cost of DOD's BRAC cleanup efforts because none of the four reports DOD prepares on various aspects of environmental cleanup present all types of costs--past and future--to complete cleanup at each base. Compiling a complete picture of all costs requires extracting information from multiple reports, as GAO has done to estimate the total cleanup cost for the four prior BRAC rounds. More complete and transparent cost information would assist Congress in conducting its oversight responsibilities for this multibillion dollar effort. 
While GAO's analysis shows that DOD continues to make progress in transferring over 502,500 acres of unneeded property from the four prior BRAC rounds--78 percent of the acres have now been transferred compared to 72 percent 2 years ago--over 112,300 acres remain untransferred. Comparatively, a total of about 102,000 acres are potentially transferable as a result of the 2005 BRAC round. Impediments to transfer continue to be related primarily to a variety of interrelated environmental cleanup issues, including limited technology to address unexploded ordnance and prolonged negotiations on compliance with environmental regulations. Opportunities exist to expedite the cleanup and transfer of unneeded 2005 BRAC properties compared with other BRAC rounds. Congress provided DOD with a wide range of property transfer authorities for prior BRAC rounds. In the past DOD did not use some tools as much as others out of deference to community land reuse plans. For example, low- and no-cost transfer tools accounted for 65 percent of all acres transferred, whereas public and negotiated sales accounted for 5 percent. DOD's March 2006 guidance now encourages the services to make full use of all tools for transferring properties resulting from both the 2005 and prior-year BRAC rounds. The services have processes in place to monitor their progress to clean up and transfer BRAC properties, but they are not required to report periodically to the Office of the Secretary of Defense on their successes and challenges in using various transfer authorities. Collectively, such lessons learned could help others expedite the cleanup and transfer of unneeded properties by maximizing the use of all available tools, thereby accelerating the economic benefits of property reuse to communities while also saving the ongoing caretaker costs being incurred by DOD for unneeded properties. |
Federal assistance for highway and bridge infrastructure—about $40 billion each year—is distributed through multiple formula and discretionary grant programs collectively known as the federal-aid highway program. The federal-aid highway program is financed through the Highway Trust Fund, a dedicated source of federal revenue based on the "user-pay principle"—that is, users of transportation systems pay for the systems' construction through the federal tax on motor fuels, tires, and trucks. FHWA uses a decentralized organizational structure to administer the federal-aid highway program, meaning that decision-making authority is largely delegated to FHWA's 52 division offices. FHWA division offices have 10 to 61 staff each, depending on the size of the state's highway program. While there are variations in division office organizational structures, each typically has teams that cover areas such as planning, environment, engineering, technical services, finance, and civil rights. As of February 2012, FHWA had 2,960 staff—1,962 in the field and 998 at headquarters. FHWA's responsibilities for the federal-aid highway program fall into two broad categories: (1) advancing the program and (2) ensuring compliance with federal law and regulations. To advance the program, FHWA engages in a range of activities to encourage the effective and efficient use of federal-aid highway funding and assist states in progressing projects through construction to improve the highway system. To accomplish these tasks, FHWA works with states to identify issues, develop and advocate solutions, approve and obligate project funding for eligible activities, and provide technical assistance and training to state DOTs. To ensure that states comply with federal laws and regulations, FHWA, through its division offices, conducts oversight of federally funded projects and reviews state DOT capacity and systems used to administer approved projects. Actual project-level oversight is divided or shared between FHWA and the state. FHWA oversees major interstate highway projects. FHWA division offices and states jointly decide how to divide oversight responsibility for other National Highway System projects. States assume oversight responsibility for projects that are not on the National Highway System. These can include locally administered projects, which are projects in which a state DOT has given approval to a local public agency (e.g., a city or county) to administer a project or phase of a project such as design, property acquisition, or construction. For those projects where both FHWA and the state make decisions about oversight responsibilities, the respective responsibilities are generally mapped out in a "stewardship agreement." This agreement defines which projects will receive "full" oversight, in which FHWA oversees most aspects of the construction process, or projects in which states assume oversight responsibility (we refer to these as "delegated" projects). Figure 1 describes aspects of oversight that are led by FHWA or the state depending on the status of the project. To evaluate state DOTs' systems and capacity to administer approved projects, FHWA division offices assess internal controls and processes across programmatic areas such as construction, finance, property acquisition, and locally administered projects. A common tool for this type of oversight is a "process review," which involves an analysis of key program components and processes employed by the state DOT.
Typically, this includes a file review of a sample of projects, interviews with relevant state DOT staff, and field reviews when applicable. In addition, FHWA, in conjunction with the Federal Transit Administration, performs a federal certification review every 4 years of Metropolitan Planning Organizations (MPO), which are responsible for transportation planning in urban areas with populations larger than 50,000, to determine whether the organization complies with applicable federal requirements. FHWA division offices have a range of corrective actions they can use if a state does not comply with federal requirements. Among other things, a division office may withhold funding from all or part of a project, deobligate inactive funds, withhold approval until an issue is resolved, or require corrective action plans. Over the years, the federal-aid highway program has grown to encompass broader goals, more responsibilities, and a variety of approaches; however, the concept of a federal-state partnership has been an integral feature of the highway program since it was established by the Federal Aid Road Act of 1916. This and other early legislation established federal-state responsibilities, wherein states select the placement of roads, construct, and maintain them, and the federal government sets standards and provides a portion of the funding. The Federal-Aid Highway Act of 1973 further refined the federal-state relationship by stating that "the authorization of the appropriation of Federal funds…shall in no way infringe on the sovereign rights of the States to determine which projects shall be federally financed" and defined the federal-aid highway program as a "federally assisted State highway program." This language helped shape FHWA's interpretation of the federal-state relationship, leading to an understanding of its role that anchors FHWA's approach to oversight in partnership and is cited today in FHWA's policy documents. Almost all the division administrators we surveyed described their work with states as a partnership and in ways that emphasized the importance of partnership in carrying out FHWA's mission to advance the transportation program. In addition, as we previously reported, both FHWA and state officials believe that over the years the partnership has helped to build trust and respect between state transportation agencies and FHWA, ensuring that as partners they can accomplish tasks such as planning and building projects more efficiently and effectively. The goals and scope of the federal-aid highway program expanded during much of the 20th century, as did the roles and responsibilities of FHWA. Initially, the highway program was administered by the Department of Agriculture through the Bureau of Public Roads, a predecessor to FHWA. The bureau focused oversight at the project level to ensure that materials and construction methods met federal standards. The bureau also brought engineering expertise to the states, many of which either lacked skilled engineers or were not ensuring that federal dollars were being used to produce quality construction. The Defense Highway Act of 1941 extended the 1916 act to fund a strategic network of highways, including secondary and feeder routes. Eligibility was extended again in 1944 to include an array of other secondary roads, including rural farm-to-market roads, rural mail and bus routes, county roads, and others that became eligible for post-war federal aid.
In 1950, Congress made additional roads—including county, township, and urban roads—eligible for aid. By then, however, the focus of the highway program was turning increasingly to constructing the Interstate Highway System. The system was designated as mandated in 1944, but construction began in earnest with passage of the Federal-Aid Highway Act of 1956 and establishment of the Highway Trust Fund, a dedicated funding source deriving revenue primarily from taxes on motor fuels, tires, and trucks to finance the construction of the Interstate system. Construction of the Interstate remained the focus of the federal-aid highway program in the years that followed, and Congress continued to expand the types of projects eligible for federal funds. In the 1970s, Congress expanded the federal role in bridge infrastructure by making highway bridges located on public roads and longer than 20 feet eligible for federal funds. Congress also expanded the eligibility of federal aid beyond initial construction. Under the 1916 Act and Interstate authorizations, the federal government was to fund the construction of highways while maintenance was the states' responsibility. However, as the Interstate began to age, Congress allowed states to use federal funds for road maintenance on Interstate highways and all eligible bridges by redefining certain activities—such as resurfacing, rehabilitation, and reconstruction—as capital investments rather than maintenance. By 1991, as a result of changes over the years, highway funds were authorized for a wide range of transportation enhancement activities, including activities connected with highway beautification, historic preservation, and the establishment of bicycle and pedestrian trails. In addition to expanding the types of projects eligible for federal highway funds, over time Congress adopted legislation to achieve social goals such as advancing civil rights and environmental protection, and enhancing urban planning and economic development, which affected the federal-aid highway program and FHWA's role and responsibilities. For example, the National Environmental Policy Act of 1969 (Pub. L. No. 91-190, 83 Stat. 852 (1970), codified as amended at 42 U.S.C. ch. 55) requires recipients to comply with federal environmental requirements by conducting environmental reviews for federally funded transportation projects. The Federal-Aid Highway Act of 1962 established urban transportation planning as a matter of national interest and required all construction projects to be part of a continuing, comprehensive, and cooperative planning process. Other federal requirements have included compliance with prevailing wage standards applicable to federal contracts under the Davis-Bacon Act, a Disadvantaged Business Enterprise program to enhance participation of women- and minority-owned businesses, and Buy America provisions for acquiring steel and other materials. As the goals of the highway program expanded, FHWA added expertise in its division offices beyond civil engineers and hired economists, right-of-way specialists, planners, historians, ecologists, safety experts, civil rights experts, and others. At the same time, FHWA's total staff declined between 1998 and February 2012, when staffing stood at 2,960. Recognizing its changing roles, responsibilities, and decline in staff levels, FHWA continued to adapt its oversight approach. In 2006, it began adopting a risk management approach to its oversight, recognizing in part that, while its role had expanded, its resources had not.
FHWA has adapted to changes in demands for its oversight, but its role and responsibilities are complicated by the fact that the current federal approach to surface transportation in general—and to highways in particular—is not working well. The expansion of the program did not result from a specific rationale or plan, but rather from an agglomeration of policies and programs since the 1950s without a well-defined overall vision of the national interest and federal role in our surface transportation system. Federal goals and programs are now numerous and sometimes conflicting, and federal roles are unclear. Furthermore, although DOT and FHWA establish national goals and priorities, federal highway funding is apportioned to states without regard to the accomplishment of specific outcomes or the performance of grantees. This makes it difficult to assess the extent to which funding is achieving transportation goals. For these and other reasons, funding surface transportation remains on GAO's high-risk list. In the face of its evolving roles and responsibilities, FHWA has relied on its historical partnership with the states in which FHWA and the states work collaboratively to construct highway infrastructure. FHWA uses partnering activities and practices with the states that are, based on our review and synthesis of partnering literature, recognized as best practices. These activities and practices enable parties to achieve individual and mutually beneficial goals and results, such as expedited project time frames and cost savings. We observed the following examples of successful partnerships:
Open and regular communication includes clear and candid discussions among partners as well as an understanding of the inner workings and decision-making processes of participating organizations. FHWA division and state DOT officials reported having regular formal and informal meetings (at leadership and working levels) as well as frequent contact by e-mail and telephone.
Clear delineation of roles and responsibilities involves understanding individual partner roles as well as articulating responsibilities for joint actions and tasks. FHWA stewardship agreements describe the roles of FHWA divisions and state DOTs. Some stewardship agreements that we examined include detailed matrices addressing factors such as work activities and their frequency, legal authority, and specific division office and state DOT responsibilities.
Proactive issue identification and resolution, in a mutually agreeable way, is closely linked to open and regular communication between partnering members. FHWA officials in several division offices told us that they work closely with their state DOT counterparts to identify problems early and develop solutions. For example, one division administrator explained that, at times, the division office is forced to tell the state DOT that, because of its approach, a certain portion, or an entire construction project, is not eligible for federal funding. However, the administrator stated that the next question the division staff asks is, "How can we do this?" to work with the state DOT to bring the project into compliance with federal requirements, thereby allowing the state DOT to use federal funds.
Conflict resolution processes include formal (documented protocols or escalation procedures) and informal (verbal agreements between parties) procedures for how to handle disagreements.
FHWA division and state officials discussed their commitment to collaborative problem solving and using informal issue escalation procedures, for example, by elevating problematic issues to the leadership level for resolution. One FHWA division incorporated conflict resolution protocols into its formal partnering agreement with the state DOT. The agreement advocated using face-to-face communication for conflict resolution and outlined procedures for escalating issues. In our survey and interviews, FHWA division administrators reported that FHWA uses its partnering relationship with state DOTs to advance the federal-aid highway program by ensuring that projects move to construction in a timely fashion, facilitating knowledge transfer, and promoting federal transportation priorities. Specifically, 51 of 52 survey respondents stated that their partnering relationship with their state was very or somewhat important to their ability to achieve the mission of the federal-aid highway program. Most division administrators (44 of 52) also indicated that the partnering relationship produces multiple benefits. Some of these benefits—such as expedited project time frames and cost savings—were noted as positive outcomes of partnership in the literature we reviewed. For example, FHWA officials in a northern state with a short summer construction season told us they work closely with state DOT officials to make timely decisions and move projects along to ensure that construction can be completed during the warmer months. FHWA officials in this state generally conducted non-construction-related process reviews in the winter season so that they and state DOT officials could focus on construction inspections and construction-related process reviews during the warmer months. Similarly, in another state, FHWA and state officials told us that by working together to resolve issues expediently, they were able to complete the environmental review and approval process for a large-scale project to reduce congestion on an important regional highway in about half of the time normally required. Forty-four of 52 division administrators indicated in response to our survey that partnering was very helpful in facilitating the transfer of technical knowledge. In our site visits, FHWA officials explained that partnering helps FHWA to use technical knowledge transfer to advance the federal-aid highway program by assisting states in addressing technical or programmatic concerns, closing skill gaps, enhancing compliance, and informing decision making. FHWA division officials in one state told us that they had financed a trip by state DOT officials to learn about an alternative interchange design. As a result, the state DOT was able to make interchange improvements without removing and replacing an existing bridge, reducing costs from an estimated $10 million to $15 million down to $3 million. Similarly, when FHWA officials observed crumbling materials used for retaining walls and supporting structures, they brought in the technical expertise of the FHWA Resource Center, which resulted in the state DOT revising its materials specifications to ensure higher-quality materials are used. Forty-nine of 52 division administrators also indicated in response to our survey that partnering helps the agency to advance federal transportation priorities.
One respondent stated that “partnering helps us advance more federal priorities and achieve greater public benefit than simply being parochial authoritarians that refuse to discuss anything that doesn’t directly involve a federal dollar or regulation.” For example, officials in one FHWA division office developed a business case for an approach to address congestion in a key section of highway as an alternative to the state’s planned solution. FHWA promoted its alternative to the state DOT and other stakeholders, and ultimately the state accepted FHWA’s approach because the data showed it would be more effective and less costly. In another example, FHWA division officials believed they influenced the state DOT to improve safety by using higher-quality barriers and rumble strips on highways to alert drivers straying off the road. Likewise, to improve safety, officials in another FHWA division office promoted cable barriers on highway medians as a risk-based, lower-cost alternative to concrete barriers. Similarly, during our site visits, some FHWA officials said that their partnering relationship creates an opportunity to promote projects of national or regional significance within a state. For example, FHWA officials in one division office persuaded the state to address congestion around a toll plaza on a major interstate route. The state did not consider the project a high priority, because it did not affect most state residents as much as out-of-state drivers traveling through the state. However, FHWA division officials were able to persuade the state to construct the project when funding from the American Recovery and Reinvestment Act of 2009 (Recovery Act) became available. One of the FHWA officials commented, “Without partnerships, you lose opportunities to do things that would be good for the taxpayer.” FHWA relies on its partnering approach with state DOTs to facilitate oversight of the federal-aid highway program by engaging states in open dialogue about risks and obtaining their buy-in on program improvements. In our survey, 50 of 52 FHWA division administrators said that their division office’s partnering relationship was very or somewhat helpful in producing more effective oversight. Officials we spoke with during our site visits offered several illustrations of how partnership improves oversight. One FHWA division administrator told us that the state DOT proactively brings problems to FHWA’s attention rather than waiting for FHWA to discover them. We observed that this open dialogue about risks allows FHWA to address issues in a timely fashion and adopt a more responsive and problem-solving attitude. According to one FHWA division official, FHWA worked collaboratively with the state DOT to determine which process reviews to conduct during the year and then conducted the majority of those reviews jointly. According to the FHWA official, this practice strengthened oversight by helping to gain the state DOT’s buy-in and commitment to improving its processes, facilitating honest communication about risk areas, and creating an opportunity for FHWA to provide on-the-spot training when problems were identified. In another example, an FHWA division office holds annual meetings with state DOT officials where the two parties determine which projects should receive full oversight and which should be assumed by the state. 
We observed that this approach can strengthen oversight by engaging state officials in open dialogue about risks and allowing FHWA to incorporate the state's perception of risks and weaknesses into its oversight plan. Additionally, for each summer construction season, the two parties identify one of the state's regions as a focus for full oversight. This allows each region and its project managers to receive training while their projects, primarily related to pavement preservation, are being reviewed by FHWA. FHWA's partnership with state DOTs also affects its use of corrective action. FHWA emphasizes working with state DOTs to bring them back into compliance through less stringent corrective action instead of more punitive action. Responses to our survey of division administrators showed that the most frequently used corrective action in the last 3 fiscal years was withholding approval of a particular request until an issue was resolved. According to one division administrator, withholding approval provides the greatest ability to address and resolve a particular issue and encourages the state to take corrective action in a timely manner. Additionally, 43 of 52 division administrators reported that in the last 3 fiscal years they have used the threat of a corrective action, which helps to achieve compliance without actual punitive actions. Reportedly, the threat of a corrective action is effective because it communicates the consequences of not complying with federal requirements and helps to bring about problem resolution. In addition, to address deficiencies, 51 of 52 division administrators reported that they had required state DOTs to develop a corrective action plan to outline how the state would change a process or program to comply with federal requirements. When FHWA moves toward a more punitive corrective action, it is most likely to withhold funding from a part of a project. Withholding partial funding often amounts to not paying for a line item in a project's budget. For example, one division administrator explained that when the state purchases proprietary equipment, such as certain types of light posts or signs, even though a less expensive nonproprietary option is available, FHWA withholds funds for the purchase. The federal-aid highway program is a reimbursement program. As a consequence, if FHWA withholds funds, state DOTs must replace federal funding with state funding. All 52 division offices indicated they withheld partial federal funding from a project in the last 3 fiscal years, and withholding partial funding was the second most frequently used tool for corrective action. FHWA division offices reported that they rarely use their most punitive corrective actions, such as withholding funding for an entire project or organization. Although 30 division administrators we surveyed reported that they had withheld federal funding from an entire project during the past 3 fiscal years, none listed this action as one of their three most frequently used corrective actions. According to FHWA officials, such action is damaging to the state's federal-aid highway program and provokes tension with state DOTs. FHWA officials stated that they see this measure as a last resort and try to use their partnership with the states to elicit compliance. Furthermore, division offices periodically review MPOs, which can receive federal-aid highway funding and implement construction projects.
If the office declines to certify an MPO, federal funds for that organization are withheld until the deficiencies identified are corrected. FHWA division officials use partnering practices, such as open and regular communication, with state DOT officials as they exercise administrative discretion in situations where the rules and how to apply them are not clear—situations we refer to as "gray areas." In administering the federal-aid highway program, FHWA often has discretion to take a less stringent action even when the law permits a harsher one, if circumstances warrant. Such an approach is embodied in a stewardship agreement from one division office, which states that the division office "will make use of available regulatory flexibility when in the public interest." FHWA officials spend time and effort addressing gray areas, as they seek to make a decision that is not only consistent with federal regulations but also appropriate to the particular facts and circumstances of the situation. For example, the federal regulations governing federal-aid contracts call for state DOTs to use reasonable judgment in evaluating contractors' good faith effort to hire women- and minority-owned businesses but do not specify the type of documentation contractors must submit to demonstrate their effort. Because there are no specifications on the type of documentation demonstrating a good faith effort, FHWA and state DOT officials must work through this gray area to determine how best to demonstrate their efforts. In our interviews and observations, we noted that FHWA officials rely on partnering practices with states when federal regulations and FHWA policies leave room for interpretation and discretion, creating gray areas for FHWA officials to resolve. For example, routine roadway maintenance is not eligible for federal reimbursement, but preventive maintenance can be. According to an FHWA division administrator in one state, there is room for interpretation and discretion between the two types of maintenance. This division administrator told us that in recent years the state DOT has sought reimbursement from FHWA for roadway maintenance activities that are typically ineligible for reimbursement—a situation he attributed to the economic environment affecting state budgets. This required time and effort by both FHWA and the state DOT to work through their different interpretations of the regulation. Ultimately, FHWA and the state agreed to develop asset management systems to identify and prioritize preventive maintenance needs in a systematic way. The division administrator explained that this approach would show and document how the maintenance strategies would extend the roadway life and prevent deterioration and higher maintenance costs later, which would make these costs eligible for federal funding. We also observed FHWA using partnering practices to negotiate the gray areas that may arise when the rules are clear, but practical considerations complicate implementing the rule. When implementing a rule, public officials may need to consider cost-benefit implications, time frames, local economic conditions, or other local circumstances that are not necessarily dealt with explicitly in rules or regulations. For example, FHWA officials in one division office explained that FHWA's regulations require roadside guardrails on National Highway System routes to be a minimum height of 27 ¾ inches from the top of the guardrail to the top of the pavement. The height of a guardrail governs its effectiveness.
However, as states overlay pavement with new asphalt to address road deterioration, the height of the guardrail relative to the road surface decreases and the guardrail becomes less effective. This creates practical trade-offs with regard to the costs of guardrail replacement and safety, raising questions regarding whether to use funds to improve the pavement condition of, for example, 20 miles of road without replacing guardrails or to pave fewer miles of road but replace the guardrails to ensure they are at the full height prescribed in the regulation. Both approaches offer safety benefits. Determining the best course of action requires navigating a gray area and requires FHWA to understand the state's priorities, weigh the safety outcomes, and use its partnership with the state to agree on an approach that meets transportation needs and federal responsibilities. While FHWA officials largely viewed their partnering relationship with state DOTs in positive terms, state officials offered a more tempered response. Specifically, our interviews and discussion groups with officials from 38 state DOTs revealed that while states acknowledged having good working relationships with their FHWA division counterparts, they also expressed some frustrations. On the positive side, state DOT officials appreciated regular and ongoing communication with FHWA officials and characterized FHWA staff as accessible, responsive, and solution-oriented. State DOT officials told us that stewardship agreements were helpful in clarifying roles and expectations and that they consider risk-based oversight to be a strength of the FHWA-state relationship. Officials also appreciated FHWA's help in navigating federal requirements and sharing technical expertise and industry best practices. Furthermore, state officials appreciated that FHWA officials recognized the unique needs, context, and features of their state. State DOT officials participating in our discussion groups asserted that they do not want a "one size fits all" FHWA. However, state DOT officials' positive feedback about FHWA was tempered with other perspectives on partnering and FHWA decision making. We noted three themes among the comments of state DOT officials when voicing perspectives different from FHWA's. 1. State officials viewed partnership less favorably than FHWA. Many state DOT officials characterized FHWA's role as providing oversight and enforcing regulations rather than acting as a partner. Some officials indicated FHWA began emphasizing enforcement over partnership around the time of the completion of the Interstate Highway System, as FHWA responded to legislative changes, adopting what many state officials viewed as an audit-focused approach to oversight. According to some state DOT officials, the sense of camaraderie between state DOT and FHWA officials that existed during the building of the Interstate is no longer there, and currently there is "less partnership and more regulation." Reflecting the states' more tempered perspective, one state DOT official characterized the relationship with FHWA as "a partnership within an arranged marriage." 2. State officials viewed FHWA as imposing personal preferences. Many state DOT officials told us that FHWA officials routinely imposed personal preferences—for example, questioning particular design solutions—and would threaten to withhold federal funds or approval even though, in the states' view, the approach the state had developed complied with federal standards and regulations.
State DOT officials pointed out that these preferences are not covered in the regulations but rather involve professional judgment regarding such factors as cost, appearance, and durability that are not prescribed in regulation and are often unique to a particular construction project. For example, according to officials at one state DOT, FHWA had a preference for sequencing construction activities in a particular way rather than leaving the decision up to the state DOT, and on this particular project FHWA made its preference a requirement. Some states also said that FHWA gives too much focus to smaller issues and is overly involved in routine matters. 3. State officials were frustrated by inconsistencies in FHWA’s decision making across states. Many state DOT officials expressed frustration about the inconsistencies they perceived in FHWA’s decisions across states. Specifically, several noted that FHWA division offices in other states had been more permissive of certain solutions or requirements compared to the FHWA division office in their state and stated that FHWA does not always use the maximum flexibility it has at its disposal in interpreting federal rules. For example, a state DOT official told us that the division office in his state did not approve a certain material for markings on the state’s highways, but he learned that the same material had been approved in 17 other states. The inconsistencies experienced by state DOTs may not be unreasonable and could stem from the decentralized nature of the federal-aid highway program and the fundamental challenge FHWA and the states face in navigating gray areas on complex projects with unique political, financial, engineering, and other challenges. These complexities likely contribute to FHWA and state DOTs’ differing perspectives. For example, we previously reported a case in which a state DOT planned to construct new soundwalls on an existing highway. FHWA noted that the state was planning to widen the road a few years later and that the walls would likely have to be destroyed and rebuilt. FHWA recommended that the state construct the walls at the location envisioned for the widening project, but state DOT officials resisted because of the additional costs to acquire property. FHWA then informed the state that it would only fund construction of the walls once—either at the location along the existing highway as the state had planned or at the location needed once the road was widened. This example illustrates a case in which state officials may have viewed FHWA as imposing its personal preferences and may have been aware of similar situations in other states in which FHWA officials made different decisions inconsistent with this approach. FHWA, on the other hand, may have viewed its decision as exercising professional judgment to promote the most long-term cost-effective solution, consistent with its role as a steward of federal funds. While successful partnering relationships offer benefits, they also present potential risks, according to the literature we reviewed. First, one partner may grow lax in holding the other to standards. Second, one partner can lose independence in its decisions. We observed cases where FHWA was lax in its oversight by trusting but not verifying state activities and cases where FHWA demonstrated reluctance to take corrective action to bring states back into compliance, which can result in ineffective, wasteful, and potentially improper use of federal funds. 
We also observed instances in which FHWA sometimes showed a lack of independence in decisions, putting the states' interests above federal ones, and other instances in which FHWA took extraordinary measures to advance the program to the point of becoming actively and closely involved in implementing solutions to state problems. This can create an inherent conflict when FHWA later must review and approve those actions or review their effectiveness. Despite the risks partnership poses, FHWA has good oversight practices in several areas of the federal-aid highway program. We have expressed concerns about the risks posed by FHWA's partnership approach in the past. The Central Artery/Tunnel project in Boston, Massachusetts, provides examples of both lax oversight and a lack of independence that resulted in ineffective and inefficient use of federal funds and damaged FHWA's credibility. This highway project—one of the largest, most complex, and expensive ever undertaken—experienced widely reported cost increases, growing from around $2.3 billion in the mid-1980s to almost $15 billion in 2004. From 1995 through 1997, we reported concerns about cost growth and funding gaps on the project and weaknesses in FHWA's efforts to address them and to hold the state accountable. In March 2000, an FHWA task force charged with reviewing FHWA's oversight of the project concluded that "FHWA's long history of strong Federal/State partnerships failed" and that FHWA "had failed to maintain an independent enough relationship with the state to adequately fulfill its oversight role." The task force attributed lax oversight to FHWA placing too much trust in the state, reporting that FHWA's partnership approach failed to achieve independent and critical oversight of the project. As this example illustrates, although FHWA has experienced partnership risks to its programs in the recent past, FHWA division administrators generally do not recognize the risks of partnering as significant. In our survey, more than half (29) stated they did not believe that partnering creates any risks to their oversight of the federal-aid highway program. Of the remainder, 5 said it may create "some risk," 17 said there was a "slight risk," and only one stated partnering was a "significant risk" to oversight. In some instances, FHWA was lax in its oversight in that it did not verify compliance with the requirements of the federal-aid highway program, instead trusting states to ensure their actions were in compliance, which could have resulted in ineffective, wasteful, and potentially improper use of federal funds. For example:
Our November 2011 report on the Emergency Relief program (which provides funds to states to repair roads damaged by natural disasters and catastrophic failures) found that many of the project files reviewed did not contain documentation to support FHWA decisions that projects met program eligibility requirements. Specifically, of the 83 projects reviewed, 81 projects (representing $193 million in federal funds) had missing or incomplete documentation. As a result, we were unable to determine the basis of FHWA's eligibility decisions for many of the projects reviewed. We also found that FHWA divisions relied heavily upon the information provided by states to make FHWA eligibility decisions without verifying that information.
For example, one FHWA division office reported that it reviewed preliminary cost estimates for about one-third of the projects included in our review before determining that projects were eligible. As a result, we could not determine the basis of FHWA's eligibility decisions for those project cost estimates it did not review and, as such, FHWA ran the risk of providing funds to ineligible projects.
In the Disadvantaged Business Enterprise (DBE) program, which aims to increase the participation of small businesses owned and controlled by socially and economically disadvantaged individuals, state DOTs are among those entities responsible for certifying firms to participate. In an interview with one FHWA division office, the FHWA official said that he knows that the state DOT official is very experienced with the DBE certification process and, because of that, relies on the state to make certification decisions consistent with federal regulations. As a result, the official stated that FHWA is generally not involved in verifying the eligibility of DBE firms certified by the state. Although FHWA is not required to review every certification, in this instance FHWA's partnering relationship with the state influenced the level of oversight conducted in this area and exposed FHWA to the risk that ineligible firms might be certified as DBEs.
Officials from the FHWA division offices we spoke to said they tended not to do unannounced inspections. Instead, FHWA alerts the relevant construction sites and offices of an inspection ahead of time. Officials from one FHWA division office explained that they rely primarily on announced visits because they do not want to create a "gotcha" environment, which might hurt their relationship with the state. Division officials from another office explained that announcing inspections gives state DOT staff time to do things like assemble the appropriate records or personnel for FHWA's inspection or allows FHWA to observe specific activities, such as materials testing, on the day that particular activity is occurring. While there are some advantages to announced inspections, the Institute of Internal Auditors includes unannounced visits as a common practice used by firms to mitigate risks associated with partnering. By not conducting unannounced inspections, FHWA is essentially trusting the state and its contractors to put compliance with federal requirements over meeting competing demands like cost and schedule. In doing so, it may be missing the opportunity to more accurately verify compliance with federal requirements, observe normal operations, and create an environment conducive to compliance. The partnering relationship between FHWA and state DOTs at times may have also resulted in FHWA being reluctant to require corrective action to bring a state back into compliance with program requirements. Specifically, FHWA staff acknowledged that, in their daily decision making, they have to think about how to preserve their relationship with their state counterpart and that they view taking corrective action as potentially damaging to that relationship. For example:
We and the DOT Office of Inspector General have reported multiple times on the problem of funds committed to inactive federal-aid highway projects. FHWA has made it a priority to decrease nationally the number of outstanding inactive projects to ensure that federal funds are being used in a timely and effective way.
For example, FHWA had reduced the percent of funds obligated to inactive projects to about 3.4 percent of all obligations by March of 2012—this percentage had stood at around 8 percent as recently as September 2010. In 2008, it established a Financial Integrity Review and Evaluation program requiring division offices to conduct a quarterly review of inactive projects and determine the validity of the amount obligated for each project. FHWA division offices have the authority to de-obligate funds from inactive projects. However, FHWA division officials with oversight responsibility for three states we visited expressed reluctance to use this authority because of concerns that it would negatively affect their working relationship with the state. Instead, these division offices negotiated with state officials to get them to explicitly agree to allow FHWA to de-obligate funds. FHWA officials acknowledged that this is a long, time-intensive process. For example, over the course of 6 months, one FHWA division office sent reminder letters with specific deadlines for the state to provide a rationale for allowing inactive funds to remain obligated. Yet at the end of this process there were still outstanding inactive projects that had not been resolved. In another state, FHWA finance personnel described having ongoing conversations with their state counterparts, asking them the status of inactive projects and negotiating to de-obligate those funds. The FHWA division office described the process as “walking the tightrope” with the states when making decisions to de-obligate. The amount of time officials we spoke with devoted to addressing inactive funds raises questions about whether, on the whole, division offices could have moved more quickly to make these funds available to other needed projects had officials not had to consider the impact of withdrawing funds on their partnership with the state. One state identified serious compliance issues with one of its major cities dating as far back as 2003, including federal construction specifications not being followed, insufficient field equipment, and lack of appropriate construction supervision. In 2009, FHWA withheld funding from the city for about 2 weeks while the state DOT drafted a corrective action plan. FHWA approved the plan and resumed funding. However, nearly 2 years later, as of August 2011, there were still points in the plan that had not been addressed. As a result, federal funds continued to flow to projects that may not have fully met federal requirements. FHWA can delegate to the state the responsibility of approving consultant contracts to ensure compliance with federal regulations. One state DOT lacked FHWA-approved written procedures for how it selects consultants, which are necessary to comply with federal regulations. The FHWA division office had given the state an extended opportunity—about 5 years—to address the compliance issue, allowing it to use interim procedures as long as the state DOT was developing final procedures and planning to have them approved by FHWA. After 5 years, due to the failure of the state to develop final procedures, the FHWA division office suspended all state DOT contract approvals and temporarily re-assumed the responsibility of approving consultant contracts to ensure compliance with federal procurement regulations. Once the state DOT developed written procedures and they were approved, FHWA restored consultant contract approval authority to the state DOT. 
As discussed earlier, FHWA division administrators reported that they rarely use their most punitive corrective action tools, such as cutting off funding for a program or organization. While FHWA, in cooperation with the Federal Transit Administration (FTA), is responsible for certifying that MPOs meet federal requirements, as we reported in September 2009, FTA and FHWA officials were unaware of any instance in the previous 10 years in which an MPO was not certified due to noncompliance. In our survey of division administrators, for fiscal years 2009 through 2012, one FHWA division administrator reported that he withheld certification of an MPO due to issues in its congestion mitigation plan. However, FHWA still allowed project approvals to move forward. The FHWA division office put together a corrective action plan with the MPO, identifying action steps, deadlines, and people responsible. According to the FHWA division administrator, as long as the state is making progress toward resolving the issue, FHWA will not prevent the MPO from obtaining project approvals and moving construction forward. He noted that MPOs have a 4-year window to become recertified, and if an MPO reached the end of its window but still had not taken sufficient action for FHWA to certify it, FHWA would likely provide an extension and grant the MPO “conditional” certification, rather than decertify it. Similarly, none of the 52 FHWA division administrators reported withholding approval of their state’s bridge program at any time in the past 3 fiscal years. In 2010, the DOT Office of Inspector General found some cases where FHWA bridge engineers reported that a state’s bridge program substantially complied with federal regulations despite deficiencies that could have posed serious risks to public safety. For example, one FHWA bridge engineer judged a state to be substantially compliant despite reporting that the state failed to close 96 bridges, as required. A bridge engineer in another state reported that 47 bridges were not closed as required, but concluded that the state was substantially compliant. In two other cases, FHWA bridge engineers reported states as substantially compliant even though 200 bridges in one case and over 500 bridges in the other case were not posted with maximum weight limit signs, as required. In a 1996 report on the status of the replacement of the Cypress Viaduct in California, we questioned funding these improvements and additional costs through the Emergency Relief program, rather than through the annual formula funding states receive from the federal-aid highway program (GAO, Emergency Relief: Status of the Replacement of the Cypress Viaduct, GAO/RCED-96-136 (Washington, D.C.: May 6, 1996)). This decision provided California with over $1 billion in additional funding that it then did not have to utilize from regular federal-aid funds or state sources. In 2007 and 2011, we reported additional cases of FHWA using the Emergency Relief program to fund projects that had grown in scope and cost as a result of environmental and community concerns. In 2007, we recommended that FHWA revise its regulations to tighten eligibility criteria and place limits on the use of Emergency Relief program funds to fully finance projects with scope and costs that have grown as a result of environmental and community concerns. FHWA has not acted on this recommendation.
In addition, FHWA has on occasion taken extraordinary measures, expending considerable resources to advance the program, to the point of becoming actively and closely involved in developing and implementing solutions to state problems. When an overseer becomes part of the solution, the arm’s-length, independent perspective may be lost, as agencies that are responsible for implementing program improvements face an inherent conflict when they later approve those actions or review their effectiveness. For example: FHWA spent a substantial amount of time and effort with state DOT personnel and others trying to determine if funds used on a private bridge were eligible for use to help the state meet its matching requirement for federal funding. According to division office officials, the state was struggling to meet the 20 percent funding match required of states in order to receive the 80 percent federal-aid highway funding, due to the economic recession and the poor fiscal situation of the state. Division office officials further explained that had it not met the match, the state could have lost about $200 million in federal funds in fiscal year 2012. States may receive “toll credits”—funds that can be credited by FHWA toward the state’s federal match—if they can demonstrate that toll revenues were spent on facility improvements and meet other requirements. In an effort to meet its match, officials explained that the state identified a private toll bridge that it had repaired and improved using toll revenue but had never claimed the revenues as federal toll credits. The FHWA division office committed staff, including their financial manager, to work with the state DOT, the private bridge company, and an outside auditing firm to determine the eligibility of the toll credits. Together they identified more than $50 million in eligible toll expenditures, finding individual line item expenditures in areas such as preventative maintenance and capital improvements related to tolling equipment and real estate acquisition. The eligible toll credits helped the state meet its fiscal year 2012 federal match requirement. While FHWA officials characterized the division’s activities as appropriate technical assistance that was needed because the state did not have the skills to identify these credits, it placed the agency in a position of approving actions it was actively and closely involved in developing. In one state, an FHWA division office detailed a staff person to work full-time on-site at the state DOT to help bring the state into compliance with the requirements of the Highway Beautification Act of 1965. Division office officials had identified ongoing compliance issues with the state’s outdoor advertising program, including multiple examples of signs that were not in compliance with the state’s agreement with DOT. As a result, FHWA could have withheld 10 percent of the state’s federal-aid funds. However, the division did not withhold these funds. Instead, the division adopted an approach that entailed considerable time and effort on the part of FHWA by detailing a staff person to (1) research sign regulations on federally controlled routes to determine compliance with the federal-state agreement and the Highway Beautification Act, (2) review the state’s outdoor advertising inventory to determine the status of signs, and (3) provide interpretations, clarifications, and authoritative determinations concerning FHWA policy, among other activities.
Despite the risks partnership poses, in several areas of the federal-aid highway program, FHWA has good oversight practices. The Institute of Internal Auditors has identified segregation of duties in the type of partnership FHWA has with state DOTs as one of the most common practices used in managing partnership-related risks. During the administration of the Recovery Act, FHWA developed the National Review Teams (NRT), composed of FHWA staff—separated from the rest of FHWA—to act as a neutral third party to conduct oversight. The NRTs were able to maintain their neutrality and objectivity in part because they did not have to concern themselves with preserving a partnering relationship while conducting oversight and making recommendations for action. In addition, the findings and recommendations of the NRTs were reported both to the FHWA division office, which was responsible for developing action items in response, and to the responsible FHWA Director of Field Services, who was responsible for ensuring the action items were completed within the established time frames. This practice of providing an independent review had several benefits. According to FHWA officials, it provided a consistent, comparative perspective on the oversight regularly conducted by division offices and gathered information at the national level on both best practices and recurring trouble spots across FHWA division offices; additional “boots on the ground” for project-level oversight and increased awareness of federal oversight activity among states, MPOs, and other transportation organizations receiving Recovery Act funds; and an independent outside voice to examine the Recovery Act projects and point out problems, keeping the partnering relationship between the division offices and the state DOTs intact. The response to the NRT reviews from both division office and state officials with whom we spoke was positive. For example, division office officials said that the NRT reviews often echoed their own observations of weaknesses in the state DOT’s program, but they said the state DOT seemed more inclined to act because the NRT was a fresh voice presenting the observations. Division office officials also told us that having the NRT point out deficiencies was helpful to them in maintaining their partnering relationship with the state. State DOT officials we spoke with in our discussion groups generally agreed that the NRT reviews, while they created an additional burden, provided an independent third-party perspective during the implementation of the Recovery Act. In administering the federal-aid highway program, FHWA makes use of two practices that facilitate good oversight. One is a risk management approach to oversight. Conducting risk assessments to identify both internal and external risks to an agency, which is part of FHWA’s approach, is another best practice for agencies. In particular, by targeting areas of risk at both the state and national level, FHWA can focus on specific program areas of concern and better utilize limited resources. Division offices conduct annual assessments of their states to identify the greatest risks and vulnerabilities, and FHWA headquarters uses this information to identify common risk areas across the nation. FHWA officials in several division offices we spoke with stated that they use the risk assessment to inform their oversight activities throughout the year and take specific steps, when applicable, to address the risks.
In 2009, we reported that FHWA had improved its use of risk assessments by proactively identifying risks and their potential impact, as well as developing specific response strategies to inform its planned oversight activities. Our report concluded that FHWA’s guidance and training reflected best practices in risk management in three of four key areas. FHWA also uses random sampling to review documentation of various financial transactions—a practice that is in keeping with its risk-based approach. This approach ensures that FHWA can assess compliance with financial requirements in a systematic way when it is impossible for FHWA to survey all occurring financial transactions. Furthermore, selecting transactions randomly protects the sample from selection bias, to which FHWA division offices could potentially be vulnerable because of their partnering relationship with the state. For example, in its Financial Integrity Review and Evaluation system, FHWA headquarters selects a random sample of transactions for each division office to check for compliance with the Improper Payments Information Act of 2002 (Pub. L. No. 107-300, 116 Stat. 2350 (2002)); these sampled billing transactions are reviewed each quarter to determine if there is sufficient documentation to support the billing item and amount. Legislation has been approved in the Senate that would move the federal-aid highway program toward a more performance-based approach. FHWA’s partnership—its close working relationship with the states—could be useful in making the transition to such a system; however, FHWA would need to effectively address the risks posed by such a close partnership—lax oversight and lack of independence. In addition, it would have to address other existing weaknesses that we have identified in previous reports, including improving the transportation planning process and data collection and evaluation. Finally, long-standing challenges stemming from the growth in the number of responsibilities and complexity within the federal-aid highway program, as well as the lack of well-defined federal goals and roles, would remain. Reexamining and refocusing surface transportation programs, which we have previously recommended, presents an opportunity to narrow the scope of FHWA’s responsibilities so that it is better equipped to transition to a performance-based system. This review identified areas where FHWA expends considerable time and resources but exercises little effective control—areas where devolving responsibilities to the states may be appropriate. A performance-based system is critical to the reexamination and restructuring of surface transportation programs that we and others have recommended. Currently, most highway grant funds are distributed through formulas that have only an indirect relationship to infrastructure needs, and many have no relationship to outcomes or the performance of the grantees. Because funds are distributed without regard to performance, it is difficult to know whether federal spending is improving the performance of the nation’s highway infrastructure. Under Moving Ahead for Progress in the 21st Century (MAP-21), FHWA would develop performance targets for minimum condition levels in two areas: (1) pavement on the Interstate and non-Interstate highways on the National Highway System and (2) bridges on the National Highway System. If a state did not meet the minimum condition levels for 2 consecutive years, it would be required to commit a specific percentage of its federal-aid highway funding to the deficient area.
For other areas, MAP-21 directs states to develop performance targets related to national priorities identified in the legislation, document these targets in their statewide transportation improvement programs, and link investment priorities to these targets. FHWA would have to (1) work with the states to develop performance goals that represent real improvements relative to the state’s current conditions and will improve the performance of the nation’s transportation system, and (2) monitor and measure states’ progress and take corrective action should states not meet performance targets. Legislation approved by the House Transportation and Infrastructure Committee and the President’s fiscal year 2013 budget proposal also refer to states developing performance measures and goals to improve safety, congestion, and other areas. FHWA’s partnership with states could offer several benefits in moving toward a performance-based program. In particular, through the partnership’s collaborative approach, FHWA could provide technical assistance to help states develop performance goals and targets and establish data collection methodologies to evaluate and track their progress. States participating in our discussion groups found the technical assistance, knowledge transfer, and policy advice that FHWA provides a highly valuable benefit of the partnership relationship. Likewise, FHWA division office personnel recognized the benefit partnership offers in facilitating technical assistance. Developing effective performance goals and targets and the data collection methods to track targets poses challenges that technical expertise can address. As we have reported, the more specific, measurable, achievable, and outcome-based the goals are, the better the foundation for allocating resources and optimizing results. Also, goals must be linked to project selection and funding decisions, and without specific and measurable outcomes for federal involvement, policymakers will have difficulty determining whether certain parts of the federal-aid highway program are achieving the desired results. In addition, developing data collection methods that consistently and reliably capture the metrics needed requires technology, planning, and training staff to ensure high-quality data. FHWA has recently developed performance metrics and revamped its data collection approach for the National Bridge Inventory System. Specifically, it adopted a new risk-based, data-driven approach that incorporates the review of 23 individual performance metrics and, where appropriate, makes use of random sampling of the state’s bridges to evaluate the metrics. This recent experience, coupled with its technical expertise in other areas and division office officials’ relationship with, and knowledge of, their state, would help to facilitate knowledge transfer and has the potential to create an effective performance-based program. Moving to a more performance-based approach means monitoring and measuring states’ progress, holding states accountable for meeting performance targets, and taking corrective action objectively and consistently across states when needed. However, this can only be achieved if the risks posed by partnership discussed earlier—lax oversight, reluctance to take corrective action, and lack of independence in decision making—are overcome. In addressing the risks posed by its partnership, FHWA can draw on some of its existing organizational structures.
For example, during the implementation of the Recovery Act, FHWA used its NRTs to augment the oversight provided by division offices by conducting additional programmatic reviews and project inspections. Officials stated that the NRTs were able to maintain their neutrality and objectivity while conducting oversight and making recommendations, and that NRT personnel provided a consistent, comparative perspective to the oversight regularly conducted by division offices. Officials also explained that the NRTs’ observations often reinforced those of division staff while also allowing the partnering relationship between the division offices and state DOTs to remain strong. Although FHWA would have to work with the states to develop performance goals and monitor and measure states’ progress, we have reported weaknesses in federal oversight of both the statewide and metropolitan area planning processes that prevent effective measurement and tracking of performance outcomes. For example, we found that FHWA’s oversight of statewide planning focuses on process, rather than specific transportation outcomes. As such, FHWA cannot assess whether states’ investment decisions are improving the condition and performance of the nation’s transportation system. Similarly, pursuant to federal law, federal oversight of metropolitan planning is process-oriented rather than outcome-oriented, making it difficult to determine whether this oversight was improving transportation planning. Specifically, FHWA’s oversight is geared toward determining whether MPOs are in compliance with federal laws and regulations, and this procedural focus, coupled with the fact that FHWA rarely withholds certification of MPOs, makes it difficult to use the certification process as a performance indicator for MPOs. In addition, we found that while FHWA identifies corrective actions to bring MPOs into compliance, it does not routinely assess the progress MPOs are making toward completing those corrective actions. We recommended to Congress that FHWA more closely review states’ transportation improvement programs to assess whether states’ investments are achieving intended outcomes, rather than limiting its evaluation to whether the state complied with federal processes for developing the plan. We also recommended that Congress make the metropolitan planning processes more performance-based in order for FHWA to better assess the MPOs’ progress in achieving results and better understand whether federal funds are being used to achieve national goals (GAO-09-868). The Senate has approved a multi-year surface transportation reauthorization measure, which could potentially address these recommendations. Although the Government Performance and Results Act of 1993 requires agencies to measure performance toward the achievement of program goals and objectives, we have stated previously that the federal government is not equipped to implement a performance-based approach to transportation funding because it lacks comprehensive data. For example, during the administration of the Recovery Act we recommended that DOT assess the long-term benefits of Recovery Act investments in transportation infrastructure. In its response, DOT said it expected to be able to report on Recovery Act outputs, such as the miles of road paved, but not on outcomes, such as reductions in travel times. We have found other areas in which the lack of comprehensive, national-level data would hinder any move toward a performance-based system.
For example, in administering the on-the-job training program, FHWA does not collect consistent national-level data on the number and demographics of program participants, the trades involved, and the status of trainees. As a result, FHWA is not able to assess program results and hold states accountable. In another example, our review of statewide transportation planning found that while FHWA division offices were collecting data on progress made by states to advance projects on their statewide transportation improvement programs to their construction phase, the data were unreliable to the point of being unusable because they were collected inconsistently across states and could not be used to compare states’ progress. We have made numerous recommendations to DOT related to the need for national-level data—a number of which DOT has yet to implement. FHWA’s success in transitioning to a performance-based program is dependent not only on addressing risks posed by its partnering relationship, but also on factors it cannot control. A performance-based program represents new responsibilities at a time when the growth in the number and complexity of its responsibilities and the lack of well-defined federal goals and roles leave FHWA, to a large extent, with a broad mandate in an increasingly constrained budget environment. As we have reported previously, a performance-based system is one part of the broader need to reexamine and restructure the program. A clearer definition of the federal role and, in turn, FHWA’s responsibilities is under Congress’s purview, and therefore beyond FHWA’s or its partners’ ability to address. In 2008, we recommended that Congress consider a fundamental reexamination and reform of surface transportation programs that would potentially result in a more clearly defined federal role in relation to other levels of government and thus a more targeted federal role focused around evident national interests. For issues in which there is a strong national interest, ongoing federal financial support and direct federal involvement could help meet federal goals. Where national interests are less evident, other stakeholders could assume more responsibility, and some programs and activities may better be devolved to other levels of government. In some cases, it may be appropriate to “turn back” activities and programs to state and local governments if they are best suited to perform them. Even for projects for which states have assumed oversight responsibilities under 23 U.S.C. § 106(c)(2), FHWA is still expected to evaluate state DOT capacity through a number of processes and reviews. Division office officials told us that these activities require a considerable amount of time and effort on the part of their staff. Locally administered projects are projects in which a state DOT has given a local public agency (e.g., a city or county) the responsibility to administer a project or phase of a project such as design, property acquisition, or construction. These projects can either receive full oversight from FHWA or that responsibility can be assumed by the state. During our review, FHWA did not have national-level information on the number of projects or amount of federal funds spent on locally administered projects. However, it began requiring division offices to collect this information for newly authorized projects on March 12, 2012, so such data will be available in the future.
Nevertheless, locally administered projects are ranked by FHWA’s risk assessments as among the highest-risk areas in FHWA’s oversight portfolio at both the state and national levels. For example, at least 33 division offices included risks related to locally administered projects among their most pressing risk areas in 2010. These risks included a lack of understanding of federal-aid construction contract requirements and the use of innovative or nontraditional construction techniques by inexperienced local agencies. Likewise, FHWA headquarters identified locally administered projects as high risk. According to FHWA division and state officials, local agencies struggle to meet the federal regulations that accompany federal-aid funding because of high staff turnover at the local level and the infrequency with which local agencies receive federal funding. These challenges were reiterated throughout the discussion groups we conducted, as well as at the site visits to FHWA division offices that we conducted across the country. FHWA officials from two states described a wide range of risks posed by locally administered projects, including use of outdated design standards, lack of quality control and assurance, lack of standard documentation and recordkeeping, and insufficient knowledge of right-of-way acquisition requirements. One FHWA division office provided examples of locally administered projects in its state that did not conduct construction inspections or materials testing or that bought supplies from foreign countries, actions that are out of compliance with federal regulations. According to officials in three of the FHWA division offices we visited, locally administered projects require considerable time, attention, and resources. For example, according to officials in three division offices, FHWA staff expend a good deal of time and effort providing technical assistance and capacity-building to enhance the ability of local agencies to successfully administer federal-aid projects. Further, our analysis of the 2010 state Single Audits showed that insufficient monitoring of subrecipients, such as on locally administered projects, was one of the most common findings and that 18 of 47 reporting states had findings related to monitoring of subrecipients. Our analysis also showed that state DOTs did not properly communicate federal requirements in their awards to or contracts with subrecipients and that their monitoring of subrecipients during the award was inadequate. Specifically, state DOTs’ monitoring of subrecipients for compliance with federal and state requirements either lacked procedures or showed poor compliance with existing procedures for regular site visits, risk assessments, and performance reporting. As we have reported, devolving parts of the federal-aid highway program would have many implications and would require careful decisions to be made at the federal, state, and local levels. Since the federal-aid highway program has a dedicated source of funding (in that it is funded from fuel taxes and other fees deposited into the Highway Trust Fund), devolving parts of the highway program could entail reducing revenues into the Highway Trust Fund. The decision to reduce revenues at this time would be difficult because the Congressional Budget Office estimates, as of March 2012, that to maintain current spending levels plus inflation between 2013 and 2022, the Highway Trust Fund will require over $125 billion more than it is expected to take in over that period.
At the federal level, it would need to be determined what functions would remain and how federal agencies would be structured and staffed to deliver those programs. At the state and local levels, it would need to be determined whether to replace federal revenues with state taxes and what types of programs to finance. Deciding whether to replace federal revenues with state taxes would be difficult because states also face fiscal challenges and replacing revenues would have different effects on different states. FHWA’s partnership approach with the states allows it to proactively identify issues before they become problems, achieve cost savings, and gain states’ commitment to improve their processes. In some areas, FHWA division offices have good oversight practices that complement its partnership, including using a risk-based approach to its oversight and using an independent, third- party review (the NRTs) to augment its oversight activities during the implementation of the Recovery Act. However, FHWA’s partnership also poses risks that it has not to date directly addressed, that can potentially result in improper or ineffective use of federal funds and the loss of independence necessary for effective oversight. Should Congress direct FHWA to move to a performance- based system, holding states accountable for achieving performance measures—and taking action when they do not—would be essential. Because of the nature of their partnership with the states, FHWA’s division offices may not be in the best position to mitigate partnership risks. Given that partnership produces benefits, the solution does not lie with eliminating FHWA’s partnership approach. Rather, a strategy built around leveraging the strengths of the partnership approach while managing its risks could provide a better way for FHWA to verify the states’ use of federal funds. While such a strategy could take many forms, greater separation of the responsibilities to advance, oversee, and make corrective action decisions in the program would be consistent with good internal control practices and may help FHWA transition to a performance-oriented program. Specifically, a nationally focused, independent oversight entity modeled on the NRTs could be an effective vehicle to mitigate risks associated with partnering between FHWA division offices and state DOTs by conducting periodic evaluations of selected activities and making recommendations for improvement. This could be particularly helpful in instances in which the division offices have been reluctant to take corrective action because of concerns about damaging the partnering relationship. In addition, if Congress directed FHWA to move to a performance-based system, an entity modeled on the NRTs could assess states’ progress toward performance measures and hold states accountable for meeting them. Responsibilities such as technical assistance and knowledge transfer—areas where FHWA’s partnering relationship can help states develop performance goals and targets— could remain with the division offices. Any successful transition to a performance-based system in the highway program requires accurate, reliable national-level data. The partnership that division offices have with state DOTs could help to ensure that states develop data collection methods that would help determine whether the highway system overall was improving. 
Furthermore, FHWA has the expertise to develop and implement a rigorous national-level data collection effort as it recently did with the National Bridge Inventory System. We are not making a new recommendation to DOT on this matter because many of our recommendations on collecting national-level data remain open. In 2008, we recommended that Congress consider reexamining and refocusing surface transportation programs, establishing well-defined goals with direct links to identified federal interests and roles, and consider devolving to the states and other levels of government responsibility for programs where national interests are less evident. The information we gathered during the course of this review and the pending transition to a more performance based federal-aid highway program reinforces the need to act. First, FHWA’s responsibilities have expanded over the years while its resources have not, and the addition of a performance-based system to its already broad mandate would further expand FHWA’s responsibilities. Reexamining and refocusing surface transportation programs presents an opportunity to narrow the scope of FHWA’s responsibilities so that it is better equipped to transition to a performance-based system. Second, this review has identified specific areas where devolving or turning back to the states the responsibilities for managing and funding some parts of the highway program may be appropriate. Turnback would have many implications and would require careful decisions. Yet nearly half of federal-aid highway funds are spent on roads off the National Highway System—projects for which oversight has been assumed by the states—raising questions about whether evident federal interests are at stake. In addition, the considerable federal resources FHWA expends overseeing locally administered projects— including capacity-building activities for city and county governments— raises questions about whether such time and effort is better spent in support of more nationally focused programs and objectives. As we have previously recommended, Congress should consider reexamining and refocusing surface transportation programs, including establishing well-defined goals with direct links to identified federal interests and roles. Based on this review, there may be areas where national interests are less evident and where Congress may wish to consider narrowing FHWA’s responsibilities. We recommend that the Secretary of Transportation direct the FHWA Administrator to develop a strategy based on the NRT model to mitigate the risks associated with its partnering approach with state DOTs, while maintaining the strengths that the partnership approach brings to the program. This strategy should address existing risks and, if Congress directs FHWA to move to a performance-based system, partnering risks that could affect the successful implementation of such a system. We obtained oral comments from DOT officials, including the Director of FHWA’s Office of Program Administration. These officials stated that DOT generally agreed with the findings and recommendations in the report. Specifically, they recognized that the agency’s partnership approach with the states poses oversight risks. They stated that they are implementing efforts based on the NRT model to provide independent reviews and accountability services to improve the efficiency and effectiveness of FHWA programs. 
We will monitor these efforts to assess if the department is responsive to our recommendation that DOT mitigate the risks of its partnership with the states. DOT also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to congressional subcommittees with responsibilities for surface transportation issues and the Secretary of Transportation. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or herrp@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III. The Federal Highway Administration (FHWA) has 52 division offices—one in each state, the District of Columbia, and Puerto Rico—to carry out the day-to-day activities of the federal-aid highway program. These offices are generally located in the same city as the state departments of transportation (state DOTs), which is usually the state capital. In addition, jointly with the Federal Transit Administration, FHWA operates four metropolitan offices in Philadelphia, Pennsylvania; New York, New York; Chicago, Illinois; and Los Angeles, California, which are extensions of their respective division offices. FHWA division offices are organized geographically under three directors of field service who provide administrative supervision and leadership on strategic initiatives to their constituent division offices. FHWA headquarters provides leadership and policy direction for the agency, and FHWA’s Resource Center, with five locations, provides technical expertise, guidance, and training to the states in areas such as air quality, civil rights, construction, environment, safety, and bridges. To address our objectives, we reviewed and analyzed relevant laws, regulations, and FHWA documentation. Specifically, we reviewed previous and current authorizations of the federal-aid highway program, as well as proposed reauthorization language. We also reviewed relevant sections of the U.S. Code that pertain to FHWA and its relationship with the states. Additionally, we reviewed and summarized past GAO work regarding FHWA’s programmatic oversight responsibilities and its relationship with the states. To obtain information on current FHWA structure and oversight activities, we conducted site visits to nine FHWA division offices, including Colorado; Delaware/Maryland, which has a joint office; Maine; Michigan; North Carolina; Oklahoma; Virginia; and Washington, D.C. We selected these states based on a range of selection criteria, including the extent to which oversight responsibilities have been assumed by the state; the size of the state’s transportation program and proportion of federal funding relative to state funds; the type of transportation system in the state (e.g., primarily rural highways and interstates or primarily infrastructure in densely populated urban areas); and geographic distribution. We interviewed FHWA and state DOT officials in multiple settings to learn how they characterize their relationship and what role partnership plays in oversight. We also reviewed selected academic literature on formal partnering practices and tools.
To perform this analysis, GAO conducted a variety of literature and Internet searches, reviewed previous GAO reports, and analyzed literature recommended by engagement stakeholders. We read, analyzed, and synthesized these documents to construct a common definition of partnering, namely that “partnering is an approach that, through collaborative processes and activities, enables parties to achieve individual and mutually beneficial goals and results.” We also identified nine features of partnership from our review of the literature. Of those, we selected four features based on their applicability to FHWA’s partnering relationship with state DOTs, drawing on our observations of FHWA, interviews with FHWA and state DOTs, and our review of documentation of formal partnering arrangements between states and FHWA division offices. In doing so, we developed the following definition of features of partnership: “Partnering processes and behaviors span a continuum of collaborative activities including information sharing, participative and consultative processes, collaborative problem solving, and formal team-building such as charter signing and relationship assessment.” Further, we reviewed literature to identify partnership risks and, after identifying a list of six risks, selected the two risks that were most evident in our audit work and most relevant to FHWA’s partnering relationship with state DOTs: lax oversight and a lack of independence. To obtain an independent view of issues in FHWA’s oversight, we examined the results of the 2010 Single Audits—statewide audits of financial statements and compliance with federal program requirements for certain programs among recipients of federal funds. Forty-seven states reported their results in the Federal Audit Clearinghouse as of October 28, 2011. To determine findings relevant to our work, we identified audit records for FHWA funding categories, which provided us with funding amounts that were subject to audit findings as well as the types of audit findings. We analyzed these findings to determine the types of findings occurring most frequently. For subrecipient monitoring, one of the most frequent audit finding types, we examined full-text Single Audit reports, comparing them against each other to identify common themes. In addition to these efforts, we conducted a survey of all FHWA division administrators, who lead the FHWA division offices located in each state, as well as Washington, D.C., and Puerto Rico. With all 50 states, Washington, D.C., and Puerto Rico, our universe was 52 division offices. We developed a web-based survey instrument of seven closed-ended questions and one open-ended question regarding (1) FHWA’s partnering relationship with state DOTs and (2) FHWA’s use of available corrective actions. We pre-tested the instrument with two division administrators in November 2011. The survey was released in December 2011. We received 52 completed surveys, for a 100 percent response rate. To obtain input from states on their relationship with FHWA division offices and their oversight of the federal-aid highway program, we conducted four discussion groups of state DOT representatives.
We worked in conjunction with the American Association of State Highway and Transportation Officials to speak with personnel from a variety of geographic locations and various programs, including personnel from the areas of construction, locally administered projects, engineering, bridges, and leadership. To determine the extent to which FHWA’s incorporation of partnering practices into its oversight approach supports effective oversight, we used academic literature and GAO reports to identify criteria and effective practices for productive partnering and robust oversight. Using these criteria and effective practices, we assessed FHWA’s current oversight practices by reviewing information from interviews with FHWA headquarters and division offices and state DOTs; site visit observations; and relevant findings from recent and ongoing GAO engagements examining various FHWA program areas. To determine the extent to which FHWA’s partnering approach serves as a foundation for moving toward a performance-based transportation program, we identified principles for a performance-based transportation system in previous GAO reports that can be applied to FHWA, including (1) national transportation goals, (2) performance measures, (3) appropriate performance targets, and (4) employing the best tools to emphasize return on investment. We also reviewed the proposed reauthorization bill, Moving Ahead for Progress in the 21st Century (MAP-21), to incorporate Congress’s expectations for moving toward a performance-based system. We conducted this performance audit from April 2011 to April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, other key contributors to this report were Steve Cohen (Assistant Director), Joah Iannotta (Analyst-in-Charge), Irina Carnevale, Kathryn Crosby, Peter Del Toro, Holly Dye, Bert Japikse, Thomas James, Stuart Kaufmann, SaraAnn Moessbauer, Amy Rosewarne, and Jeffrey Sanders. The U.S. Department of Transportation (DOT) provides about $40 billion to the states annually to build and maintain highways and bridges through the federal-aid highway program. While this program has grown and changed over time, the federal-state relationship has been consistently one of partnership since 1916. DOT’s FHWA has offices in all 50 states that have developed close working relationships with states. Legislation approved by the Senate in March 2012 would establish a more performance-based highway program, introducing performance measures for highways and bridges and requiring FHWA to monitor states’ progress in meeting those measures. As requested, GAO examined (1) how the federal-aid highway program and FHWA’s oversight have changed over time; (2) the extent to which FHWA’s partnership approach produces benefits; (3) the extent to which FHWA’s partnership approach poses risks; and (4) how FHWA’s partnership with state DOTs could affect a transition toward a performance-based highway program. To do this work, GAO conducted site visits and a survey, reviewed relevant documentation, and interviewed FHWA and state officials.
Over the years, the federal-aid highway program has expanded to encompass broader goals, more responsibilities, and a variety of approaches. As the program grew more complex, the Federal Highway Administration’s (FHWA) oversight role also expanded, while its resources have not kept pace. As GAO has reported, this growth occurred without a well-defined overall vision of evident national interests and the federal role in achieving them. GAO has recommended that Congress consider restructuring federal surface transportation programs, and for this and other reasons, funding surface transportation remains on GAO’s high-risk list. FHWA benefits from using partnership practices to advance the federal-aid highway program and conduct program oversight—such as clear delineation of roles and responsibilities between FHWA and its state partners and formal and informal conflict resolution—that are recognized as leading practices. FHWA’s partnership approach allows it to proactively identify issues before they become problems, achieve cost savings, and gain states’ commitment to improve their processes. FHWA’s partnership approach also poses risks. We observed cases where FHWA was lax in its oversight or reluctant to take corrective action to bring states back into compliance with federal requirements, potentially resulting in improper or ineffective use of federal funds. For example, while FHWA has made it a national priority to recoup funds from inactive highway projects—projects that have not expended funds for over 1 year—FHWA officials in three states we visited were reluctant to do so because of concerns about harming their partnership with the state. In other cases, FHWA has shown a lack of independence in decisions, putting its partners’ interests above federal interests. For example, FHWA allowed two states to retain unused emergency relief allocations to fund new emergencies, despite FHWA’s policy that these funds be made available to other states with potentially higher-priority emergencies. In some instances, FHWA became actively and closely involved in implementing solutions to state problems—this can create a conflict when FHWA later must approve or review the effectiveness of those solutions. If proposals for a performance-based highway program are adopted, FHWA would have to work with states to develop measures and take corrective action if states do not meet them. FHWA’s partnership could help states develop measures, but it would need to mitigate the risks posed by its partnership to ensure corrective action was effective when needed. The fundamental reexamination of surface transportation programs, including the highway program, that GAO previously recommended presents an opportunity to narrow FHWA’s responsibilities so that it is better equipped to transition to a performance-based system. GAO identified areas where national interests may be less evident but where FHWA expends considerable time and resources—areas where devolving responsibilities to the states may be appropriate. Congress should consider restructuring federal surface transportation programs. Based on GAO’s review, there may be areas where national interests are less evident and where opportunities exist to narrow FHWA’s responsibilities. Also, DOT should address the risks posed by its partnership approach. DOT generally agreed with the recommendation.
Congress established the Smithsonian in 1846 to administer a large bequest left to the United States by James Smithson, an English scientist, for the purpose of establishing, in Washington, D.C., an institution “for the increase and diffusion of knowledge among men.” In accepting Smithson’s bequest on behalf of the nation, Congress pledged the “faith of the United States” to carry out the purpose of the trust. To that end, the act establishing the Smithsonian provided for the administration of the trust, independent of the government itself, by a Board of Regents and a Secretary, who were given broad discretion in the use of the trust funds. The Board of Regents currently consists of nine private citizens as well as members of all three branches of the federal government, including the Chief Justice of the United States, the Vice President, and six congressional members, three from the Senate and three from the House of Representatives. The three senators are appointed by the President of the Senate, the three representatives are appointed by the Speaker of the House, and the nine citizens are appointed by joint resolution of Congress—two from the District of Columbia and seven from the states. The Smithsonian’s newest museum, the National Museum of African American History and Culture, was authorized by Congress in 2003. Beyond this, in May 2008, Congress established a commission to study the potential creation of a National Museum of the American Latino and whether the museum should be located within the Smithsonian. In addition to its stewardship duties, the Board of Regents is vested with governing authorities over the Smithsonian. It considers matters such as the Smithsonian’s budgets and planning documents, new programs and construction proposals, appointments to Smithsonian advisory boards, and a variety of other issues facing the Smithsonian. Although the Smithsonian is a trust instrumentality of the United States with a private endowment, about two-thirds of its operating revenues in fiscal year 2008 came from federal appropriations. In fiscal year 2008, the Smithsonian’s operating revenues equaled about $1 billion, while its federal appropriations equaled about $678.4 million—$107.1 million for facilities capital, which provides funds for construction and revitalization projects, and $571.3 million for salaries and expenses, which includes funding for the program activities of each museum and research center, rents, utilities, and facilities’ operations, maintenance, and security costs. The Smithsonian’s fiscal year 2008 appropriation was subject to an across-the-board rescission of 1.56 percent, which according to the Smithsonian resulted in an appropriation of $105.4 million for facilities capital and $562.4 million for salaries and expenses. The remaining operating revenues came from the Smithsonian’s private trust funds. For fiscal year 2008, the Smithsonian was also appropriated an additional $15 million for facilities capital (reduced to $14.8 million by the rescission), referred to as the Legacy Fund, to be provided if the Smithsonian received matching private donations of at least $30 million; however, according to a Smithsonian official, the Smithsonian did not meet the matching donations requirement and therefore has not received these funds. In fiscal year 2009, the Smithsonian was appropriated $123 million for facilities capital and $593.4 million for salaries and expenses.
The Smithsonian was also appropriated an additional $15 million for the Legacy Fund, with the same requirements as for fiscal year 2008, except that funds were made available for individual projects in incremental amounts as matching funds were raised. The Smithsonian was also appropriated an additional $25 million for facilities capital under the American Recovery and Reinvestment Act of 2009. In fiscal year 2010, the Smithsonian was appropriated $125 million for facilities capital and $636.16 million for salaries and expenses. Of the $30 million appropriated for the Legacy Fund in fiscal years 2008 and 2009, the approximately $29.8 million unobligated balance was rescinded, and $29.8 million was appropriated under a new requirement—the Legacy Fund is now directed to the Arts and Industries Building for the purpose of facilitating the reopening of this building. The Appropriations Act makes funds available in incremental amounts as private funding becomes available. Private donations, including major in-kind donations, that contribute significantly to the building’s reopening will be matched dollar for dollar. The Smithsonian has implemented 9 reforms recommended by the Board of Regents’ Governance Committee since May 2008—in addition to the 30 reforms it had implemented as of May 2008—bringing the total number of reforms implemented to 39 of 42 reforms. The Smithsonian has not completed implementation of 3 reforms—2 related to improving policies on broader Smithsonian operations (to develop a contracting policy and conduct a comprehensive review of financial reporting and internal controls) and one related to communication and stakeholder relationships (to enhance the role of the Smithsonian advisory boards). Figure 1 summarizes the status of the Smithsonian’s implementation of the Governance Committee’s recommended reforms as of May 2008 and December 1, 2009. As shown in Figure 1, the Smithsonian has implemented 9 Governance Committee reforms since May 2008, including the following: The Smithsonian (1) developed a database to identify potential conflicts of interest; (2) implemented a policy requiring the former Smithsonian Business Ventures (SBV)—now reorganized and renamed Smithsonian Enterprises—to follow Smithsonian policies except in the case of a few documented exceptions; (3) developed an event expense policy covering regent and other Smithsonian events; (4) completed a review of the Smithsonian’s internal controls for travel and expense reimbursement and implemented a number of additional accountability measures for travel and expense reimbursement; (5) held two regent annual public forums; (6) developed a Board orientation process; (7) completed a review and revision of the Board of Regents committees’ charters; (8) completed a review of appointment procedures to Board of Regents committees, which included clarifying the process for appointing nonregents to committees and making this process publicly available on the Smithsonian’s Web site; and (9) implemented a reform calling for a regular assessment of the Board, its committees, and its members. While the Smithsonian has made considerable progress in implementing the Governance Committee’s reforms, work remains on 3 reforms recommended by the Governance Committee: 2 related to policies on broader Smithsonian operations and 1 related to communication and stakeholder relationships. 
According to Smithsonian officials, generally, the Board of Regents is responsible for setting the policies, and the Smithsonian administration is responsible for implementing those policies. While the Board of Regents has approved policies or plans related to the 2 policy-related reforms, the Smithsonian has not completed its implementation of these reforms. In our May 2008 report, we raised concerns about challenges associated with these efforts, stating that effectively implementing the new policies and procedures developed during these reviews was likely to depend on effectively training staff and establishing accountability, both of which could be challenging because of a level of standardization and requirements that did not previously exist. The following provides a brief summary of the Smithsonian’s efforts regarding these reforms: Operational policies—contracting: The Smithsonian has taken steps toward but not fully implemented the governance reform related to improving contracting policies and procedures. The Smithsonian has issued a new contracting policy and is currently writing formal procurement and contracting procedure manuals that implement this policy and provide the rules and procedures for day-to-day procurement and contracting activities. According to the Smithsonian’s Chief Financial Officer (CFO), two of seven parts of the manual are completed and in use and the rest are scheduled to be completed by the end of fiscal year 2010. Completing these manuals is important because a lack of agency-specific policies and procedures can result in an increased risk of improper or wasteful contract payments. Operational policies—financial reporting and internal controls: The Smithsonian has taken steps to implement its reform to conduct a comprehensive review of the Smithsonian’s financial reporting and internal controls. The Smithsonian conducted an initial review of financial reporting and internal controls which led to a plan—approved by the Audit and Review Committee in March 2009—to reduce the risk level of five processes identified by the Smithsonian as high risk by the end of fiscal year 2012. The work laid out in the plan for accomplishing this goal includes such tasks as writing new policies and procedures, training staff on responsibilities and procedures for which they are accountable, and testing and validating controls through policy compliance reviews or personal property inventories. The CFO reported to the Audit and Review Committee that effective execution of the plan will require a commitment to increasing staffing and other resources over time. During the discussion with the CFO, members of the Audit and Review committee expressed concern that providing these resources may be challenging for the Smithsonian, given limited available resources and other priorities, such as collections care and research. Communication and stakeholder relationships—role of advisory boards: The Smithsonian has taken steps to implement its reform to enhance the role of its 30 advisory boards, which include a national advisory board as well as advisory boards that focus on individual museums, research centers, or programs, but has not resolved all issues. The primary purpose of advisory boards is to provide advice, support, and expertise to the directors of museums, research centers, and programs, as well as to the Board of Regents and Secretary. 
We discuss the Smithsonian’s efforts regarding this reform later in this testimony, in connection with the Smithsonian’s actions toward implementing our related May 2008 recommendation. For more information on the Smithsonian’s efforts related to these reforms, see our newly issued report on this subject. The Smithsonian has implemented one of the four recommendations we made in 2008 to strengthen its governance reform efforts, and it has taken steps to implement the other three recommendations. (See fig. 2.)

Assessment—actions in the event of persistent neglect of duties: The Smithsonian implemented GAO’s recommendation to evaluate what actions it can take in the event of persistent neglect of duties by a regent. In July 2009, the Board of Regents Governance and Nominating Committee implemented this recommendation by considering a staff paper that described actions that could be taken in the event of persistent neglect of duties, and approving an approach that included initial counseling and potential referral to the full Board of Regents for appropriate action.

Structure and composition: The Board of Regents has not fully implemented GAO’s recommendation to develop and make public its process for the selection, use, and evaluation of nonregents. The Board of Regents implemented part of the recommendation by posting on its Web site the process for selecting nonregent committee members. However, the Board of Regents did not make a final decision regarding the use of nonregents on committees when, in July 2009, its Governance and Nominating Committee tabled a proposed bylaw to give nonregent members of committees the same roles and responsibilities as regents. Committee members cited issues such as the lack of statutory authority of nonregent committee members and uncertainty over whether certain ethical and disclosure obligations of regents should apply to nonregent committee members, and requested that Smithsonian staff provide the regents with further information on potential implications of this bylaw. According to the chief of staff to the Board of Regents, the Smithsonian subsequently concluded that existing governance requirements in committee charters require that all committee members, including nonregent members, file annual financial disclosures, and the Smithsonian plans to apply this requirement to these individuals. This official also stated that the Governance and Nominating Committee plans to further discuss this issue at its March 2010 meeting.

Communication and stakeholder relationships: The Board of Regents took steps to improve its relationship with stakeholders, including advisory boards. For example, the Chair of the Board of Regents now sends a quarterly email to all advisory board chairs providing information on the most recent Board of Regents’ meeting and asking to be contacted directly with any concerns. According to a Smithsonian official, when a concern is brought to the Chair’s attention, it is either addressed immediately or tracked by the Office of the Board of Regents until a response is provided. In addition, the Smithsonian conducted a workshop of advisory board chairs in April 2009 as part of its strategic planning process, which was organized through the regents and led by the Chair of the Board of Regents and the Secretary of the Smithsonian. According to a Smithsonian official, the input provided by these advisory board chairs was considered as the strategic plan was developed.
However, because of limitations of the efforts thus far—such as their informal nature and focus on dissemination of information from the regents rather than two-way communication—several advisory board chairs with whom we spoke expressed concern that the Board of Regents still lacked a sufficient understanding of Smithsonian museums and other entities to govern as effectively as possible.

Assessment—evaluation: The Board of Regents has not yet conducted a comprehensive evaluation of its reforms but plans to do so in fiscal year 2010. For more information on the Smithsonian’s efforts related to these recommendations, see our newly issued report on this subject. Both the Smithsonian and the Board of Regents concurred with the findings of that report.

The Smithsonian has fully implemented four of the five recommendations we made in our September 2007 report on the Smithsonian’s facilities, security, and funding challenges. It has not implemented the fifth recommendation, to submit a report to Congress and the Office of Management and Budget (OMB) on its funding strategy, but plans to do so. (See fig. 3.) Furthermore, although the Smithsonian has implemented our recommendation to more comprehensively analyze funding strategies to meet the needs of its facilities projects and is planning to launch a national fundraising campaign, it is unclear what amount of funds will be raised through such a campaign and, more specifically, what amount will be dedicated to facilities. In September 2007, we found that the Smithsonian faced challenges in communicating security-related information to museum and facility directors and that it omitted private funds from its capital plan, reducing stakeholders’ ability to comprehensively assess the funding and scope of facilities projects. We also found that the Smithsonian did not have a viable strategy to address its growing cost estimate for facilities projects, increasing the risks faced by its facilities and collections, and likely decreasing its ability to meet its mission.

Security of facilities—communicating information on security staff levels and all-hazards risk assessment: The Smithsonian implemented our recommendations to communicate information to museum and facility directors on (1) daily security staff levels and (2) its all-hazards risk assessment.

Planning of capital projects—capital plan: The Smithsonian implemented our recommendation to include the full scope of planned projects and information on planned funding sources—federal and private funds—for each project in its capital plan. In September 2008, the Smithsonian created a facilities capital plan for fiscal years 2008 through 2017 that includes a description of planned projects and their funding sources.

Funding of capital projects—analyzing funding strategies: The Smithsonian implemented our recommendation to analyze nonfederal funding strategies in a more comprehensive manner. In November 2007, the Board of Regents concurred with a more comprehensively analyzed and prioritized list of nonfederal funding strategies, which included establishing a national campaign to raise private sector funds for Smithsonian programs and facilities, among other strategies.
According to Smithsonian officials, in the wake of the Board of Regents’ September 2009 approval of a new strategic plan for the Smithsonian, the Board of Regents Advancement Committee is developing a plan for a national fundraising campaign in concert with Smithsonian staff, who, among other things, are determining what staff resources are necessary and are coordinating with the Smithsonian museums, programs, and other entities on goals for the plan. The Board of Regents Advancement Committee expects to approve a full national fundraising campaign plan no later than September 2010. While these steps implement our recommendation, it is unclear at this time how much funding will be raised and, more specifically, what amount will be dedicated to facilities.

Funding of capital projects—reporting to Congress and OMB: According to a Smithsonian official, the Smithsonian has not submitted a report to Congress and OMB on its fundraising efforts but plans to do so in the future as part of its communications strategy related to the national fundraising campaign.

Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time.

For further information about this statement, please contact Mark L. Goldstein, Director, Physical Infrastructure Issues, at (202) 512-2834 or at goldsteinm@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement include David Sausville (Assistant Director), Brian Hartman, Susan Michal-Smith, Alwynne Wilbur, and Carrie Wilks.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Smithsonian Institution (Smithsonian) is the world's largest museum complex. Its funding comes from its own private trust fund assets and federal appropriations. The Smithsonian Board of Regents, the Smithsonian's governing body, is responsible for the long-term stewardship of the Smithsonian. In recent years, GAO and others have documented (1) significant governance and accountability breakdowns at the Smithsonian, which could ultimately put funding and the organization's credibility at risk, and (2) the deterioration of the Smithsonian's facilities and the threat this deterioration poses to the Smithsonian's collections. This testimony discusses (1) the Smithsonian's status in implementing governance reforms recommended by its Governance Committee and by GAO in a 2008 report (GAO-08-632)—as discussed in a GAO report being released today (GAO-10-190R)—and (2) the Smithsonian's progress in implementing facilities and funding recommendations GAO made in a 2007 report (GAO-07-1127). The work for this testimony is based on GAO-10-190R and an analysis of documentary and testimonial evidence from Smithsonian officials. GAO is not making recommendations in this testimony and did not make new recommendations in GAO-10-190R. The Smithsonian and the Board of Regents concurred with the findings of GAO-10-190R.
Since May 2008, the Smithsonian has implemented 9 reforms recommended by the Board of Regents Governance Committee—in addition to the 30 it had implemented prior to May 2008—and 1 of 4 GAO recommendations, but work remains on 3 reforms and 3 recommendations. The 9 Governance Committee reforms implemented since May 2008 include efforts such as revising policies related to travel and expense reimbursement and event expenses, creating a regents' annual public forum, completing a review and revision of Board of Regents committees' charters, and developing an assessment process for the Board of Regents. The Smithsonian has not completed implementation of 3 Governance Committee reforms related to the Smithsonian's contracting policy, a comprehensive review of financial reporting and internal controls, and enhancing the role of advisory boards. Regarding GAO's May 2008 recommendations, the Smithsonian implemented GAO's recommendation to evaluate what actions it can take in the event of persistent neglect of duties by a regent, but has not completed implementation of the following three recommendations:

(1) The Board of Regents has not fully implemented GAO's recommendation to develop and make public its process for the selection, use, and evaluation of nonregents. The Board of Regents posted on its Web site the process for selecting nonregent committee members but did not make a final decision regarding a proposed bylaw to give nonregent members of committees the same roles and responsibilities as regents.

(2) The Board of Regents took steps to improve its relationship with stakeholders, including advisory boards. However, because of limitations of the efforts thus far—such as the informal nature of the Board of Regents' efforts and their focus on the dissemination of information from the regents rather than two-way communication—several advisory board chairs with whom GAO spoke expressed concern that the Board of Regents still lacked a sufficient understanding of Smithsonian museums and other entities to govern as effectively as possible.

(3) The Board of Regents has not yet conducted a comprehensive evaluation of its reforms but plans to do so in fiscal year 2010.

The Smithsonian has implemented four of GAO's five 2007 recommendations related to facilities and funding. These include recommendations related to improving the Smithsonian's security communications and the comprehensiveness of its capital plan. Furthermore, the Smithsonian has implemented GAO's recommendation to more comprehensively analyze nonfederal funding options to meet the needs of its facilities projects. The Smithsonian is planning to launch a national fundraising campaign to raise private sector funds for its programs and facilities. It is unclear how much funding will be raised through such a campaign or how much will be dedicated to facilities. The Smithsonian has not implemented GAO's recommendation to submit a report to Congress and the Office of Management and Budget on its funding strategy, but it plans to do so as part of its national fundraising campaign.
Our work over the past several years has demonstrated that improper payments are a long-standing, widespread, and significant problem in the federal government. IPIA has increased visibility over improper payments by requiring executive branch agency heads to identify programs and activities susceptible to significant improper payments, estimate amounts improperly paid, and report on the amounts of improper payments and their actions to reduce them. Similarly, the Recovery Auditing Act provides an impetus for applicable agencies to systematically identify and recover contract overpayments. As the steward of taxpayer dollars, the federal government is accountable for how its agencies and grantees spend hundreds of billions of taxpayer dollars and is responsible for safeguarding those funds against improper payments as well as having mechanisms in place to recoup those funds when improper payments occur. IPIA was enacted in November 2002 with the major objective of enhancing the accuracy and integrity of federal payments. IPIA requires executive branch agency heads to review their programs and activities annually and identify those that may be susceptible to significant improper payments. For each program and activity agencies identify as susceptible, the act requires them to estimate the annual amount of improper payments and to submit those estimates to the Congress. The act further requires that for programs for which estimated improper payments exceed $10 million, agencies are to report annually to the Congress on the actions they are taking to reduce those payments. The act also requires the Director of OMB to prescribe guidance for agencies to use in implementing IPIA. OMB issued implementing guidance which requires the use of a systematic method for the annual review and identification of programs and activities that are susceptible to significant improper payments. The guidance defines significant improper payments as those in any particular program that exceed both 2.5 percent of program payments and $10 million annually. It requires agencies to estimate improper payments annually using statistically valid techniques for each susceptible program or activity. For those agency programs determined to be susceptible to significant improper payments and with estimated annual improper payments greater than $10 million, IPIA and related OMB guidance require each agency to annually report the results of its efforts to reduce improper payments. OMB has stated that having high-quality risk assessments is critical to meeting the objectives of identifying improper payments and is essential for performing corrective actions to eliminate payment errors. Figure 1 provides an overview of the four key steps OMB requires agencies to perform in meeting the improper payment reporting requirements. In addition, under certain conditions, applicable agencies are required to report on their efforts to recover improper payments made to contractors under section 831 of the National Defense Authorization Act for Fiscal Year 2002, commonly known as the Recovery Auditing Act. This legislation contains a provision that requires executive branch agencies entering into contracts with a total value exceeding $500 million in a fiscal year to have cost-effective programs for identifying errors in paying contractors and for recovering amounts erroneously paid. The act further states that a required element of such a program is the use of recovery audits and recovery activities. 
The law authorizes federal agencies to retain recovered funds to cover actual administrative costs as well as to pay other contractors, such as collection agencies. Agencies that are required to undertake recovery audit programs were directed by OMB to provide annual reports on their recovery audit efforts, along with improper payment reporting details, in an appendix to their PARs.

In August 2006, OMB revised its IPIA implementing guidance. The revision consolidates into Appendix C of OMB Circular No. A-123, Management’s Responsibility for Internal Control, all guidance for improper payments and recovery auditing reporting. While inconsistent with the language in IPIA, the revised guidance allows for risk assessments to be conducted less often than annually for programs where improper payment baselines are already established, are in the process of being measured, or are scheduled to be measured by an established date. Although OMB kept its criteria for defining significant improper payments as those exceeding both 2.5 percent of program payments and $10 million, OMB added that it may determine on a case-by-case basis that certain programs that do not meet the threshold may be subject to the annual reporting requirement. Additionally, the revised guidance allows agencies to use alternative sampling methodologies and requires agencies to report on and provide a justification for using these methodologies in their PARs. This revised guidance is effective for agencies’ fiscal year 2006 improper payment estimating and reporting in the PARs or annual reports.

Other OMB guidance states that agencies must describe their corrective actions for reducing the estimated rate and amount of improper payments. Related to corrective actions, OMB’s implementing guidance for IPIA requires that agencies implement a plan to reduce erroneous payments, including identifying the following:

Root causes—For all programs and activities with erroneous payments exceeding $10 million, agencies shall identify the reasons their programs and activities are at risk of erroneous payments and put in place a corrective action plan to reduce erroneous payments.

Reduction targets—Agencies shall set targets for future improper payment levels and a timeline within which the targets will be reached.

Accountability—Agencies shall ensure that their managers and accountable officers (including the agency head) are held accountable for reducing improper payments.

Agencies shall also assess whether they have the information systems and other infrastructure needed to reduce improper payments to minimal cost-effective levels, and identify any statutory or regulatory barriers that may limit agencies’ corrective actions in reducing improper payments.

OMB has also established Eliminating Improper Payments as a program-specific initiative under the President’s Management Agenda (PMA). This separate PMA program initiative began in the first quarter of fiscal year 2005. Previously, agency efforts related to improper payments were tracked along with other financial management activities as part of the Improving Financial Performance initiative of the PMA. The objective of establishing a separate initiative for improper payments was to ensure that agency managers are held accountable for meeting the goals of IPIA and are therefore dedicating the necessary attention and resources to meeting IPIA requirements.
This program initiative establishes an accountability framework for ensuring that federal agencies initiate all necessary financial management improvements for addressing this significant and widespread problem. Specifically, agencies are to measure their improper payments annually, develop improvement targets and corrective actions, and track the results annually to ensure the corrective actions are effective. While DHS has taken actions over the last 3 fiscal years to implement IPIA requirements, much more work needs to be done. In each of the last 3 fiscal years, DHS was unable to perform risk assessments for all of its programs and activities—the first step of IPIA implementation. This and other issues, such as concerns about program identification and testwork performed, contributed to DHS’s reported noncompliance with IPIA over the last 3 fiscal years. Until DHS is able to fully assess its programs, the potential magnitude of improper payments cannot be estimated. For fiscal year 2006, DHS did not perform risk assessments on programs with $13 billion of its $29 billion of payments subject to IPIA. Over $6 billion of this amount related to payments for grant programs. Performing risk assessments of grant programs and testing grant payments can be difficult because of the many layers of grant recipients, as well as the type of recipients and number of grant programs. However, developing a plan to assess risk and potentially test grant payments is important because of financial management weaknesses reported at DHS grantees and concerns about DHS’s grants management process. Developing a plan will also allow DHS to gain an understanding of its risk with respect to grant payments and potentially reduce future improper payments. To comply with the requirements of IPIA and related guidance from OMB, DHS initiated a plan in fiscal year 2004 to reduce its susceptibility to issuing improper payments by having each of its organizational elements complete a risk assessment of major programs by assigning each one an overall risk score. Based on this assessment, none of DHS’s programs were found to be high risk; however, DHS’s independent auditor reported that the agency was not in compliance with IPIA mainly because it had not yet instituted a systematic method of reviewing all programs and identifying those it believed were susceptible to significant erroneous payments. In fiscal year 2005, the auditor again reported noncompliance issues regarding the adequacy of the agency’s risk assessments. Based on DHS’s guidance, each component selected its largest program and completed statistical testing. DHS regarded this quantitative selection as its risk assessment process and did not incorporate qualitative factors. As with fiscal year 2004, DHS reported that it did not identify any programs or activities as being susceptible to significant improper payments and its auditors again reported that DHS was not in compliance with IPIA. The DHS OCFO worked with components during fiscal year 2006 to continue to refine the population of improper payment programs by having the components group Treasury Appropriation Fund Symbols (TAFS) into logical, recognizable programs. After identifying the population of disbursements for fiscal year 2006 IPIA testing, DHS components provided the necessary payment data to a contractor with expertise in statistical testing. The contractor constructed stratified sampling plans and samples for DHS components to perform IPIA testing for DHS’s risk assessment process. 
This testing was expanded from fiscal year 2005 to include, based on DHS’s revised guidance, all DHS programs issuing more than $100 million of IPIA-relevant payments. Two programs were found to be high risk. However, despite these efforts, DHS’s independent auditor found that the agency was still not in compliance with IPIA as reported in its fiscal year 2006 PAR, primarily because not all programs subject to IPIA were tested, and the population of disbursements tested for some programs was not complete. Appendix III contains additional information about DHS’s prior year IPIA PAR reporting and compliance issues reported by its independent auditor.

Although DHS made progress in identifying its programs in fiscal year 2006, the agency did not perform a risk assessment for all programs and activities; the unassessed programs accounted for approximately $13 billion of its more than $29 billion in disbursements subject to IPIA. According to DHS, this was primarily due to a lack of resources, guidance, and experience in performing this work. This was a major factor in the independent auditors’ finding that DHS was noncompliant with IPIA for fiscal year 2006.

DHS performed risk assessments (step 1) for programs accounting for approximately $16 billion of the $29 billion in disbursements subject to IPIA review. Of this $16 billion covered by risk assessments, approximately $7 billion related to FEMA’s disaster relief programs that were found to be at high risk for issuing significant improper payments, and therefore steps 2 through 4 were completed to estimate improper payments, develop a plan to reduce improper payments, and report this information. This testing resulted in estimated improper payments issued by FEMA from September 2005 through March 2006 of $450 million (8.56 percent) of Individuals and Households Program (IHP) assistance payments and $319 million (7.44 percent) of disaster-related vendor payments. Although the necessary IPIA work—steps 1 and 2—was completed for the two DHS high-risk programs, the time period covered for testing and reporting (i.e., September 2005 through March 2006) was not in accordance with OMB’s implementing guidance, also contributing to DHS’s reported noncompliance with IPIA. The remaining programs, with disbursements totaling $9 billion, were not found to be at risk for issuing significant improper payments, and therefore DHS did not report improper payments for these programs. For some of its nondisaster programs, DHS performed statistical sample testing for those programs with disbursements greater than $100 million, without first performing a qualitative risk assessment such as an assessment of internal controls, oversight and monitoring activities, and results from external audits. While this approach is perhaps better than not doing any assessment, DHS officials concurred that it could be considered an inefficient use of resources if a program is not at high risk. Table 1 shows DHS’s population of programs identified for IPIA testing and the status of DHS’s IPIA risk assessment process performed in fiscal year 2006.

Since DHS did not perform the required first step—a risk assessment—on programs with approximately $13 billion of its more than $29 billion in disbursements subject to IPIA, it is unknown whether these programs are at high risk for issuing improper payments. DHS encountered challenges implementing IPIA for the programs with $13 billion of disbursements for which no risk assessment or testing was performed in fiscal year 2006.
Over $6 billion of this amount related to payments for grant programs. The remaining $7 billion related primarily to FEMA nondisaster programs and TSA programs not categorized as grant or nongrant programs, and USCG operating expenses. DHS’s grant programs include the National Flood Insurance Program (NFIP), which had disbursements of over $3 billion that should have been included in DHS’s IPIA population for review in fiscal year 2006.

As we have previously reported, measuring improper payments and designing and implementing actions to reduce or eliminate them are not simple tasks, particularly for grant programs that rely on quality administration efforts at the state level. DHS has an even greater challenge in the diversity of recipients for its grants, which include state and local governments, individuals, and other entities. During fiscal year 2006, DHS awarded grants under 70 different grant programs to over 5 million recipients, including state and local governments, nonprofits, other entities, and individuals. Although disbursements made related to these grants are subject to IPIA, as DHS has noted, performing risk assessments of grant programs and testing grant payments are difficult because of the many layers of grant recipients, as well as the type of recipients and number of grant programs. Developing a plan to assess risk and potentially test grant payments is important because of noted financial management weaknesses of DHS grantees. For example, DHS’s independent auditors and the DHS OIG have reported grants management weaknesses in part because the agency did not adequately follow up on audit findings pertaining to grantees’ potential improper payments. In addition, the DHS OIG identified grants management as a major management challenge facing the department. We have also identified the NFIP as a high-risk program. A list of DHS’s grant programs is presented in appendix IV. Appendix IV also shows the primary types of recipients and fiscal year 2006 award information for each grant program, as well as the component that administers the program. Given the identified weaknesses and the high-dollar amount, as well as the inherent risk associated with grant programs, it is important for DHS to assess grant programs for susceptibility to significant improper payments in accordance with IPIA. Assessing and, if necessary, testing these grant programs will allow DHS to gain an understanding of its risk in this area related to improper payments and potentially reduce future improper payments.

During fiscal year 2006, DHS completed a risk assessment by performing sample testing for grants administered by the Transportation Security Administration (TSA) with disbursements of about $343 million; however, the department was unable to perform an assessment of its grant programs administered by the Office of Grants and Training (GT). Of the approximately $13 billion for which DHS did not perform a risk assessment, over $3 billion related to grant programs administered by GT. In addition to the NFIP, FEMA also administers other grant programs, which, with the exception of IHP, were not tested during fiscal year 2006. DHS identified three IPIA programs within GT (Domestic Preparedness, State and Local Programs, and Firefighter Assistance Grants) totaling $3.1 billion of fiscal year 2005 disbursements for fiscal year 2006 IPIA testing; however, GT did not perform an assessment or complete statistical sample testing on these grant programs.
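The statistical sample testing described above (drawing a stratified sample of payments, reviewing each sampled payment for error, and projecting the results to the full population) can be illustrated with a brief sketch. The strata, payment amounts, and error findings below are entirely hypothetical and are not DHS data; the sketch simply shows how a dollar-weighted improper payment rate and dollar estimate can be derived from sample results.

```python
# Minimal sketch of projecting an improper payment estimate from a
# stratified sample of payments. All figures below are hypothetical.

# Each stratum: total dollars disbursed in the stratum and the sampled
# payments, recorded as (amount_paid, improper_amount_found_in_review).
strata = {
    "payments_under_10k": {
        "population_dollars": 400_000_000,
        "sample": [(2_500, 0), (7_800, 600), (1_200, 0), (9_100, 9_100)],
    },
    "payments_10k_and_over": {
        "population_dollars": 600_000_000,
        "sample": [(45_000, 0), (120_000, 4_000), (60_000, 0)],
    },
}

total_population_dollars = 0
total_estimated_improper = 0.0

for name, stratum in strata.items():
    sampled_dollars = sum(amount for amount, _ in stratum["sample"])
    improper_dollars = sum(improper for _, improper in stratum["sample"])
    # Dollar-weighted improper payment rate within the stratum,
    # projected to the stratum's total disbursements.
    stratum_rate = improper_dollars / sampled_dollars
    stratum_estimate = stratum_rate * stratum["population_dollars"]
    total_population_dollars += stratum["population_dollars"]
    total_estimated_improper += stratum_estimate
    print(f"{name}: estimated rate {stratum_rate:.2%}, "
          f"estimated improper payments ${stratum_estimate:,.0f}")

overall_rate = total_estimated_improper / total_population_dollars
print(f"Overall estimate: ${total_estimated_improper:,.0f} "
      f"({overall_rate:.2%} of ${total_population_dollars:,.0f})")
```

An actual engagement would also design sample sizes to achieve the precision a statistically valid estimate requires and would typically report sampling error (for example, confidence intervals) around the point estimate.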
In its fiscal year 2006 PAR, DHS reported that one complication it was unable to overcome was how to extend statistical sample testing to grant recipients. DHS also had difficulty testing its grant programs because of the large number of grant programs identified for testing based on DHS’s guidance for fiscal year 2006 program identification and risk assessment methodology, which required that all programs with total disbursements exceeding $100 million be selected and statistically tested. DHS reported that one of the problems with its fiscal year 2006 IPIA methodology was that its risk assessments were based on strictly quantitative factors, instead of both qualitative and quantitative factors. Although OMB has not yet provided guidance as we have previously recommended, DHS issued internal guidance recognizing the need to consider qualitative factors. One such qualitative factor that DHS could consider as part of its risk assessment process is the results of reports prepared under the Single Audit Act, as amended, for its grantees.

During fiscal year 2006, DHS’s independent auditors reported that the agency was not in compliance with the Single Audit Act. According to the independent auditors’ report, FEMA and TSA are required to comply with certain provisions of OMB Circular No. A-133, which requires agencies awarding grants to ensure that they receive grantee reports in a timely manner and to follow up on grantee single audit findings. Although certain procedures have been implemented to monitor grantees and their audit findings, the auditors noted that DHS did not have procedures in place to comply with these provisions in OMB Circular No. A-133 and follow up on questioned costs and other matters identified in these reports. TSA has developed a corrective action plan to establish a new system and processes to track and review single audit reports, but FEMA has not completely developed its corrective action plans due to the previously mentioned organizational changes during fiscal year 2007.

We identified 37 DHS grantees—with awards totaling $2.1 billion—that had single audit findings related to questioned costs for fiscal year 2005. Some examples of questioned costs described in audit reports follow.

One single audit report questioned $353,000 in unallowable charges for salaries and benefits due to a lack of adequate documentation.

One grantee had expenditures that did not have appropriate supporting documentation, with the questioned amount totaling almost $80,000.

Another grantee had costs of about $72,000 that were improperly charged to the grant program.

A third grantee overclaimed reimbursement amounts of about $4,000.

The DHS OIG also conducts audits relating to the programs and operations of DHS, including grant programs. The DHS OIG reviews several factors to determine which activities to audit, including current or potential dollar magnitude, and reports or allegations of impropriety or problems in implementing the programs. The objectives of these grant program audits include determining whether the grantee accounted for and expended funds according to federal regulations and DHS guidelines. For certain grantees, the DHS OIG has found questioned costs such as excessive charges, duplicate payments, ineligible contractor costs, unsupported contractor and labor costs, and other expenditures. The following are examples of DHS OIG findings from fiscal years 2005 through 2007. The DHS OIG found that one particular grantee had questioned costs of more than $1.8 million.
The DHS OIG has also found instances in which a grantee did not follow all federal procurement standards or DHS guidelines in awarding contracts and needed improvements in procedures for making payments to subgrantees. One instance involved awarding contracts totaling more than $14 million, and another involved more than $8 million in contract work.

In an effort to address the agency’s noncompliance with the Single Audit Act, as amended, DHS’s Office of Grant Policy and Oversight (GPO) told us that it instituted an informal oversight process for single audits during fiscal year 2007 and is in the process of developing formal procedures. According to GPO, the development of this process is an attempt to address some of the grants management concerns that have been identified at DHS by its auditors and the DHS OIG. This monitoring process will help DHS focus on audit findings at grantees and could help DHS perform a risk assessment of grant programs for IPIA purposes by providing qualitative criteria.

DHS has taken steps to address IPIA requirements, but the agency does not expect to be compliant in fiscal year 2007 and will likely not be compliant in fiscal year 2008. During fiscal year 2007, DHS prepared, and continues to refine, a departmentwide corrective action plan to address internal control weaknesses and noncompliance issues, including IPIA; however, the agency continues to encounter challenges in developing a plan to fully perform a risk assessment process. DHS used this corrective action plan to update its guidance and, according to DHS officials, the agency plans to focus on program identification and risk assessments during fiscal year 2007. Although DHS does not expect to be compliant in fiscal year 2007, focusing on these areas will help the agency build a solid foundation for its IPIA program. In addition to its overall corrective action plan to comply with IPIA, DHS, as required by IPIA and related OMB implementing guidance, has developed plans to reduce improper payments related to the two high-risk programs it has identified thus far. These plans include reducing manual processing, improving system interfaces, and clarifying roles and responsibilities. If properly executed, these plans should help reduce future improper payments in these programs by strengthening internal controls. With regard to system improvements, as we have previously recommended, DHS needs to conduct effective testing to provide reasonable assurance that the system will function in a disaster recovery environment.

DHS has developed a corrective action plan to address the findings of its independent auditor, including its noncompliance with IPIA. In its most recent audit report for fiscal year 2006, the auditor recommended that DHS follow OMB guidance to complete the necessary susceptibility assessments, perform testwork over all material programs, and institute sampling techniques to allow for statistical projection of the results of its improper payments testing. In its IPIA corrective action plan, DHS documented the root causes that it believes have resulted in its noncompliance, and analyzed the key success factors, key performance measures, verification and validation procedures, risks, impediments, dependencies with other corrective actions, resources required, and critical milestones needed to become compliant with IPIA; however, implementation will take significant time and effort.
DHS cited its lack of resources, guidance, and experience in executing IPIA risk assessments as root causes of its noncompliance with IPIA. The corrective action plan identified a number of items related to IPIA compliance, including these root causes. DHS also identified critical milestones in its corrective action plan for IPIA compliance, including due dates and status. However, these efforts remain ongoing, and DHS has already missed some milestones. For example, while DHS initially planned for each component to identify its IPIA programs and disbursement populations by January 2007, this milestone was delayed until June 2007. As of July 8, 2007, according to DHS, the agency was waiting for one component to submit its list of programs, and DHS was in the process of reviewing submissions from the other components. Because of such delays, DHS does not expect to be in compliance with IPIA in fiscal year 2007 and will likely be noncompliant in fiscal year 2008. DHS’s updated critical milestones as of June 7, 2007, related to fiscal year 2007 are presented in table 3.

DHS’s planning and assessment process to develop its IPIA corrective action plan enabled the agency to update its guidance for its components and, according to DHS, the agency plans to focus on program identification and risk assessments during fiscal year 2007. Strengthening risk assessments and identifying potential improper payments are also important in order for DHS to begin taking steps to reduce improper payments and ultimately improve the integrity of the payments it makes. According to DHS officials, the department has been working in close consultation with OMB, sharing guidance documents, program test plans and results, and recovery audit status reports. Regardless of whether DHS is able to fully complete these efforts in fiscal year 2007, focusing on these areas will help the agency build a solid foundation for a sustainable IPIA program.

The updated guidance was issued in May 2007 and is to be in effect for fiscal year 2007 reporting. In this revised guidance, DHS clarifies how its components should identify their population of programs. In addition, DHS requires its components to perform a comprehensive risk assessment in order to identify programs susceptible to significant improper payments. DHS has designed a detailed methodology to conduct the IPIA risk assessment, and this methodology is outlined in the May 2007 guidance. The methodology, which includes qualitative criteria, as we have previously discussed, involves the creation of a program risk matrix based upon specific risk elements that affect the likelihood of improper payments. Further, the guidance states that a program may be selected for testing even if it does not meet the quantitative or qualitative assessments, noting that it is entirely possible that the risk assessment process may not identify a program as high risk, but component management may believe a program is high risk due to a high-level public profile or known financial or regulatory issues (such as a high-profile contract). For those programs found to be at high risk for issuing improper payments, the guidance also provides instructions for estimating improper payments, implementing a plan to reduce improper payments, and reporting on this information. Each of these procedures outlined in the May 2007 guidance includes instructions to submit information or documentation to the Internal Controls over Financial Reporting (ICOFR) Program Management Office (PMO).
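To make the risk matrix concept concrete, the sketch below scores a hypothetical program against a handful of illustrative risk elements and combines the result with a simple payment-volume test. The risk elements, weights, cutoff, and the rule for combining the quantitative and qualitative results are placeholders of our own, not the elements defined in DHS's May 2007 guidance.

```python
# Hypothetical sketch of a program risk matrix that combines a
# quantitative criterion (annual payment volume) with qualitative
# risk elements scored 1 (low) to 3 (high). All elements, weights,
# and thresholds are illustrative only.

QUALITATIVE_WEIGHTS = {
    "new_or_changed_program": 2,
    "complexity_of_eligibility_rules": 3,
    "prior_audit_findings": 3,
    "reliance_on_third_parties": 2,
}

def assess_program(name, annual_payments, qualitative_scores,
                   management_override=False):
    """Return whether a program should receive statistical testing."""
    # Quantitative flag: large payment volume (e.g., over $100 million).
    quantitative_flag = annual_payments > 100_000_000

    # Weighted qualitative score, normalized against the maximum possible.
    weighted = sum(QUALITATIVE_WEIGHTS[k] * qualitative_scores[k]
                   for k in QUALITATIVE_WEIGHTS)
    max_score = sum(w * 3 for w in QUALITATIVE_WEIGHTS.values())
    qualitative_flag = weighted / max_score >= 0.6  # illustrative cutoff

    # The combination rule is a placeholder; a program can also be
    # selected at management's discretion (e.g., high public profile
    # or known financial or regulatory issues).
    selected = (quantitative_flag and qualitative_flag) or management_override
    return {"program": name,
            "quantitative_flag": quantitative_flag,
            "qualitative_flag": qualitative_flag,
            "selected_for_testing": selected}

print(assess_program(
    "Hypothetical grant program",
    annual_payments=350_000_000,
    qualitative_scores={"new_or_changed_program": 2,
                        "complexity_of_eligibility_rules": 3,
                        "prior_audit_findings": 3,
                        "reliance_on_third_parties": 2},
))
```

Programs flagged by such an assessment would then move through the remaining steps described earlier: estimating improper payments, implementing a plan to reduce them, and reporting the results.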
DHS’s May 2007 guidance for fiscal year 2007 also outlines possible alternative approaches for testing grants. One possible alternative is the complete documentation of the component’s grant management process and the testing of internal controls. According to DHS, this approach helps the component identify specific weaknesses within the grant process, rather than sampling payments at random to determine potential errors. A second alternative is to perform a risk assessment on the program’s grant portfolio. This alternative helps the program identify specific grants that may be more susceptible to improper payments. The identified grants would then be subject to improper payment sampling. If a component wishes to consider alternative approaches to grant sampling, an explanatory memorandum must be submitted to the ICOFR PMO for review and approval. If approved by the ICOFR PMO, DHS will submit the alternative approach request to OMB for review and approval. Also, OMB has reported that the Chief Financial Officers (CFO) Council continues to play a critical role in efforts to address and reduce improper payments through its Improper Payments Transformation Team. This group has been collaborating with nongovernmental entities to consolidate governmentwide best practices; enumerate legislative and regulatory barriers that hinder program integrity efforts; and develop forums where federal and state stakeholders from the program, audit, and financial communities work together to solve program integrity challenges. These activities could provide guidance to help DHS determine how to best test its grant programs. DHS also plans to hold workshops for its components on statistical sample testing and reporting to ensure that they have a consistent understanding of what is expected with regard to IPIA testing and reporting. Although DHS does not expect to be in compliance with IPIA in fiscal year 2007, completing a thorough risk assessment process is an important first step. In addition to developing the corrective action plans described, DHS has a broader initiative to resolve material internal control weaknesses and build management assurances across the department. During fiscal year 2007, DHS established the ICOFR PMO as a new office within the DHS OCFO. The ICOFR PMO is responsible for departmentwide implementation of OMB Circular No. A-123. In March 2007, DHS issued the ICOFR Playbook, which outlines the department’s strategy and processes to resolve material weaknesses and build management assurances and incorporates the departmentwide corrective action plans, which contain more detailed information. The ICOFR PMO is responsible for the ICOFR Playbook and, according to DHS, the agency will update the ICOFR Playbook each year, establishing milestones and focus areas that will be tracked during the year. One section of the ICOFR Playbook relates to IPIA testing, and it discusses the actions taken by DHS in fiscal year 2006 to meet IPIA requirements. This section also states that DHS will develop policies and procedures to integrate the requirements of OMB’s implementing guidance for IPIA into annual component management assurances of compliance with significant laws and regulations, as part of DHS management’s assertion on internal controls over financial reporting and in an effort to strengthen internal controls to support DHS’s mission. In addition to management providing an assertion on internal controls over financial reporting, DHS is required to obtain a related auditor’s opinion. 
Incorporating IPIA into this guidance will increase the likelihood of successful implementation and could also strengthen related internal controls. The ICOFR Playbook draws attention to the process of addressing IPIA requirements across the department. By successfully addressing the requirements of IPIA, DHS will be in a better position to take steps to reduce improper payments, as the ultimate goal of IPIA reporting is to improve the integrity of payments that the agency makes. Further, DHS has testified that to ensure the long-term effectiveness of the department’s efforts to reduce improper payments, DHS requested resources in its fiscal year 2008 budget to hire additional staff so that it can enhance risk assessment procedures and conduct oversight and review of component test plans. In addition to its overall corrective action plan to comply with IPIA, DHS, as required by IPIA and related OMB implementing guidance, has developed plans to reduce improper payments related to the two high-risk programs it identified in its fiscal year 2006 testing—FEMA’s IHP assistance payments and disaster-related vendor payment programs. These plans highlighted improving internal controls to prevent improper payments in each of these programs. FEMA’s testing of its two high-risk disaster-related programs identified several key internal control weaknesses, including ineffective system controls to review data for potential duplications and inconsistently applied standards for supporting evidence and documentation. To address these findings, FEMA initiated corrective action plans aimed at reducing improper payments by strengthening internal controls. These plans included validating Social Security numbers during telephone registration, increasing IT systems capabilities to handle high volume during a catastrophic disaster, and enhancing post-payment reviews. Our prior reporting also identified significant internal control deficiencies in the IHP program. To address OMB’s reporting requirements on actions for reducing improper payments, DHS included in its fiscal year 2006 PAR corrective action plans for IHP assistance payments and disaster-related vendor payments. For each of the two high-risk programs, DHS prepared a schedule of corrective action plans with target completion dates. For the IHP program, DHS included corrective action plans that were already completed in addition to those in process and planned. DHS has also established critical milestones for reducing improper, disaster-related vendor payments. During fiscal year 2007, DHS updated and tracked its corrective action plan critical milestones. Details of these corrective action plan critical milestones can be found in appendix V. Based on DHS’s updated corrective action plan report for IHP, as of May 14, 2007, DHS had not completed certain critical milestones by the identified target date. These milestones included system interface improvements and certain contract awards. Missing these established critical milestones delays strengthening internal controls that are necessary to reduce future improper payments, and therefore it is important that DHS stays on track in implementing its corrective action plans. DHS also noted that human capital is the principal requirement to execute these two corrective action plans; however, according to DHS, exact requirements are not estimable at this time. 
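One of the control weaknesses identified by FEMA's testing, ineffective system controls to review data for potential duplications, can be illustrated with a simple prepayment screen. The records and field names below are hypothetical and are not drawn from FEMA's systems; the sketch shows only the general technique of holding registrations that share an identifying field for manual review before payment.

```python
from collections import defaultdict

# Hypothetical disaster assistance registrations; field names are
# illustrative only.
registrations = [
    {"reg_id": "R-001", "ssn": "123-45-6789", "damaged_address": "10 Main St, Gulfport MS"},
    {"reg_id": "R-002", "ssn": "987-65-4321", "damaged_address": "22 Oak Ave, Biloxi MS"},
    {"reg_id": "R-003", "ssn": "123-45-6789", "damaged_address": "14 Pine Rd, Mobile AL"},
    {"reg_id": "R-004", "ssn": "555-12-3456", "damaged_address": "22 Oak Ave, Biloxi MS"},
]

def find_potential_duplicates(records, keys=("ssn", "damaged_address")):
    """Group registrations that share any of the given identifying fields."""
    flagged = []
    for key in keys:
        groups = defaultdict(list)
        for rec in records:
            groups[rec[key].strip().lower()].append(rec["reg_id"])
        for value, ids in groups.items():
            if len(ids) > 1:
                flagged.append({"field": key, "registrations": ids})
    return flagged

# Registrations flagged here would be held for manual review before payment.
for match in find_potential_duplicates(registrations):
    print(f"Potential duplicate on {match['field']}: {match['registrations']}")
```

A production control would also verify that identifiers such as Social Security numbers are well formed and validly issued before allowing a registration to proceed, consistent with the registration validation step FEMA describes.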
With regard to system improvements, as we have previously recommended, DHS needs to conduct effective testing to provide reasonable assurance that the system will function in a disaster recovery environment. For the last 3 years, DHS has contracted with a recovery auditing firm to perform recovery audit work to comply with the Recovery Auditing Act; however, activities in this area could be improved. Specifically, DHS encountered problems that kept it from reporting on recovery audit efforts during fiscal year 2006. DHS was not able to report recovery audit results in fiscal year 2006 for three of the four components it identified as meeting the criteria for recovery auditing as specified in the Recovery Auditing Act (i.e., over $500 million in contractor payments) due to problems obtaining disbursement data and delays in obtaining security clearances for contract personnel. In addition, DHS did not perform recovery auditing efforts at the fourth component identified as meeting the criteria. Further, DHS has not yet reported on its efforts to recover improper payments identified during its testing of FEMA’s disaster-related vendor payments and has reported limited information on its efforts to recover identified improper IHP assistance payments. In March 2007, DHS revised its internal guidance for recovery auditing for fiscal year 2007 to discuss the issues encountered in previous years and to emphasize timelines to help ensure that all applicable components are able to report. This guidance clarifies what is expected of applicable components, but ongoing oversight within the OCFO will be necessary to ensure that components are progressing with their recovery auditing efforts and will be able to successfully report on the results of these efforts at year end. In addition, DHS’s updated guidance does not require components to report on efforts to recover improper payments identified during IPIA testing. Reporting this information in the annual PAR would provide a more complete picture of the agency’s actions to recover payments that it has identified as being improper. As an executive branch agency, DHS is required to perform recovery audits under certain conditions as specified by the Recovery Auditing Act. Beginning with fiscal year 2004, OMB required that applicable agencies publicly report on their recovery auditing efforts as part of their PAR reporting of improper payment information. Agencies are required to discuss any contract types excluded from review and justification for doing so. Agencies are also required to report, in table format, various amounts related to contracts subject to review and actually reviewed, contract amounts identified for recovery and actually recovered, and prior-year amounts. DHS took steps to identify and recover improperly disbursed funds by hiring an independent contractor who conducted recovery audit work at two major components, ICE and CBP. DHS began recovery auditing efforts during fiscal year 2004 but was not able to report on these efforts for that year because initial findings were not available in time to be included in the annual PAR. This recovery audit work continued during fiscal year 2005 and covered all fiscal year 2004 disbursements to contractors from these two components, ultimately identifying more than $2.1 million of improper payments and recovering more than $1.2 million, as reported in DHS’s fiscal year 2005 PAR. 
While DHS was able to recover about 55 percent of improper payments identified through its recovery audit efforts, based on our review of other agencies, we have previously questioned whether agency amounts identified for recovery should have been much higher, which would significantly decrease the agency-specific and overall high rate of recovery.

According to DHS’s fiscal year 2006 PAR reporting, recovery audit contract work over fiscal year 2005 disbursements began in fiscal year 2005 at CBP and ICE, and DHS extended its recovery audit work to include USCG in fiscal year 2006. Delays in obtaining security clearances for contract personnel severely hampered completion of recovery audit work at CBP and ICE. Delays in supplying needed disbursement information hindered recovery audit work at USCG. As a result, DHS was not able to provide conclusive recovery audit summary results for fiscal year 2006 PAR reporting. According to DHS, four of its components—ICE, CBP, USCG, and FEMA—meet the criteria for recovery auditing as specified in the Recovery Auditing Act (i.e., each has over $500 million in contractor payments). ICE, CBP, and USCG entered into the same recovery audit contract. FEMA’s recovery audit work in fiscal year 2006 was part of a pilot study on internal controls over improper payments for IHP assistance and disaster-related vendor payments. In the aftermath of Hurricane Katrina, DHS and FEMA, with the assistance of a contractor, conducted an internal controls assessment related to improper IHP assistance and disaster-related vendor payments. Although this assessment identified improper payments, DHS has not yet reported on its efforts to recover improper payments identified during its testing of FEMA’s disaster-related vendor payments and has reported limited information, such as the dollar amount of improper payments approved for recovery and the amount returned to FEMA, related to its efforts to recover improper IHP payments. Of the 3 years agencies have been required to report on recovery audits in table format, DHS was able to report the required recovery audit data only in its fiscal year 2005 PAR. Table 4 presents DHS’s recovery audit efforts and results for fiscal years 2004 through 2006.

DHS has recently revised and clarified its internal guidance related to recovery auditing for fiscal year 2007 to discuss prior issues and emphasize timelines to help ensure that all applicable components are able to complete recovery audits and report on their efforts. The new guidance requires that applicable DHS components provide the ICOFR PMO with a general description and evaluation of the steps taken to carry out a recovery auditing program. Components are required to include a discussion of any security clearance requirements and show that there is sufficient time to allow contractors to complete audit recovery work in time to meet PAR reporting deadlines. Every update should include the total amount of contracts subject to review, the actual amount of contracts reviewed, the amount identified for recovery, and the amounts actually recovered in the current year. The year-end update should include a corrective action plan to address the root causes of payment errors. A general description and evaluation of any management improvements to address flaws in a component’s internal controls over contractor payments discovered during the course of implementing a recovery audit program, or other control activities over contractor payments, is also required.
This guidance applies to the four DHS components—CBP, FEMA, ICE, and USCG—that meet Recovery Auditing Act criteria. In addition, according to DHS, the ICOFR PMO may expand recovery audit contracting to other components as the benefits of this work become clearer. Although DHS’s guidance clarifies what is expected of components, ongoing oversight within the OCFO will be necessary to ensure that the components are progressing with their recovery auditing efforts and will be able to successfully report on results at year end.

In addition to specific recovery audit work to identify improper payments made to contractors, DHS also identifies improper payments through its IPIA testing. For example, as discussed previously, DHS’s testing in fiscal year 2006 of its two high-risk programs identified improper IHP assistance payments and disaster-related vendor payments made by FEMA. However, DHS’s internal guidance does not require components to include in the annual PAR information on their efforts to recover improper payments identified during IPIA testing, and, as a result, DHS has not yet reported on its efforts to recover improper disaster-related vendor payments identified and has reported limited information on its efforts to recover identified improper IHP assistance payments. Having components report this information in the annual PAR would provide a more complete picture of the agency’s actions to recover payments that it has identified as being improper.

Although DHS has made some progress in implementing the requirements of IPIA, challenges remain in ensuring that all DHS programs and activities, including grant programs, have been reviewed to determine their susceptibility to significant improper payments and tested, if applicable. As DHS continues to improve its IPIA efforts and to identify and test its high-risk programs, the agency should be better able to identify improper payments and ultimately strengthen controls to reduce them. While preventive internal controls should be maintained as the agency’s front-line defense against making improper payments, recovery auditing holds promise as a cost-effective means of identifying contractor overpayments. In addition, reporting on efforts to recover any other specific improper payments identified would provide a more complete picture of the agency’s actions to recover payments that it has identified as being improper. With the ongoing imbalance between revenues and outlays across the federal government, and the Congress’s and the American public’s increasing demands for accountability over taxpayer funds, identifying, reducing, and recovering improper payments become even more critical.

To help improve its efforts to implement IPIA and recover improper payments, we recommend that the Secretary of Homeland Security direct the Chief Financial Officer to take the following actions:

(1) Maintain oversight and control over critical milestones identified in the DHS corrective action plan for IPIA compliance so that DHS components stay on track, specifically in regard to identifying programs and performing risk assessments and any related testing.

(2) Require all applicable components to determine and document how they plan to assess their grant programs to determine whether they are at high risk for issuing significant improper payments, and, if necessary, test these grant programs.

(3) Provide oversight and monitor the progress of all applicable DHS components to successfully perform and report on their recovery auditing efforts.
(4) Similar to the required reporting on efforts to recover improper payments made to contractors under the Recovery Auditing Act, develop procedures for reporting in its annual PAR on the results of yearly efforts to recover any other known improper payments identified under IPIA, by the DHS OIG, or other external auditors. We requested comments on a draft of this report from the Secretary of Homeland Security. These comments are reprinted in appendix II. DHS concurred with the recommendations in our report. DHS noted that significant actions under way include strengthening the department’s financial management and oversight functions to improve the DHS control environment and implementing risk assessments to build a foundation for a sustainable IPIA program. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Secretary of Homeland Security and other interested parties. Copies will also be made available to others upon request. In addition, this report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-9095 or at williamsm1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix VI. To determine to what extent the Department of Homeland Security (DHS) has implemented the requirements of the Improper Payments Information Act of 2002 (IPIA), we compared the IPIA legislation, and the related Office of Management and Budget (OMB) implementing guidance, with DHS improper payment risk assessment methodologies, and IPIA Performance and Accountability Report (PAR) information for fiscal years 2004 through 2006. To analyze DHS risk assessment compliance with IPIA, we obtained and reviewed documents regarding its regulations and methodology for identifying programs and activities highly susceptible to improper payments. We reviewed DHS’s PARs, Office of Inspector General (OIG) semiannual reports to the Congress, and GAO reports for fiscal years 2004 through 2006 for improper payment information. We also reviewed procedures performed by DHS’s independent financial statement auditor related to DHS’s compliance with IPIA. We reviewed the programs that DHS identified as its IPIA population and analyzed the risk assessments that were performed during fiscal year 2006. This allowed us to determine which components did not perform a risk assessment and which programs were not covered. During our review, we noted that the Office of Grants and Training (GT), a DHS component, did not perform an assessment or complete payment statistical sample testing on its grants programs for fiscal year 2006 as required of all DHS programs issuing more than $100 million of IPIA relevant payments in fiscal year 2005. To analyze improper payments related to DHS grantees and highlight the importance of performing IPIA testing in this area, we obtained and reviewed fiscal year 2005 single audit reports of these entities. We used fiscal year 2005 reports because that is the most recent year for which complete audit results have been posted to the Federal Audit Clearinghouse (FAC). 
We also reviewed GAO reports and DHS OIG Financial Assistance (Grants) Reports for fiscal year 2005 through fiscal year 2007 to identify weaknesses reported at DHS grantees. In addition, we reviewed DHS OIG Management Reports (audits and inspections) for fiscal year 2005 through fiscal year 2007 that were related to grants and DHS OIG semiannual reports to the Congress for fiscal years 2005 and 2006 to identify questioned costs related to DHS grantees. To identify what actions DHS has under way to improve IPIA compliance and reporting, we interviewed DHS staff in the Office of the Chief Financial Officer and reviewed DHS corrective action plans and the Internal Controls Over Financial Reporting (ICOFR) Playbook. We also reviewed DHS’s IPIA implementing guidance for fiscal year 2007—revised in March 2007 and May 2007—and determined whether it was consistent with IPIA requirements. We discussed these revisions with improper payment and financial management officials from DHS to inquire about what is currently being implemented and what will be implemented in the future to ensure compliance with DHS’s revised internal guidance. To determine what efforts DHS has in place to recover improper payments, we compared section 831 of the National Defense Authorization Act for Fiscal Year 2002, commonly known as the Recovery Auditing Act, and the related OMB implementing guidance, with DHS recovery auditing procedures and PAR-reported information for fiscal year 2006. We also reviewed DHS PARs, OIG semiannual reports to the Congress, and GAO reports for fiscal years 2004 through 2006 for recovery audit information. To assess the reliability of data reported in DHS’s PARs related to improper payments and recovery audit efforts, we (1) reviewed existing information about the data and the system that produced them and (2) interviewed agency officials knowledgeable about the data. Based on these assessments, we determined that the data were sufficiently reliable for the purposes of this report. We conducted our work from October 2006 through June 2007 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Homeland Security or his designee. The Director, Departmental GAO/OIG Liaison Office, provided written comments, which are presented in the Agency Comments and Our Evaluation section of this report and are reprinted in appendix II. Table 5 presents information on prior-year IPIA reporting by DHS, including compliance issues reported by the independent auditor. Table 6 provides a list of DHS grant programs, primary recipients, and award information for fiscal year 2006. Table 7 describes the details of the open corrective action plan critical milestones as of May 14, 2007, as reported by DHS, for reducing improper IHP assistance payments. Based on DHS’s updated corrective action plan report for IHP, as of May 14, 2007, DHS had not completed certain critical milestones by the identified target date. These milestones included system interface improvements and certain contract awards. Missing these established critical milestones only delays strengthening internal controls that are necessary to reduce future improper payments. It is important that DHS stays on track in implementing its corrective action plans. DHS has also established critical milestones for reducing improper disaster-related vendor payments. 
Table 8 describes the details of the open corrective action plan critical milestones as of May 14, 2007, as reported by DHS for reducing improper disaster-related vendor payments. DHS identified three primary root causes for why these two programs—IHP assistance payments and disaster-related vendor payments—are at high risk of issuing improper payments. According to DHS, these root causes include the following.
People—FEMA employees were not properly trained.
Processes—The nature of FEMA’s work responding to disasters explains the reliance on people who are not trained in finance requirements and are dispersed throughout areas with limited infrastructure.
Policies—Policies were cited as possibly inadequate for instructing employees on the proper supporting documentation. There is a need for clear policy and procedural guidelines that set standard operating procedures for all FEMA employees, especially those outside the finance area.
DHS also noted that human capital is the principal requirement to execute these two corrective action plans; however, according to DHS, exact requirements are not estimable at this time. These plans, if properly executed, should help reduce future improper payments in these programs by strengthening internal controls. With regard to system improvements, as we have previously recommended, DHS needs to conduct effective testing to provide reasonable assurance that the system will function in a disaster recovery environment.

In addition to the contact named above, the following individuals also made significant contributions to this report: Casey Keplinger, Assistant Director; Verginie Amirkhanian; Sharon Byrd; Francine DelVecchio; Francis Dymond; Gabrielle Fagan; Jacquelyn Hamilton; and Laura Stoddard.

The federal government is accountable for how its agencies and grantees spend more than $2 trillion of taxpayer dollars and is responsible for safeguarding those funds against improper payments as well as for recouping those funds when improper payments occur. The Congress enacted the Improper Payments Information Act of 2002 (IPIA) and the Recovery Auditing Act to address these issues. Fiscal year 2006 marked the third year that agencies were required to report improper payment and recovery audit information in their Performance and Accountability Reports. The Department of Homeland Security (DHS) reported limited information during these 3 years. GAO was asked to (1) determine the extent to which DHS has implemented the requirements of IPIA, (2) identify actions DHS has under way to improve IPIA compliance and reporting, and (3) determine what efforts DHS has in place to recover improper payments. To accomplish this, GAO analyzed DHS's internal guidance and action plans, and reviewed information reported in its Performance and Accountability Reports. DHS has made some progress in implementing IPIA requirements, but much more work remains for the agency to become compliant with IPIA. For example, while DHS has made progress in identifying its programs, for fiscal year 2006, the agency did not perform the required first step--a risk assessment--on approximately $13 billion of its more than $29 billion in disbursements subject to IPIA. Until DHS fully assesses its programs, the potential magnitude of improper payments is unknown.
For the remaining $16 billion, DHS determined that two programs--Individuals and Households Program (IHP) assistance payments and disaster-related vendor payments--were at high risk for issuing improper payments and reported related estimates. For the $13 billion for which no risk assessment was performed, DHS has encountered challenges with IPIA implementation. Of this amount, over $6 billion relates to payments for grant programs. Developing a plan to assess risk and potentially test grant payments is important given that the DHS Office of Inspector General, GAO, and other auditors have identified weaknesses in grant programs. This will allow DHS to gain a better understanding of its risk for improper payments and potentially reduce future improper payments. DHS has actions under way to improve IPIA reporting and compliance, but does not plan to be fully compliant in fiscal year 2007. DHS has prepared a plan to address its noncompliance with IPIA, which included updating its guidance to focus on program identification and risk assessments to build a foundation for a sustainable IPIA program. In addition, DHS has developed plans to reduce improper payments related to its two identified high-risk programs. However, until DHS fully completes the required risk assessments for all of its programs and then develops estimates for risk-susceptible programs, it is not known whether other programs have significant improper payments that also need to be addressed. In addition, DHS's efforts to recover improper payments could be improved. According to DHS, four of its components meet the criteria for recovery auditing as specified in the Recovery Auditing Act. These four components make at least $4 billion of contractor payments each fiscal year. DHS encountered problems that kept it from reporting on recovery audit efforts during fiscal year 2006 for three of the four components, and did not perform recovery auditing at the fourth component. In March 2007, DHS revised its guidance to clarify what is expected; however, ongoing oversight will be necessary to monitor the components' progress. In addition, DHS has reported limited information on its efforts to recover specific improper payments identified during its testing of high-risk programs. Although DHS is not currently required to do so, reporting this information would provide a more complete picture of the agency's actions to recover payments that it has identified as being improper.
According to OMB, its predominant mission is to assist the President in overseeing the preparation of the federal budget and to supervise budget administration in executive branch agencies. In helping to formulate the President’s spending plans, OMB is responsible for evaluating the effectiveness of agency programs, policies, and procedures; assessing competing funding demands among agencies; and setting funding priorities. OMB also is to ensure that agency reports, rules, testimony, and proposed legislation are consistent with the President’s budget and with administration policies. In addition, OMB is responsible for overseeing and coordinating the administration’s procurement, financial management, information, and regulatory policies. In each of these areas, OMB’s role is to help improve administrative management, to develop better performance measures and coordinating mechanisms, and to reduce unnecessary burden on the public. To drive improvement in the implementation and management of IT projects, the Congress enacted the Clinger-Cohen Act in 1996 to further expand the responsibilities of OMB and the agencies under the Paperwork Reduction Act. The act requires that agencies engage in capital planning and performance- and results-based management. OMB is required by the Clinger-Cohen Act to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by executive agencies. OMB is also required to report to the Congress on the net program performance benefits achieved as a result of major capital investments in information systems that are made by executive agencies. In response to the Clinger-Cohen Act and other statutes, OMB developed section 300 of Circular A-11. This section provides policy for planning, budgeting, acquisition, and management of federal capital assets and instructs agencies on budget justification and reporting requirements for major IT investments. Section 300 defines the budget exhibit 300, also called the Capital Asset Plan and Business Case, as a document that agencies submit to OMB to justify resource requests for major IT investments. The exhibit 300 consists of two parts: the first is required of all assets; the second applies only to information technology. Among other things, the exhibit 300 requires agencies to provide information summarizing spending and funding plans; performance goals and measures; project management plans, goals, and progress; and security plans and progress. This reporting mechanism, as part of the budget formulation and review process, is intended to enable an agency to demonstrate to its own management, as well as OMB, that it has employed the disciplines of good project management, developed a strong business case for the investment, and met other Administration priorities in defining the cost, schedule, and performance goals proposed for the investment. The types of information included in the exhibit 300, among other things, are to help OMB and the agencies identify and correct poorly planned or performing investments (i.e., investments that are behind schedule, over budget, or not delivering expected results) and real or potential systemic weaknesses in federal information resource management (e.g., project manager qualifications). 
According to OMB’s description of its processes, agencies’ exhibit 300 business cases are reviewed by OMB analysts from its four statutory offices—Offices of E-Government and Information Technology (e-Gov), Information and Regulatory Affairs (OIRA), Federal Financial Management, and Federal Procurement Policy—and its Resource Management Offices (RMO). In addition to other responsibilities under various statutes, e-Gov and OIRA develop and oversee the implementation of governmentwide policies in the areas of IT, information policy, privacy, and statistical policy. OIRA and e-Gov analysts also carry out economic and related analyses, including reviewing exhibit 300s. Each of about 12 analysts is responsible for overseeing IT projects for a specific agency or (more commonly) several agencies. OMB’s RMOs are staffed with program examiners, whose responsibility is to develop and support the President’s Budget and Management Agenda. RMOs work as liaisons between federal agencies and the presidency. In formulating the budget, they evaluate agency requests for funding and evaluate agency management and financial practices. RMOs also evaluate and make recommendations to the President when agencies seek new legislation or the issuance of Presidential executive orders that would help agencies to fulfill their organizational objectives. According to OMB officials, the OIRA and e-Gov analysts, along with RMO program examiners, evaluate agency exhibit 300 business cases as part of the development of the President’s Budget. The results of this review are provided to agencies through what is called the “passback” process. That is, OMB passes the requests back to agencies with its evaluation, which identifies any areas requiring remediation. The final step in the budget process, occurring after the Congress has appropriated funds, is apportionment, through which OMB formally controls agency spending. According to the Antideficiency Act, before the agency may spend its funding resources, appropriations must be apportioned by periods within the fiscal year (typically by quarters) or among the projects to be undertaken. Although apportionment is a procedure required to allow agencies to access their appropriated funds, OMB can also use apportionment to impose conditions on agency spending, such as changes in agency practices; it is one of several mechanisms that the Clinger-Cohen Act authorizes OMB to use to enforce an agency head’s accountability for the agency’s IT investments. The President’s Budget for Fiscal Year 2005 included about 1,200 IT projects, totaling about $60 billion. Of this total number of projects, OMB reported in the budget that slightly over half—621 projects, representing about $22 billion—were on a Management Watch List. According to OMB’s March 2004 testimony, this list consists of mission-critical projects that needed to improve performance measures, project management, IT security, or overall justification. OMB officials described this assessment as based on evaluations of exhibit 300s submitted to justify inclusion in the budget. According to OMB’s testimony, the fiscal year 2005 budget required agencies to successfully correct identified project weaknesses and business case deficiencies; otherwise, they risked OMB placing limits on their spending. OMB officials testified in March 2004 that they would enforce these corrective actions through the apportionment process. OMB continued its use of a Management Watch List in the recently released President’s Budget for Fiscal Year 2006. 
The President’s Budget for Fiscal Year 2006 includes 1,087 IT projects, totaling about $65 billion. Of this total number of projects, OMB reported in the budget that 342 projects, representing about $15 billion, are on the fiscal year 2006 Management Watch List. Our objectives were to describe and assess OMB’s processes for (1) placing projects on its Management Watch List and (2) following up on corrective actions established for projects on the list. To examine OMB’s processes for developing the list, we requested a copy of the Management Watch List; we reviewed related OMB policy guidance, including its Circular A-11 and Capital Programming Guide, as well as the Analytical Perspectives for the President’s Budget submissions for fiscal years 2005 and 2006; and we interviewed OMB analysts and their managers, including the Deputy Administrator of OIRA and the Chief of the Information Technology and Policy Branch, to identify the processes and criteria they have in place to determine which IT projects to include on the Management Watch List. To examine OMB’s follow-up procedures on corrective actions established for IT projects on the list, we reviewed related policy guidance, including section 300 of Circular A-11 and OMB’s Capital Programming Guide. We analyzed OMB’s apportionment documentation, specifically the Standard Form 132 (Apportionment and Reapportionment Schedule), which documented special apportionments that specified conditions that had to be met before the agencies could receive funds. In addition, we interviewed OMB officials and analysts and reviewed testimony and laws affecting the management of IT investments, such as the Clinger-Cohen Act. We conducted our work at OMB headquarters in Washington, D.C., from August 2004 through March 2005, in accordance with generally accepted government auditing standards.

According to OMB officials, including the Deputy Administrator of OIRA and the Chief of the Information Technology and Policy Branch, OMB staff identified projects for the Management Watch List through their evaluation of the exhibit 300s that agencies submit for major IT projects as part of the budget development process. This evaluation is carried out as part of OMB’s responsibility for helping to ensure that investments of public resources are justified and that public resources are wisely invested. The OMB officials added that their analysts evaluate agency exhibit 300s by assigning scores to each exhibit 300 based on guidance presented in OMB Circular A-11. According to this circular, the purpose of the scoring is to ensure that agency planning and management of capital assets are consistent with OMB policy and guidance. As described in Circular A-11, the scoring of a business case consists of individual scoring for 10 categories, as well as a total composite score of all the categories. Among these 10 categories are project (investment) management, enterprise architecture, performance-based management system (including the earned value management system), life-cycle costs formulation, and support of the President’s Management Agenda. According to Circular A-11, scores range from 1 to 5, with 5 indicating investments whose business cases provided the best justification and 1 the least. For investments with average scores of 3 or below, OMB may ask agencies for remediation plans to address weaknesses in their business cases.
OMB officials said that, for fiscal year 2005, an IT project was placed on the Management Watch List if its exhibit 300 business case received a total composite score of 3 or less, or if it received a score of 3 or less in the areas of performance goals, performance-based management systems, or security and privacy, even if its overall score was a 4 or 5. OMB reported that agencies with weaknesses in these three areas were to submit remediation plans addressing the weaknesses. According to OMB management, individual analysts were responsible for evaluating projects and determining which projects met the criteria to be on the Management Watch List for their assigned agencies. To derive the total number of projects on the list that were reported for fiscal year 2005, OMB polled the individual analysts and compiled the numbers. OMB officials said that they did not aggregate these projects into a single list describing projects and their weaknesses. According to these officials, they did not construct a single list of projects meeting their watch list criteria because they did not see such an activity as necessary in performing OMB’s predominant mission: to assist in overseeing the preparation of the federal budget and to supervise agency budget administration. Further, OMB officials stated that the limited number of analysts involved enabled them to explore governmentwide issues using ad hoc queries and to develop approaches to address systemic problems without the use of an aggregate list. They pointed at successes in improving IT management, such as better compliance with security requirements, as examples of the effectiveness of their current approach. Nevertheless, OMB has not fully exploited the opportunity to use its Management Watch List as a tool for analyzing IT investments on a governmentwide basis. According to the Clinger-Cohen Act, OMB is required to establish processes to analyze, track, and evaluate the risks and results of major IT capital investments by executive agencies, which aggregation of the Management Watch List would facilitate. Without aggregation, the list’s visibility was limited at more senior levels of OMB, constraining its ability to conduct analysis of IT investments on a governmentwide basis and limiting its ability to identify and report on the full set of IT investments requiring corrective actions. OMB did not develop a structured, consistent process or criteria for deciding how to follow up on corrective actions that it asked agencies to take to address weaknesses associated with projects on the Management Watch List. Instead, OMB officials, including the Deputy Administrator of OIRA and the Chief of the Information Technology and Policy Branch, said that the decision on whether and how to follow up on a specific project was typically made jointly between the OIRA analyst and the RMO program examiner who had responsibility for the individual agency, and that follow- up on specific projects was driven by a number of factors, only one of which was inclusion on the Management Watch List. These officials also said that the decision for follow-up was generally driven by OMB’s predominant mission to assist in budget preparation and to supervise budget administration, rather than strictly by the perceived risk of individual projects. 
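As a rough illustration of the fiscal year 2005 placement criteria that OMB officials described earlier in this section, the sketch below encodes the composite-score and key-area thresholds in Python. The dictionary keys and the example scores are our own assumptions for illustration; OMB's actual scoring records are not necessarily structured this way.

```python
def on_management_watch_list(scores: dict) -> bool:
    """Return True if an exhibit 300 meets the fiscal year 2005 criteria
    described by OMB officials: a total composite score of 3 or less, or a
    score of 3 or less in performance goals, performance-based management
    systems, or security and privacy (even if the composite is 4 or 5)."""
    key_areas = ("performance_goals",
                 "performance_based_management",
                 "security_and_privacy")
    if scores["composite"] <= 3:
        return True
    return any(scores[area] <= 3 for area in key_areas)

# Hypothetical example: a strong composite score but a weak security score.
example = {"composite": 4, "performance_goals": 4,
           "performance_based_management": 5, "security_and_privacy": 3}
print(on_management_watch_list(example))  # True
```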
According to these officials, those Management Watch List projects that did receive specific follow-up attention received feedback through the passback process, through targeted evaluation of remediation plans designed to address weaknesses, and through the apportioning of funds so that the use of budgeted dollars was conditional on appropriate remediation plans being in place. These officials also said that follow-up of some Management Watch List projects was done through quarterly e-Gov Scorecards. OMB officials also stated that those Management Watch List projects that did receive follow-up attention were not tracked centrally, but only by the individual OMB analysts with responsibility for the specific agencies. For example, if an agency corrected a deficiency or weakness in a specific area of the exhibit 300 for a Management Watch List project, that change was not recorded centrally. Accordingly, OMB could not readily tell us which of the 621 watch list projects for fiscal year 2005 were followed up on, nor could it use the list to describe the relationship between its follow-up activities and the changes in the numbers of projects on the watch list between fiscal year 2005 (621 projects) and fiscal year 2006 (342). Further, because OMB did not trace follow-up centrally, senior management could not report which projects received follow-up attention and which did not. OMB does not have specific criteria for prioritizing follow-up on Management Watch List projects. Without specific criteria, OMB staff may be agreeing to commit resources to follow up on projects that did not represent OMB’s top priorities from a governmentwide perspective. For example, inconsistent attention to OMB priorities, such as earned value management, could undermine the objectives that OMB set in these areas. In addition, major projects with significant management deficiencies may have continued to absorb critical agency resources. In order for OMB management to have assurance that IT program deficiencies are addressed, it is critical that corrective actions associated with Management Watch List projects be monitored. Follow-up activities are instrumental in ensuring that agencies address and resolve weaknesses found in exhibit 300s, which may indicate underlying weaknesses in project planning or management. Tracking these follow-up activities is essential to enabling OMB to determine progress on both specific projects and governmentwide trends. In addition, tracking is necessary for OMB to fully execute its responsibilities under the Clinger-Cohen Act, which requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments made by executive agencies for information systems. Without tracking specific follow-up activities, OMB could not know whether the risks that it identified through its Management Watch List were being managed effectively; if they were not, funds were potentially being spent on poorly planned and managed projects. By scoring agency IT budget submissions and identifying weaknesses that may indicate investments at risk, OMB is identifying opportunities to strengthen investments. This scoring addresses many critical IT management areas and promotes the improvement of IT investments. However, OMB has not developed a single, aggregate list identifying the projects and their weaknesses, nor has it developed a structured, consistent process for deciding how to follow up on corrective actions. 
Aggregating the results at a governmentwide level would help OMB take full advantage of the effort that it puts into reviewing business cases for hundreds of IT projects. A governmentwide perspective could enable OMB to use its scoring process more effectively to identify management issues that transcend individual agencies, to prioritize follow-up actions, and to ensure that high-priority deficiencies are addressed. OMB’s follow-up on poorly planned and managed IT projects has been largely driven by its focus on the imperatives of the overall budget process. Although this approach is consistent with OMB’s predominant mission, it does not fully exploit the insights developed through the scoring process, and it may leave unattended weak projects consuming significant budget dollars. The Management Watch List described in the President’s Budget for Fiscal Year 2005 contained projects representing over $20 billion in budgetary resources that could have remained at risk because of inadequate planning and project management. Because of the absence of a consistent and integrated approach to follow-up and tracking, OMB was unable to use the Management Watch List to ascertain whether progress was made in addressing governmentwide and project-specific weaknesses and where resources should be applied to encourage additional progress. Thus, there is an increased risk that remedial actions were incomplete and that billions of dollars were invested in IT projects with planning and management deficiencies. In addition, OMB’s ability to report to the Congress on progress made in addressing critical issues and areas needing continued attention is limited by the absence of a consolidated list and coordinated follow-up activities.

In order for OMB to take advantage of the potential benefits of using the Management Watch List as a tool for analyzing and following up on IT investments on a governmentwide basis, we are recommending that the Director of OMB take the following four actions: (1) develop a central list of projects and their deficiencies; (2) use the list as the basis for selecting projects for follow-up and, to guide that follow-up, develop specific criteria for prioritizing the IT projects included on the list, taking into consideration such factors as the relative potential financial and program benefits of these IT projects, as well as potential risks; (3) analyze the prioritized list to develop governmentwide and agency assessments of the progress and risks of IT investments, identifying opportunities for continued improvement; and (4) report to the Congress on progress made in addressing risks of major IT investments and management areas needing attention.

In written comments on a draft of this report, OMB’s Administrator of the Office of E-Government and Information Technology expressed appreciation for our review of OMB’s use of its Management Watch List. She noted that the report was narrowly focused on the Management Watch List and the use of exhibit 300s in that context. She added that the report did not address the broader budget and policy oversight responsibilities that OMB carries out or the other strategic tools available to OMB as it executes those responsibilities. We agree that our review described and assessed OMB’s processes for (1) placing the 621 projects representing about $22 billion on its Management Watch List and (2) following up on corrective actions established for projects on the list.
The Administrator commented that OMB’s oversight activities include the quarterly President’s Management Agenda Scorecard assessment. We acknowledge these activities in the report in the context of the e-Gov scorecard, which measures the results of OMB’s evaluation of the agencies’ implementation of e-government criteria in the President’s Management Agenda. We also agree with the Administrator that OMB is not the sole audience of an exhibit 300. As we state in the report, an exhibit 300 justification is intended to enable an agency to demonstrate to its own management, as well as to OMB, that it has employed the disciplines of good project management, developed a strong business case for the investment, and met other Administration priorities in defining the cost, schedule, and performance goals proposed for the investment. The Administrator disagreed with our finding that OMB did not have specific criteria for prioritizing follow-up on exhibit 300s that have been included on the Management Watch List. She explained that OMB establishes priorities on a case-by-case basis within the larger context of OMB’s overall review of agency program and budget performance. However, our review showed that OMB did not develop a structured, consistent process or criteria for deciding how it should follow up on corrective actions that it asked agencies to take to address the weaknesses of the projects on the Management Watch List. Accordingly, we continue to believe that OMB should specifically consider those factors that it had already determined were critical enough that they caused an investment to be included in the Management Watch List. Without consistent attention to those IT management areas already deemed as being of the highest priority by OMB, the office risks focusing on areas of lesser importance. We agree with the Administrator’s separate point that agencies have the responsibility for ensuring that investments on the Management Watch List are successfully brought up to an acceptable level. The follow-up that we describe in our report consists of those activities that would allow OMB to ascertain that the deficient investments have, in fact, been successfully strengthened. We note in the report that the quarterly President’s Management Agenda Scorecard plays a role in this activity (in the report, we refer to the e-Gov Scorecard, which contributes to the Management Agenda Scorecard). Finally, the Administrator disagrees with our assessment that an aggregated governmentwide list is necessary to perform adequate oversight and management, and that OMB does not know whether risks are being addressed. However, our review indicated that OMB was unable to easily determine which of the 621 investments on the Management Watch List remained deficient or how much of the $22 billion cited in the President’s Budget remained at risk. In our assessment we observed that OMB had expended considerable resources in the scoring of all exhibit 300s and the identification of investments requiring corrective action, but that it never committed the additional resources that would be required to aggregate the partial management watch lists held by each individual analyst. Because no complete Management Watch List was formed, OMB lost the opportunity to analyze the full set of deficient investments as a single set of data. This undermined its ability to assess governmentwide trends and issues. 
In addition, the lack of a complete Management Watch List necessarily inhibited OMB’s ability to track progress overall and to represent the full set of investments requiring corrective action. We continue to believe that these activities could be facilitated by an aggregate Management Watch List. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to other interested congressional committees and to the Director of the Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at www.gao.gov. Should you or your offices have questions on matters discussed in this report, please contact me at (202) 512-9286, or Lester P. Diamond, Assistant Director, at (202) 512-7957. We can also be reached by e-mail at pownerd@gao.gov, or diamondl@gao.gov, respectively. Key contributors to this report were William G. Barrick, Barbara Collier, Sandra Kerr, and Mary Beth McClanahan.

For the President's Budget for Fiscal Year 2005, the Office of Management and Budget (OMB) stated that of the nearly 1,200 major information technology (IT) projects in the budget, it had placed approximately half--621 projects, representing about $22 billion--on a Management Watch List, composed of mission-critical projects with identified weaknesses. GAO was asked to describe and assess OMB's processes for (1) placing projects on its Management Watch List and (2) following up on corrective actions established for projects on the list. For the fiscal year 2005 budget, OMB developed processes and criteria for including IT investments on its Management Watch List. In doing so, it identified opportunities to strengthen investments and promote improvements in IT management. However, it did not develop a single, aggregate list identifying the projects and their weaknesses. Instead, OMB officials told GAO that to identify IT projects with weaknesses, individual OMB analysts used scoring criteria that the office established for evaluating the justifications for funding that federal agencies submit for major projects. These analysts, each of whom is typically responsible for several federal agencies, were then responsible for maintaining information on these projects. To derive the total number of projects on the list that OMB reported for fiscal year 2005, OMB polled its individual analysts and compiled the result. However, OMB officials told GAO that they did not compile a list that identified the specific projects and their identified weaknesses. The officials added that they did not construct a single list because they did not see such an activity as necessary. Thus, OMB has not fully exploited the opportunity to use the list as a tool for analyzing IT investments on a governmentwide basis. OMB had not developed a structured, consistent process for deciding how to follow up on corrective actions that its individual analysts asked agencies to take to address weaknesses associated with projects on its Management Watch List. According to OMB officials, decisions on follow-up and monitoring of progress were typically made by the staff with responsibility for reviewing individual agency budget submissions, depending on the staff's insights into agency operations and objectives.
Because it did not consistently require or monitor follow-up activities, OMB did not know whether the project risks that it identified through its Management Watch List were being managed effectively, potentially leaving resources at risk of being committed to poorly planned and managed projects. In addition, because it did not consistently monitor the follow-up performed on projects on the Management Watch List, OMB could not readily tell GAO which of the 621 projects received follow-up attention. Thus, OMB was not using its Management Watch List as a tool in setting priorities for improving IT investments on a governmentwide basis and focusing attention where it was most needed.
General aviation is characterized by a diverse fleet of aircraft flown for a variety of purposes. In 2010, FAA estimated that there were more than 220,000 aircraft in the active general aviation fleet, comprising more than 90 percent of the U.S. civil aircraft fleet. Included among these aircraft are airplanes, balloons, unmanned aircraft systems, gliders, and helicopters. (See fig. 1.) Airplanes comprise the vast majority—almost 80 percent—of the general aviation fleet. According to a 2009 FAA study, general aviation airplanes have an average age of 40 years. In addition, most are single-engine piston airplanes, such as the Beechcraft Bonanza, Cessna 172, and Piper Archer. FAA designates a small, but growing, portion of the general aviation fleet as “experimental.” These include aircraft used for racing and research as well as exhibition aircraft, such as former military aircraft known as warbirds. The largest group of experimental aircraft—and the fastest growing segment of the general aviation fleet, according to FAA—is defined by FAA as “experimental-amateur built” (E-AB). Individuals build E-AB aircraft either from kits sold by manufacturers or from their own designs. E-AB aircraft can contain previously untested systems, including engines not designed for aircraft use, and modifications of airframes, controls, and instrumentation. The E-AB fleet is diverse, ranging from open-framework designs with no cabin structure to small, pressurized airplanes able to fly long distances. The majority are simple craft used primarily for short personal flights. The expertise of the builders varies, as does the experience of the pilots and the availability of training for transitioning to the aircraft. Following a successful inspection of the aircraft and documentation review, FAA issues a special airworthiness certificate in the experimental category to the aircraft’s builder and assigns operating limitations in two phases specifying how and where the aircraft can be flown. Phase I is the required flight test period, in which the builder determines the aircraft’s airspeed and altitude capabilities and develops a flight manual. Phase II refers to normal operations after the flight testing is completed.

General aviation aircraft can be used for a wide variety of operations, although about 78 percent of general aviation operations fall under one of four types: personal (e.g., a pilot taking his family on a sightseeing trip); business (e.g., a pilot flying herself to a meeting); corporate (e.g., a professionally piloted aircraft transporting corporate employees around the globe); and instructional (e.g., a student flying with a certified flight instructor). These operations are conducted from the more than 2,950 public use general aviation airports (which primarily serve general aviation aircraft) as well as from thousands of other airports (including those that support commercial air service) and landing facilities (e.g., heliports). General aviation flights operate under various federal aviation regulations. For purposes of this report, our definition of general aviation includes flights operated under part 91 general operation and flight rules. (See GAO-12-117.) Our definition of flight-instructor-based schools includes individual flight instructors. More information about the estimated number of active airplane pilots and selected pilot certificate requirements and limitations appears elsewhere in this report.
Various offices within FAA are responsible for ensuring general aviation safety, most notably the Flight Standards Service, Aircraft Certification Service, Office of Accident Investigation and Prevention, and Office of Runway Safety. According to FAA, the agency’s fiscal year 2011 budget submission included nearly $203 million for activities within the Aviation Safety organization related to the top priority of reducing the general aviation fatal accident rate. FAA’s responsibilities include administering aircraft and pilot certification, conducting safety oversight of pilot training and general aviation operations, and taking enforcement actions against pilots and others who violate federal aviation regulations and safety standards. FAA also collects general aviation fleet and flight activity data through an annual survey and supports the NTSB by gathering information about general aviation accidents. According to NTSB officials, FAA collects information on the vast majority of general aviation accidents. NTSB is responsible for all aviation accident investigations—using the information gathered by FAA and its own investigators—and for determining the probable cause of accidents. NTSB uses a coding system of aircraft accident categories and associated phases of flight that are useful in describing the characteristics and circumstances of aviation accidents. For ease of interpretation and to categorize similar events, NTSB identifies one event as the “defining event” of the accident, which generally describes the type of accident that occurred—hard landing, midair collision, or fuel exhaustion, for example. In addition, NTSB identifies the causes of an accident and the contributing factors, which describe situations or circumstances central to the accident cause. Just as accidents often include a series of events, the reason those events led to an accident may reflect a combination of multiple causes and contributing factors. For this reason, a single accident report can include multiple cause and contributing factor codes. NTSB also collects descriptive information about the environmental conditions, aircraft, and people involved in aviation accidents. It captures its findings and descriptive information in its Aviation Accident Database. NTSB calculates general aviation accident and fatality rates, which it does using its own accident data and FAA’s annual estimates of general aviation flight activity. NTSB may also recommend regulatory and other changes to FAA and the aviation industry based on the results of its investigations and any studies it conducts. The U.S. general aviation industry includes a number of trade groups, “type clubs,” and other organizations that actively promote the importance of safety and, in many cases, offer educational opportunities to pilots. Many of the groups also work with FAA on advisory and rulemaking committees. Prominent trade organizations include the Aircraft Owners and Pilots Association (AOPA), the Experimental Aircraft Association (EAA), the General Aviation Manufacturers Association (GAMA), and the National Business Aviation Association (NBAA). The Society of Aviation and Flight Educators (SAFE) and the National Association of Flight Instructors represent certified flight instructors and other aviation educators. The American Bonanza Society (ABS), the Cirrus Owners and Pilots Association, and the Lancair Owners and Builders Organization are examples of the several general aviation type clubs. 
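Before turning to the accident data, it may help to illustrate how the accident and fatality rates mentioned above are derived from NTSB accident counts and FAA's flight-hour estimates. The minimal sketch below assumes the convention, commonly used for general aviation, of expressing rates per 100,000 flight hours; the input values are hypothetical and are not NTSB or FAA figures.

```python
def rate_per_100k_hours(event_count: int, estimated_flight_hours: float) -> float:
    """Accidents (or fatalities) per 100,000 flight hours."""
    return event_count / estimated_flight_hours * 100_000

# Hypothetical inputs for illustration only.
accidents = 1_500
flight_hours = 22_000_000  # an assumed annual flight-activity estimate
print(round(rate_per_100k_hours(accidents, flight_hours), 1))  # 6.8
```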
Our analysis of NTSB accident data showed that the annual number of general aviation accidents generally decreased from 1999 through 2011. We also identified several characteristics of accidents with respect to the types of operations and the causes of the accidents. These characteristics were largely consistent with observations made during our last review of general aviation safety in 2001. To better understand these characteristics, where possible, we compared how often they occurred in accidents with their overall prevalence in, for instance, total flight hours or pilot certifications as estimated by FAA. In doing so, we identified some accident characteristics that, based on our analysis, appear to occur disproportionately. However, we also identified methodological and conceptual limitations with the activity data—particularly the General Aviation and Part 135 Activity Survey that FAA uses to estimate annual flight hours and the number of active aircraft—that we discuss later in this section. See table 2 for a summary of the characteristics of general aviation accidents according to our analysis of the NTSB accident data.

From 1999 through 2011, nonfatal accidents involving general aviation airplanes generally decreased, falling 29 percent, from 1,265 in 1999 to 902 in 2011. Fatal accidents generally decreased as well, falling 24 percent. Figure 2 indicates the number of fatal and nonfatal accidents for each year we reviewed. During this period, though the majority (approximately 56 percent) of all accidents resulted in no injuries, there were more than 200 fatal accidents each year.

From 1999 through 2011, personal operations accounted for 73 percent of airplanes in nonfatal general aviation accidents and 77 percent of airplanes in fatal general aviation accidents. (See fig. 3.) This is not a new phenomenon. As we reported about accidents occurring in 1998, personal operations accounted for more than 75 percent of fatal general aviation accidents. From 1999 through 2011, airplanes flying instructional operations were the second most frequently involved in accidents. However, instructional operations also had the smallest proportion of fatal accidents of any operation type. According to our analysis, almost 38 percent of accidents that occurred during instructional flying involved hard landings or loss of control while the aircraft was on the ground. These types of events are less likely to cause fatalities than other types of events. It is also possible that the presence of a certified flight instructor onboard to share the management of the cockpit and other tasks may have contributed to the lower fatality rate for instructional operations.

Corporate operations, in which a professional pilot flies an aircraft owned by a business or corporation, were the least common type of operation involved in general aviation accidents. Corporate operations accounted for less than 1 percent of fatal general aviation accidents and less than 0.5 percent of nonfatal accidents. From 2008 through 2011, there were no fatal accidents involving corporate airplanes, giving corporate operations an accident record similar to that of commercial air carriers. Again, this is not a new phenomenon.
As we reported in 2001, the low number of accidents involving corporate operations is attributable to a number of factors, including the pilot’s training, experience, and participation in ongoing training to maintain and improve their skills, as well as the safety equipment that is typically installed on corporate aircraft. According to a representative of the NBAA, an organization representing companies that rely on general aviation aircraft to conduct business, most corporate operations also benefit from advanced technologies, including avionics that provide synthetic vision and terrain displays; auto-throttle, which helps maintain airspeed; and fuel gauges that are built to the standards required for commercial airliners. Further, airplanes used for corporate purposes are often powered by turbine engines and may be subject to additional safety requirements. Flying for corporate purposes can also differ from other types of flying. Whereas a pilot flying for fun may perform several take-offs and landings and practice maneuvers, a corporate flight likely includes a single take off and landing, with the majority of time spent en route—one of the phases of flight when the fewest fatal accidents occur. Regarding the type of aircraft involved in general aviation accidents, single-engine piston airplanes accounted for almost 76 percent of airplanes in nonfatal general aviation accidents and 60 percent of airplanes in fatal accidents. Single-engine piston airplanes are the most common type of aircraft in the general aviation fleet and, according to stakeholders, the type of aircraft most commonly flown by pilots holding private pilot certifications and flying for personal reasons. According to AOPA, mechanical failures cause relatively few accidents, indicating that the frequency with which single-engine piston airplanes are in accidents is not necessarily a reflection of the safety of the aircraft. E-ABs were the second most common airplane involved in general aviation accidents. From 1999 through 2011, E-AB aircraft accounted for 14 percent of airplanes in nonfatal general aviation accidents and approximately 21 percent in fatal accidents. According to EAA, the organization that represents experimental and amateur-built aircraft owners, E-AB airplanes were also the fastest growing type of aircraft in the general aviation fleet in recent years. In 2011, there were approximately 33,000 registered E-AB aircraft, a 10 percent increase from 3 years earlier. AOPA’s 2010 Nall Report—an annual safety report that provides perspectives on the previous year’s general aviation accidents— indicated that the physical characteristics and the manner in which these aircraft are used expose E-AB aircraft pilots to greater risk and make accidents less survivable. In 2012, NTSB completed a safety study of E-AB aircraft that included the use of an EAA survey of E-AB pilots. Among other findings, NTSB concluded that the flight test period—the first 50 hours of flight—is uniquely challenging for most E-AB pilots because they must learn to manage the handling characteristics of an unfamiliar aircraft while also managing the challenges of the flight test environment, including instrumentation that is not yet calibrated, controls that may need adjustments, and possible malfunctions or adverse handling characteristics. 
NTSB added that the E-AB safety record could be improved by providing pilots with additional training resources and, accordingly, made several recommendations to FAA and EAA regarding flight training and testing. To better understand the above observations about the airplanes involved in and the types of operations flown during general aviation accidents, we compared the proportions of fatal accidents by airplane category and operation type to their shares of FAA estimated flight hours for 1999 through 2010. For this analysis, we considered 5 airplane categories: (1) non-E-AB, single-engine piston; (2) non-E-AB, multi-engine piston; (3) non-E-AB, turbine engine; (4) E-ABs, regardless of engine type; and (5) others. As designated, there is no overlap in this categorization. If there were no relationship between accidents and airplane category, then we would expect each airplane category to be involved in accidents in proportion to its share of overall flight activity; for example, we would expect an airplane category that comprised 50 percent of general aviation flight hours to also comprise 50 percent of accidents. We found this to be the case with the single-engine piston airplane. Though the single-engine piston airplane is most often involved in fatal general aviation accidents, its share of fatal accidents (60 percent) was slightly less than its share of general aviation flight hours (65 percent). By comparison, E-ABs comprised 21 percent of fatal accidents, but only 4 percent of estimated flight hours. With regard to type of operation, we found that 77 percent of fatal accidents occurred during personal operations, but only 40 percent of the estimated flight hours involved personal operations. (See table 3.) Loss of control in flight—the unintended departure of an aircraft from controlled flight, airspeed, or altitude—was the most common defining event in fatal general aviation accidents. Loss of control can occur because of aircraft malfunction, human performance, and other causes. During the period we examined, 1,036 fatal accidents (31 percent) were categorized as loss of control in flight. This was the most common event in a fatal accident for 3 of the 4 types of general aviation operations— personal, instructional, and business operations—and for all types of airplanes. FAA and the industry recently completed a review of a subgroup of fatal loss of control accidents and will be developing detailed implementation plans for the intervention strategies. According to our analysis of NTSB data, the pilot was a cause in more than 60 percent of the general aviation accidents from 2008 through 2010. The pilot’s actions, decision making, or cockpit management was a cause for 70 percent of the airplanes in fatal accidents and 59 percent in nonfatal accidents. NTSB and other experts view aviation accidents as a sequence of events with multiple causes and contributing factors. Of the 2,801 general aviation accidents that occurred from 2008 through 2010 for which a causal determination was made, 71 percent were determined to have multiple causes. In approximately 34 percent of fatal accidents involving airplanes, the cause was a combination of the pilot’s actions and the failure to properly attain or maintain a performance parameter—e.g., airspeed and altitude. In some instances, there was more than one pilot associated with an airplane. 
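One simple way to express the comparison of accident shares with flight-hour shares described earlier in this section is the ratio of a category's share of fatal accidents to its share of estimated flight hours, where a ratio near 1.0 means accidents occur roughly in proportion to flying activity. The sketch below applies that ratio to the shares cited in the surrounding text; the ratio itself is our illustration, not a statistic reported in the underlying analysis.

```python
def share_ratio(accident_share: float, flight_hour_share: float) -> float:
    """Category's share of fatal accidents divided by its share of flight hours."""
    return accident_share / flight_hour_share

# Shares cited in the text for 1999 through 2010.
print(round(share_ratio(0.60, 0.65), 2))  # single-engine piston: ~0.92
print(round(share_ratio(0.21, 0.04), 2))  # E-AB airplanes: ~5.25
print(round(share_ratio(0.77, 0.40), 2))  # personal operations: ~1.93
```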
Since we were unable to determine from the data which pilot was in control of the aircraft at the time of the incident, we included data on all pilots involved in the accident in this and subsequent analyses regarding pilot characteristics or experience. To further explore the relationship between pilot flight hours and accidents, we looked at the portion of pilots with fewer than 100 hours in the accident airplane make and model where the pilot was determined to be a cause of the accident and compared it to the portion of pilots with more than 100 hours in the accident airplane make and model. We then did the same using pilot certification levels. Our analysis of accidents from 2008 through 2010 found that private pilots with fewer than 100 hours of experience in the accident airplane make and model were a cause of fatal and nonfatal general aviation accidents at similar rates as pilots with more than 100 hours of experience and with higher pilot certifications. For fatal accidents, 73 percent of pilots with fewer than 100 hours of experience in the accident airplane make and model were a cause as compared to 76 percent of pilots with more than 100 hours of experience. In nonfatal accidents, those portions were 63 and 64 percent, respectively. With regard to pilot certification levels, we found that in nonfatal accidents, private pilots were a cause more often (68 percent) than other types of pilots (percentages ranging from 52 to 58 percent); but in fatal accidents, similar proportions of private and commercial pilots were found to be a cause (75 percent and 80 percent, respectively). These comparisons have an important limitation: data are not collected on how much experience all general aviation pilots have in specific makes and models, so if most pilots had fewer than 100 hours in any given airplane make and model, we could expect these results even if pilot flight hours in the airplane make and model had no relation to accidents. We discuss the implications of the lack of this and other data later in this section. Although some experts may believe that lack of experience can contribute to pilot error and accidents, the above suggests that this might not necessarily be the case. However, we do not have enough information to draw any real conclusions because FAA lacks certain key information about pilots that could help identify the root causes of accidents and, thus, risk mitigation opportunities. First, FAA's estimate of the number of active pilots is an imperfect measure because, according to FAA's definition, an active pilot is a certificated pilot who holds a valid medical certificate. However, depending on the type of operation the pilot is flying and the pilot's certification level, age, and health condition, the medical certificate is valid for between 6 and 60 months. The designation as active is also not an indication of whether the pilot has actually flown in the previous year. Second, though pilots report total flight hours as part of their medical certificate application, a pilot's experience in different makes and models of aircraft—which is not collected—is also relevant as there are risks associated with operating an unfamiliar airplane. As described above, this information would be necessary to draw conclusions about the effect of pilot flight hours on accidents. Third, though pilot flight hours are to be reported as part of the accident report, investigators are not always able to obtain this information for accident pilots as the logbooks in which it is recorded are sometimes destroyed in accidents.
Of the 3,257 pilots involved in an accident from 2008 through 2010, pilot flight hours in the accident airplane make and model was missing for 514, or 16 percent of them. Missing data can compromise the validity of analyses that seek to examine the relationship between pilot experience and the causes of general aviation accidents. In addition, FAA does not maintain information about where pilots were trained or whether noncommercial pilots participate in any recurrent training programs other than its WINGS pilot proficiency program— information that would facilitate analyses of the relationship between pilot training and the causes of general aviation accidents and that could help identify shortcomings in current pilot training programs. Private pilots are not required to participate in recurrent training, though they must successfully complete a biennial review of their skills and knowledge by a designated pilot examiner or a certified flight instructor. In recent years, as pilot training has been identified as a contributing factor in high profile accidents, there has been a renewed focus on the sources and amount of pilot training and on altering the training paradigm. FAA has been required to take steps to maintain qualification and performance data on airline pilots, but there has been no decision about whether recurrent training will be included in the database, and no such effort has been undertaken with regard to the remaining pilot population. Without more information about the training of general aviation pilots—and not just those who are in accidents—FAA’s efforts to identify and target risk areas and populations is impeded. FAA estimates of general aviation annual flight hours—a measure key to NTSB’s calculation of general aviation accident and fatality rates and NTSB’s and FAA’s assessments of the safety of general aviation—may not be reliable because of methodological and conceptual limitations with the survey used to gather flight activity data. Since 1978, FAA has used a survey of aircraft owners to estimate annual general aviation flight hours. The survey was redesigned in 1999, and FAA has modified it since then, on its own volition and in response to NTSB recommendations, to improve the survey’s ability to capture activity trends. Changes include sampling 100 percent of certain subpopulations of general aviation aircraft owners who were previously underrepresented in the random sample response—such as owners of turbine engine, rotorcraft, and Alaska-based aircraft—and revising the process for collecting information from owners of multiple aircraft. FAA and NTSB believe these changes have improved the reliability of the survey’s estimates, but some conceptual and methodological limitations persist. First, as with all surveys that rely on self-reported data, there is the risk that respondents will not be able to accurately recall and report information, introducing error and perhaps bias into the survey’s estimates. The general aviation survey, which is usually open from March through August each year, asks respondents to estimate the number of hours flown during the previous calendar year. Depending on funding availability, the survey has opened later or for shorter periods of time. This year, because of contracting-related delays in bringing the survey consultant on board, aircraft owners did not receive the first request for information about 2011 flight hours until August 2012. 
According to NTSB, accuracy depends on the record-keeping habits and memories of aircraft owners, and in some cases, the aircraft owners' ability to obtain needed information from pilots who fly their aircraft. Though some portion of aircraft owners may record each flight in their logbooks, to which they can refer to complete the survey, logging each flight is not mandatory. To the extent aircraft owners rely on their recollection of flight hours flown in the previous year, long delays such as the one occurring this year are likely to further degrade the resulting information. Second, the survey has long suffered from low response rates, and this shortcoming, combined with limited information about the population, can call into question any estimates based on the survey's results. Since the current method for calculating the response rate was implemented in 2004, the overall response rate has ranged from 43 to 47 percent annually through 2010. The primary problem with low response rates is that they can lead to biased estimates if survey respondents and nonrespondents differ with regard to the variables of interest—in this case, annual flight hours. According to guidance from the Office of Management and Budget, agencies should plan to conduct item-level bias analyses if the expected response rate of the survey is below 70 percent and to consider the anticipated response rate in the decision to proceed with the survey. In 2011, the survey contractor completed a nonresponse analysis and concluded that there was no evidence of significant bias. However, relatively little is known about the aircraft owners who do not respond and, as a result, the contractor and we concluded that the sample is not rich enough in information to understand the differences between the two groups. For instance, there may be certain characteristics of owners that are associated with flying habits, such as the owner's age or certification level. Though a low response rate does not necessarily imply bias, it does raise the possibility of it. Further, the ability to detect any such bias is limited by what is known about those who do not respond. Given these conditions, bias remains a serious concern. An alternative data collection method implemented in 2004 for owners of multiple aircraft may also introduce bias to the survey's flight-hour estimates. In an effort to improve response rates among owners of multiple aircraft who were less likely to respond because of the burden of multiple forms, the survey administrators developed a modified data collection procedure for these owners. This includes sending out a form and calling these owners to verify receipt of the survey and to encourage participation. Survey staff also collect essential data—including the number of hours flown—during these phone calls. This alternative method accounted for data for approximately 23 percent of the aircraft owners responding to the survey that estimated 2010 flight hours. These efforts may have improved response rates, but these owners, the aircraft they own, and their use of the aircraft likely differ from owners of a single aircraft. By encouraging responses from a particular set of owners, survey estimates may be biased. Flight hours account for what stakeholders refer to as "exposure," or how often particular types of operations or aircraft are flown. FAA's flight hour estimates can provide a general sense of the relationship between hours and accidents.
However, the methodological and conceptual limitations we have identified call the estimates’ precision into question. As a result, these estimates may not be sufficient for drawing conclusions about small changes in accident rates over time—including FAA’s progress toward its goal to reduce the fatal general aviation accident rate per 100,000 flight hours by 10 percent over 10 years. Implementing alternative means of collecting flight hour data, such as requiring the reporting of aircraft engine-revolution or run-time data, could supplement or replace the data generated through the survey and add rigor to FAA’s flight-hour estimates. Moreover, more precise flight-hour data could allow FAA to better target its safety efforts at subpopulations within the general aviation community. This could include reviewing an industry segment’s characteristics, such as the number of fatal accidents relative to its portion of estimated flight hours and setting a measurable goal for improving safety within that segment. Though FAA has attempted to address the disproportionate number of fatalities within the E-AB community by developing an advisory circular to encourage transition training for pilots, it has not set a specific goal for reducing fatal accidents in that segment. FAA and NTSB, to their credit, have recognized that flight-hour estimates derived from the general aviation survey are imperfect. FAA has discussed ways to improve its flight-hour data, including requiring general aviation owners to report flight hours (in the form of engine-revolution or run-time data) directly to FAA during aircraft registration renewals or at the annual aircraft maintenance check. However, collecting data from these alternative sources has not progressed beyond internal discussions. In addition, organizations representing pilots have generally been opposed to suggestions for increased data collection, which they view as potential impediments to flying. According to these groups, general aviation pilots typically would prefer to avoid additional regulation or federal involvement. In 2005, NTSB explored using alternative approaches to determining annual general aviation activity, approaches that involved using other measures as proxies for hours flown—including the number of active pilots and fuel consumption. However, there are shortcomings to each of these options. As discussed previously, active pilots are defined as those who have current medical certifications; this is not related to whether the pilot actually flew in a given year. And while aviation gas consumption could be a proxy measure for piston engine aircraft activity, some piston- engine aircraft are used for operations other than general aviation. Further, jet fuel consumption cannot reasonably be used as a proxy for the general aviation activity of turbine engine aircraft because of the many types of operations (e.g., air taxi, air ambulance, etc.) flown by these aircraft. In 2008, FAA set a goal to reduce the fatal general aviation accident rate by 10 percent—from a baseline of 1.12 fatal accidents per 100,000 flight hours to 1 fatal accident per 100,000 flight hours—over 10 years, from 2009 to 2018. This single long-term safety goal may mask problems in certain segments of the community. The goal stemmed from FAA’s desire to have a target for its general aviation safety improvement efforts that accounted for changes in flight activity over time. 
According to FAA officials, they were looking for a goal that was achievable and represented an improved level of safety. FAA did not meet the annual targets for the goal in 2009 and 2010 and, according to projections of flight activity, it does not appear FAA will meet its target in 2011. This singular goal is applied to an industry that is diverse in aircraft types and operations—some of which experience accidents at a higher rate than others. General aviation airplanes differ significantly in size and performance, ranging from single-seat E-AB airplanes to large corporate jets. The types of flying and pilot experience also vary by segment. Some private pilots may only fly a few times each year, while some corporate pilots may keep a schedule similar to that of a commercial airline pilot. In addition, given the expense of flying and maintaining an airplane, downturns in the economy can decrease activity in some segments of general aviation. Changes in flight activity in certain segments of the industry could mask or minimize problems in others and contribute to a rate that does not accurately reflect the trends in the individual segments. (See fig. 4.) For instance, total general aviation flight hours have decreased since the most recent recession, but some segments have declined at a faster rate than others. Personal flying hours in 2010 were 4 percent lower than they were in 2008; corporate flying hours, by comparison, were almost 15 percent lower in 2010 than in 2008. Historically, corporate flying has been one of the safest types of general aviation operations. From 1999 through 2010, corporate airplane operations accounted for just 1 percent of fatal general aviation accidents but 14 percent of flight hours. And from 2008 through 2011, there were no fatal accidents involving corporate airplane operations. As a result, changes in corporate flight activity could result in changes in the overall fatal accident rate that are not necessarily a reflection of changes in safety but rather a reflection of the changing composition of general aviation flight activity. In addition, as previously discussed, the rate is based on estimates of annual general aviation flight hours that may not be reliable. There has been some discussion within FAA and industry about implementing separate goals for each segment of general aviation. According to one stakeholder we interviewed, the types of operations— even among fixed-wing aircraft—differ enough to warrant such a disaggregation. He explained that an hour flown during a corporate operation, during which an advanced aircraft flies from point to point with a significant portion of the time spent en route, is quite different from a pilot flying for pleasure and practicing maneuvers and take-offs and landings—the phase of flight when most accidents occur. However, other stakeholders we interviewed maintained that they all fly under the same operating rules, so it is proper to consider the safety of general aviation as a whole. Given the significant dissimilarities among the various general aviation sectors, along with the varied accident and fatality rates, setting separate safety improvement goals would allow FAA to take a more risk- based approach and target its resources and safety improvement efforts to the unique characteristics of and risks posed by each sector. FAA has embarked on key initiatives to achieve its goal of a 10-percent reduction in the fatal general aviation accident rate per 100,000 flight hours by 2018. 
One is the long-standing General Aviation Joint Steering Committee (GAJSC), which is led by the Office of Accident Investigation and Prevention. More recently, FAA announced a 5-year strategy to improve general aviation safety that was developed by the General Aviation and Commercial Division of the Flight Standards Service. Although both initiatives work toward the overall goal of reducing general aviation fatalities, the GAJSC is using a data-driven approach to identify risks in general aviation operations and propose mitigations, while the 5-year strategy is composed of a wide variety of activities under four focus areas. In January 2011, FAA renewed the GAJSC, a joint FAA effort with the general aviation industry, the National Aeronautics and Space Administration (NASA), and NTSB that in 1998 was part of the Safer Skies Initiative. Utilizing the model of the Commercial Aviation Safety Team (CAST), the GAJSC's goal is to focus limited government and industry resources on data-driven risk reductions and solutions to general aviation safety issues. The GAJSC consists of a steering committee that provides, among other things, strategic guidance and membership outreach. It also consists of a safety analysis team (SAT), which determines future areas of study and charters safety studies, among other things. GAJSC officials indicated that they would charter working groups as issues for study were identified. The first working group of the renewed GAJSC focused on loss of control in approach and landing accidents. This area was selected because, according to analyses of NTSB accident data for fatal airplane accidents that occurred from 2001 through 2011 and for which NTSB had completed its investigation, loss of control was the number one causal factor. The working group divided into three subgroups—reciprocating non-E-AB aircraft, turbine engine aircraft, and E-AB aircraft—and agreed upon a sample of 30 accidents to be analyzed by each. Despite issues such as a lack of data and inconsistent member participation, the working group developed 83 intervention strategies. These strategies were used to develop the 27 safety enhancements that were presented to the GAJSC for approval. The GAJSC approved 23 of the safety enhancements. The next steps will include developing detailed implementation plans for each of the strategies, with the SAT conducting resource/benefit evaluations of each plan. The SAT then will determine which are the most effective solutions, draft a master strategic plan, and submit the plan to the GAJSC for approval. Implementation is expected to begin upon approval. During implementation, the SAT will be responsible for tracking implementation schedules and levels, tracking the effectiveness of the intervention strategies, and recommending areas for future study. We believe that with the GAJSC's renewal and adoption of CAST-like methods, it has the potential to contribute to a reduction in general aviation accidents and fatalities over the long term. In March 2011, FAA announced its 5-year strategy to improve general aviation safety. This initiative is a complementary effort to the work of the GAJSC. FAA described the strategy as a nonregulatory approach conducted in partnership with the general aviation community and coordinated across FAA lines of business.
The strategy has four focus areas—(1) risk management, (2) safety promotion, (3) outreach and engagement, and (4) training—and includes a 2-year review and the development of validation metrics as each phase of the plan is implemented. FAA initially planned to concentrate its risk management efforts in three areas: (1) the top 10 causes and contributing factors in fatal general aviation accidents—initiated in coordination with the GAJSC, (2) E-AB aircraft, and (3) agricultural operations, which comprise one segment of the general aviation sector. To begin this effort, an FAA team identified the top ten causes of fatal general aviation accidents as well as the leading contributing factors, and provided the information to the GAJSC. The GAJSC, as previously discussed, is using the results of the data analysis to focus its efforts on loss-of-control accidents during approach and landing. For the safety promotion aspect of its 5-year strategy, FAA relies on the FAA Safety Team (FAASTeam). Created in September 2004 as the education and outreach arm of FAA, the FAASTeam consists of 154 FAA employees in eight regional field offices, along with 32 groups and 2,500 individual members from the general aviation industry. In 2011, FAA refocused the FAASTeam from national and international activities to promoting general aviation safety and technical proficiency through a host of nationwide seminars and contact with pilots at airports. A significant part of the FAASTeam's new focus is the annual FAA safety standdown—a series of nationwide meetings that highlight issues of concern for general aviation and include industry and GAJSC member participation. The 2012 standdown focused on loss of control, the focus of a GAJSC working group, from three different perspectives: (1) preflight mistakes, (2) aeronautical decision making, and (3) handling a loss of control. In addition, the FAASTeam is conducting workshops for certified flight instructors to increase the quality of training offered to general aviation pilots. The FAASTeam has also been examining intervention strategies by working directly with designated pilot examiners to promote its educational opportunities to all applicants for practical tests. In its outreach and engagement efforts for the 5-year strategy, FAA has briefed aviation associations, type clubs, and flight instructors, and, with the assistance of the Aviation Accreditation Board International, held a symposium on flight training with academia in July 2011. FAA has also reached out to major aviation insurance providers. As a result of these and other efforts, FAA reports that it has strengthened its links with aviation associations while also improving its outreach efforts to type clubs. The training portion of FAA's 5-year strategy includes chartering an aviation rulemaking committee on pilot testing standards and training, expanding its focus on certified flight instructors, and revamping the WINGS pilot proficiency program. In September 2011, FAA announced the establishment of an aviation rulemaking committee to address concerns from AOPA, SAFE, and others about the testing and training standards for pilots. The rulemaking committee focused on the certified flight instructor, private pilot, instrument rating, and commercial pilot certificates. It made nine recommendations to FAA to enhance the pilot-testing and pilot-training processes.
The recommendations included establishing a stakeholder body to assist in the development of knowledge test questions and handbook content as well as transitioning to a single testing standard document for the knowledge test. FAA concurred with most of the rulemaking committee's recommendations. To increase its focus on certified flight instructors, FAA is reviewing certified flight instructor recurrent training and renewal requirements. FAA also updated the advisory circular on flight instructor courses and published it in September 2011. The FAASTeam's voluntary WINGS pilot proficiency program is being revamped to encourage more participation. An FAA-established industry group has been surveying pilots to determine what changes need to be made to the WINGS program. Once the survey is completed, the resulting data will be analyzed and recommendations for changes will be made by the end of fiscal year 2012. FAA officials anticipate implementing changes to the program as funding becomes available in fiscal year 2013. FAA's 5-year strategy to improve general aviation safety suffers from several shortcomings that hinder its potential for success. First, senior FAA officials acknowledged that there are no specific performance goals or measures for the activities under the 5-year strategy. The officials said that because the goal of the initiative, as a whole, is to change general aviation culture, the strategy's success will be measured through changes in the general aviation fatal accident rate. They also indicated that they are developing validation metrics as each phase of the plan is implemented. However, successful results-oriented organizations measure their performance at each organizational level by developing performance measures. Without performance goals or measures for the individual initiatives implemented under the 5-year strategy, FAA will not be able to evaluate the success or failure of those activities, regardless of whether the fatal accident rate is reduced. Further, FAA has yet to meet its annual target for the general aviation fatal accident rate goal and may not meet the overall goal by 2018. Therefore, it is even more crucial that FAA determine whether these activities have been successful. Second, the strategy was developed without the initial input of significant stakeholders—the GAJSC and the general aviation industry. Successful agencies we have studied based their strategic planning, to a large extent, on the interests and expectations of their stakeholders, and stakeholder involvement is important to ensure that agencies' efforts and resources are targeted at the highest priorities. According to officials from the GAJSC and the general aviation industry groups we contacted, although they were briefed on the strategy, they were not consulted in its development and were surprised by the announcement of the strategy. General aviation industry trade groups, type clubs, and other organizations are active in promoting a safety culture and continuous education among their members. For example, AOPA offers numerous seminars each year to educate the pilot community, and EAA offers advisory programs for experimental aircraft builders and pilots. Further, many initiatives are joint efforts of FAA and the industry.
Involving stakeholders in strategic planning efforts can help create a basic understanding among the stakeholders of the competing demands that confront most agencies, the limited resources available to them, and how those demands and resources require careful and continuous balancing. FAA officials have indicated that their initial publication of the strategy served as a "straw man" for obtaining industry's input and that there has been industry acceptance of the strategy as demonstrated by various industry groups' development of plans and programs supporting the strategy. However, a lack of industry input into the development and announcement of the strategy jeopardizes its prospects for acceptance and success. This may be indicated in the current perspective of two industry groups—which is that the best use of industry resources to improve general aviation safety is through the work of the GAJSC. Third, the FAASTeam, which will be the main vehicle for promoting the 5-year strategy to the industry, lacks the confidence of two significant general aviation industry stakeholders we interviewed, and its reorganization has not been completed. These industry stakeholders indicated that there is inconsistency in the focus of the FAASTeam. One stakeholder noted that industry "struggles to understand the role of the FAASTeam," and the other stated that the FAASTeam is "well intentioned, but unfocused." In addition, FAA initially planned to reorganize the FAASTeam to reduce the number of volunteers to a strong core group and to include a national FAASTeam located in Washington, D.C. However, a senior FAA official recently indicated that the restructuring of the FAASTeam is in flux and that the plan to reduce the number of volunteers to a strong core group does not begin until 2013. We believe that until there is a strong performance management structure, input and buy-in from industry, and a respected and organized FAASTeam, the effectiveness of the 5-year strategy will be in jeopardy. Beyond the GAJSC and the 5-year strategy, FAA has taken other actions intended to improve general aviation safety. Formed a rulemaking committee to recommend revisions to the small airplane airworthiness standards: In August 2011, FAA chartered a rulemaking committee to reorganize part 23—which promulgates airworthiness standards for small airplanes—according to airplane performance and complexity criteria as opposed to the traditional criteria of airplane weight and propulsion. The goals of this rulemaking committee include increasing safety and decreasing certification costs. Co-chaired by the manager of FAA's Small Airplane Directorate, the rulemaking committee includes members representing other sections of the Aircraft Certification and Flight Standards Services as well as members from industry groups, manufacturers, and foreign aviation authorities. The committee is expected to complete its work by the summer of 2013. Encouraging adoption of a safety management system (SMS): In guidance issued in April 2011, FAA encouraged general aviation business and corporate operators to develop and implement SMS. Since fiscal year 2007, the Weather Camera Program has funded the procurement and installation of 182 weather camera sites in Alaska. The cameras provide near real time video images of sky conditions at airports, mountain passes, and strategic VFR locations, such as high-use air routes, to enhance pilots' situational awareness. (Visual flight rules govern the procedures for conducting flight under visual conditions, as opposed to instrument flight rules, which govern the procedures for conducting flights using instruments.)
According to FAA, this new capability is providing measurable reductions in weather-related VFR accidents in Alaska. FAA’s goal is to install a total of 221 weather camera sites. According to FAA, new technologies such as inflatable restraints (air bags), ballistic parachutes, weather in the cockpit, angle-of-attack indicators, and terrain avoidance equipment could significantly reduce general aviation fatalities. Angle of attack indicators and inflatable restraints have the greatest likelihood of significantly improving safety. Angle-of-attack indicators provide the pilot with a visual aid to prevent loss of control of the aircraft. Previously, cost and complexity of indicators limited their use to the military and commercial aircraft. FAA has streamlined the approval of angle-of-attack indicators for general aviation aircraft and is working to promote the retrofit of the existing fleet. FAA is also streamlining the certification and installation of inflatable restraints with the goal of making all general aviation aircraft eligible for installation. Further, FAA is working with manufacturers to define equipage requirements and support the Next Generation Air Transportation System (NextGen)—a new satellite-based air traffic management system that by 2025 will replace the current radar-based system—by streamlining the certification and installation of NextGen technologies. Some industry experts told us, however, that there might not be future opportunities to significantly improve general aviation safety with the aid of technology since most accidents are still attributed to pilot error. To further reduce the number of fatal general aviation accidents, FAA needs to effectively target its accident mitigations, as it is attempting to do through the GAJSC. The agency’s ability to do so, however, is limited by a lack of pilot data. For instance, FAA does not maintain certain key information about general aviation pilots, including how many are actively flying each year and whether they participate in recurrent training other than FAA’s own WINGS program. Without this information, FAA cannot determine the potential effect of the various sources and types of training on pilot behavior, competency, and the likelihood of an accident. The lack of pilot data also makes it difficult to identify the root causes of accidents attributed to pilot error and determine appropriate risk mitigation opportunities. The annual survey FAA uses for collecting general aviation flight-activity data suffers from significant limitations—limitations that call into question the resulting activity estimates FAA produces as well as the accident rates calculated by NTSB. Though FAA has improved the survey over the years, our concerns remain because the survey continues to experience response rates below 50 percent and relies on the record-keeping habits and memories of survey respondents who sometimes have to recall details that occurred more than 12 months earlier. Further, other methods for obtaining general aviation flight-activity data have encountered resistance from the industry. 
Without a more accurate reporting of general aviation flight activity, such as requiring the reporting of flight hours at certain intervals—e.g., during registration renewals or annual maintenance inspections—FAA lacks assurance that it is basing its policy decisions on a true measure of general aviation trends, and NTSB lacks assurance that its calculations of accident and fatality rates accurately represent the state of general aviation safety. Given the diversity of the general aviation community, illustrated, for example, by the wide variety of aircraft in the fleet and the varying nonfatal and fatal accident rates among the general aviation segments, the adoption of a singular agency goal (a 10 percent reduction in the general aviation fatal accident rate per 100,000 flight hours by 2018) is not the most effective risk-based tool for achieving general aviation safety gains. The goal does not take into account the variety of general aviation operations or the risks associated with each. For example, one hour flown during a personal operation is not the same as one hour flown during a corporate operation. Also, economic conditions affect each segment differently, making it difficult to discern if a change in the accident rate is an indication of a change in the safety of the industry. If the goal is reached, the overall success might mask ongoing safety issues in one or more segments of the community. FAA officials have indicated that the success of the 5-year strategy—which is composed of numerous initiatives—will be measured through changes in the general aviation fatal accident rate. However, successful results-oriented organizations measure their performance at each organizational level by developing performance measures. For this reason, we think it is important for FAA to develop performance measures for the significant initiatives underlying the 5-year strategy. If FAA does not measure the performance of the significant underlying initiatives, it will not be able to determine whether the initiatives were effective in their own right. In addition, in order for the FAASTeam to be successful in its promotion of the 5-year strategy, it must be well respected within the general aviation community. We are not making a recommendation regarding the FAASTeam at this time since plans for restructuring it are in flux and its volunteer force realignment is not scheduled to begin until 2013. To enhance FAA's efforts to improve general aviation safety, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following four actions: To expand the data available for root cause analyses of general aviation accidents and other purposes, collect and maintain data on each certificated pilot's recurrent training, and update the data at regular intervals. Improve measures of general aviation activity by requiring the collection of the number of hours that general aviation aircraft fly over a period of time (flight hours). FAA should explore ways to do this that minimize the impact on the general aviation community, such as by collecting the data at regular events (e.g., during registration renewals or at annual maintenance inspections) that are already required. To ensure that ongoing safety issues are addressed, set specific general aviation safety improvement goals—such as targets for fatal accident reductions—for individual industry segments using a data-driven, risk management approach.
To determine whether the programs and activities underlying the 5- year strategy are successful and if additional actions are needed, develop performance measures for each significant program and activity underlying the 5-year strategy. We provided the Department of Transportation (DOT) with a draft of this report for review and comment. DOT officials agreed to consider our recommendations and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Chairman of NTSB, and interested parties. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me on (202) 512-2834 or at dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Our objective was to conduct a comprehensive review of general aviation safety. To do so, we addressed the following questions: (1) what are the characteristics and trends in general aviation accidents from 1999 to 2011 and (2) what actions have been taken by the Federal Aviation Administration (FAA) to improve general aviation safety? To identify the characteristics of and trends in general aviation accidents, we conducted a data analysis using the National Transportation Safety Board’s (NTSB) Aviation Accident Database. We limited our analysis to accidents involving airplanes operating under Part 91 of the Federal Aviation Regulations that occurred from January 1, 1999, through December 31, 2011, in the U.S. We excluded accidents that occurred in U.S. territories, possessions, and international waters. To assess the reliability of the NTSB data, we reviewed documentation on data collection efforts and quality assurance processes, talked to knowledgeable NTSB officials about the data, and checked the data for completeness and reasonableness. We determined that these data were sufficiently reliable for the descriptive and comparative analyses used in this report. To supplement our analysis of the NTSB accident data, we also analyzed FAA’s general aviation flight-hour estimates for 1999 through 2010 and estimated active pilot data for 2011. To assess the reliability of these data, we reviewed documentation on data collection efforts and quality assurance processes and talked to knowledgeable FAA officials. In assessing the reliability of the flight-hour estimates, we also spoke with the contractors responsible for executing the survey that yielded these estimates, the General Aviation and Part 135 Survey. We determined that the flight-hour data and the active pilot data were sufficiently reliable for the purposes of this engagement. Specifically, these data elements were sufficiently reliable to provide meaningful context for the numbers and characteristics of accidents that we report. However, we also determined that because of the methodological limitations identified—a low response rate and the potential for nonresponse bias—the flight-hour estimates developed from the General Aviation and Part 135 Survey may not have the precision necessary to measure small changes in the general aviation accident rate over time. 
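To make the precision concern concrete, the following is a minimal sketch using entirely hypothetical figures; the accident count, flight-hour estimate, and 10 percent error band are assumptions for illustration, not FAA or NTSB statistics. It shows how the fatal accident rate per 100,000 flight hours is computed and how uncertainty in the flight-hour denominator carries through to the rate.

```python
# Illustrative only: these figures are assumptions, not FAA or NTSB data.
fatal_accidents = 270          # hypothetical fatal accidents in one year
estimated_hours = 24_000_000   # hypothetical survey-based flight-hour estimate
relative_error = 0.10          # assume the hour estimate could be off by +/-10%

def rate_per_100k(accidents, hours):
    """Fatal accident rate per 100,000 flight hours."""
    return accidents / hours * 100_000

point = rate_per_100k(fatal_accidents, estimated_hours)
low = rate_per_100k(fatal_accidents, estimated_hours * (1 + relative_error))
high = rate_per_100k(fatal_accidents, estimated_hours * (1 - relative_error))

print(f"Point estimate: {point:.2f} fatal accidents per 100,000 flight hours")
print(f"Range if hours are off by 10 percent: {low:.2f} to {high:.2f}")
# A 10 percent error in the flight-hour denominator moves the rate by roughly
# 10 percent, which is as large as FAA's entire 10-percent reduction goal.
```

Under these assumptions, measurement error alone could move the rate by as much as the improvement FAA is trying to detect, which is why the survey's precision matters for gauging progress toward the goal.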
To identify actions FAA and others have taken to improve general aviation safety, we reviewed our prior reports as well as documents and reports from FAA, NTSB, NASA, and general aviation industry trade and other groups, including the Aircraft Owners and Pilots Association (AOPA), the Experimental Aircraft Association (EAA), and the Society of Aviation and Flight Educators (SAFE); FAA orders, notices, and advisory circulars; and applicable laws and regulations. We also determined the roles and responsibilities of FAA and NTSB in collecting and reporting general aviation safety data. In addition to interviewing officials from the various FAA offices and divisions responsible for general aviation safety, we interviewed aviation experts affiliated with various aviation industry organizations. (See table 4.) To obtain additional insight into the general aviation industry, we attended the September 2011 AOPA Aviation Summit in Hartford, Connecticut; the March 2012 Annual FAA Aviation Forecast Conference in Washington, D.C.; the February 2012 Northwest Aviation Conference in Puyallup, Washington; and the June 2012 NTSB General Aviation Forum in Washington, D.C. In addition to the contact named above, the following individuals made important contributions to this report: H. Brandon Haller, Assistant Director; Pamela Vines; Jessica Wintfeld; Russ Burnett; Bert Japikse; Delwen Jones; Josh Ormond; and Jeff Tessin.

Although the U.S. aviation system is one of the safest in the world, hundreds of fatalities occur each year in general aviation, which includes all forms of aviation except commercial and military. The general aviation industry is composed of a diverse fleet of over 220,000 aircraft that conduct a wide variety of operations, from personal pleasure flights in small piston aircraft to worldwide professionally piloted corporate flights in turbine-powered aircraft. According to 2011 National Transportation Safety Board (NTSB) data, 92 percent of that year's fatal accidents occurred in general aviation. The majority of general aviation accidents are attributed to pilot error. GAO was asked to examine the (1) characteristics of and trends in general aviation accidents from 1999 through 2011 and (2) recent actions taken by FAA to improve general aviation safety. GAO analyzed NTSB accident data, reviewed government and industry studies and other documents, and interviewed FAA and NTSB officials and industry stakeholders. The number of nonfatal and fatal general aviation accidents decreased from 1999 through 2011; more than 200 fatal accidents occurred in each of those years. Airplanes, particularly single-engine piston airplanes, flying personal operations were most often involved in accidents. Most general aviation accidents are attributed to pilot error and involved a loss of aircraft control. Some segments of the industry experienced accidents disproportionately to their total estimated annual flight hours. For example, among the airplane categories we reviewed, experimental amateur-built airplanes were involved in 21 percent of the fatal accidents but accounted for only 4 percent of the estimated annual flight hours. In another example, corporate operations were involved in about 1 percent of fatal accidents while accounting for 14 percent of estimated annual flight hours. We can draw some conclusions about general aviation accident characteristics, but limitations in flight activity and other data preclude a confident assessment of general aviation safety.
The Federal Aviation Administration's (FAA) survey of general aviation operators, on which the agency bases its annual flight-hour estimates, continues to suffer from methodological and conceptual limitations, even with FAA's efforts to improve it over the years. To obtain more reliable data, FAA has discussed requiring that flight-hour data be reported, such as during annual aircraft maintenance inspections. FAA has set a goal to reduce the fatal general aviation accident rate per 100,000 flight hours by 10 percent from 2009 to 2018. However, given the diversity of the industry and shortcomings in the flight activity data, this goal is not sufficient for achieving reductions in fatality rates among the riskier segments of general aviation. Further, achieving the goal could mask continuing safety issues in segments of the community. FAA has embarked on several initiatives to meet its goal of reducing the fatal general aviation accident rate by 2018. These include the renewal of the General Aviation Joint Steering Committee (GAJSC) with a data-driven approach and the implementation of the Flight Standards Service's 5-year strategy. The GAJSC, a government-industry partnership, focuses on analyzing general aviation accident data to develop effective intervention strategies. The 5-year strategy involves numerous initiatives under four focus areas: (1) risk management, (2) safety promotion, (3) outreach and engagement, and (4) training. The FAA Safety Team (FAASTeam), which is composed of FAA staff and industry volunteers, will be responsible for carrying out significant portions of the strategy. While the GAJSC's efforts are modeled on an approach deemed successful in contributing to a reduction in fatal commercial aviation accidents, shortcomings in the 5-year strategy jeopardize its potential for success. For example, the strategy lacks performance measures for the significant activities that comprise it. Without a strong performance management structure, FAA will not be able to determine the success or failure of the significant activities that underlie the 5-year strategy. GAO recommends, among other things, that FAA require the collection of general aviation aircraft flight-hour data in ways that minimize the impact on the general aviation community, set safety improvement goals for individual general aviation industry segments, and develop performance measures for the significant activities underlying the 5-year strategy. Department of Transportation officials agreed to consider GAO's recommendations and provided technical comments, which GAO incorporated as appropriate.
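The masking concern described above can be illustrated with a short, hypothetical calculation; the segment rates and flight-hour figures below are invented for illustration and are not FAA or NTSB statistics. The sketch shows how the overall fatal accident rate can rise simply because the safest segment's share of flight hours shrinks, even though no segment becomes any less safe.

```python
# Hypothetical segment-level figures; illustrative only, not FAA/NTSB data.
segments = {
    # fatal rate per 100,000 hours, and flight hours in two successive years
    "Personal":  {"rate": 1.8, "hours_y1": 12_000_000, "hours_y2": 11_500_000},
    "Corporate": {"rate": 0.1, "hours_y1": 3_500_000,  "hours_y2": 3_000_000},
}

def overall_rate(year_key):
    """Overall fatal accident rate implied by fixed segment rates and a given
    mix of flight hours."""
    accidents = sum(s["rate"] * s[year_key] / 100_000 for s in segments.values())
    hours = sum(s[year_key] for s in segments.values())
    return accidents / hours * 100_000

print(f"Overall rate, year 1: {overall_rate('hours_y1'):.2f} per 100,000 hours")
print(f"Overall rate, year 2: {overall_rate('hours_y2'):.2f} per 100,000 hours")
# Each segment's own rate is identical in both years; the overall rate rises
# only because the safer (corporate) segment's share of flight hours shrank.
```

A shift in the activity mix of this kind would not affect segment-specific goals, which is one reason the report recommends setting them.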
VA pays monthly disability compensation to veterans with service-connected disabilities (i.e., injuries or diseases incurred or aggravated while on active military duty) according to the severity of the disability. VBA staff in 57 regional offices process disability compensation claims. These claims processors include Veterans Service Representatives (VSR), who gather evidence needed to determine entitlement, and Rating Veterans Service Representatives (RVSR), who decide entitlement and the rating percentage. Veterans may claim more than one medical condition, and a rating percentage is assigned for each claimed medical condition, as well as for the claim overall. In fiscal year 2013, VBA decided more than 1 million compensation claims. Since fiscal year 1999, VBA has used STAR to measure the decisional accuracy of disability compensation claims. Through the STAR process, VBA reviews a stratified random sample of completed claims, and certified reviewers use a checklist to assess specific aspects of each claim. Claims are randomly sampled each month, and the data are used to produce estimates of the accuracy of all completed claims. VA reports national estimates of accuracy from its STAR reviews to Congress and the public through its annual performance and accountability report and annual budget submission. VBA also produces regional office accuracy estimates, which it uses to manage the program. Regional office and national accuracy rates are reported in a publicly available performance database, the Aspire dashboard. The STAR review has two major components. The benefit entitlement review assesses whether the correct steps were followed in addressing all issues in the claim and collecting appropriate evidence, and whether the resulting decision was correct, including effective dates and payment rates. Accuracy performance measures are calculated based on the results of the benefit entitlement review. The STAR review also assesses whether claims processors appropriately documented the decision and notified claimants. VBA has also begun measuring accuracy at the level of the individual medical issues within a claim; under this approach, a claim with an error on one of several issues is counted as partially accurate under the new issue-based measure. By comparison, under the existing claim-based measure, the claim would be counted as 0 percent accurate unless the error did not affect benefits when considered in the context of the whole claim. In March 2014, VBA reported a national estimate of issue-based accuracy in its fiscal year 2015 annual budget submission and plans to update this estimate in VA's next performance and accountability report. VBA also produces issue-based estimates by regional office, and reports them in the Aspire dashboard. For fiscal year 2013, the regional office claim-based accuracy rates ranged from an estimated 78.4 to 96.8 percent, and the issue-based accuracy rates ranged from an estimated 87.0 to 98.7 percent. Beyond STAR, VBA has programs for conducting regional office quality reviews and for measuring the consistency of decisions. In March 2012, VBA established quality review teams (QRT) with one at each regional office. A QRT conducts individual quality reviews of claims processors' work for performance assessment purposes. The QRT also conducts in-process reviews before claims are finalized to help prevent inaccurate decisions by identifying specific types of common errors. Such reviews also serve as learning experiences for staff members. Since fiscal year 2008, VBA has also conducted studies to assess the consistency of disability claims decisions across regional offices.
Initially, this initiative used inter-rater reliability (IRR) studies to assess the extent to which a cross-section of claims processors from all regional offices agree on an eligibility determination when reviewing the entire body of evidence from the same claim. In 2013, VBA revised its approach and began using questionnaires as its primary means for assessing consistency. A questionnaire includes a brief scenario on a specific medical condition for which claims processors must correctly answer several multiple-choice questions. When calculating accuracy rates, VBA does not always follow generally accepted statistical practices. For example, VBA does not weight the results of its STAR reviews to reflect its approach to selecting claims by regional office, which can affect the accuracy of estimates. According to our analysis of VBA data, weighting would have resulted in a small change to VBA's nationwide claim-based accuracy rate for fiscal year 2013—from 89.5 to 89.1 percent. At the regional level, 29 of the 57 offices would have experienced a somewhat greater increase or decrease in their accuracy rates. Without taking weighting into consideration, regional office accuracy performance may be misleading and VBA management may focus corrective action or positive recognition on the wrong offices. For example, by taking weighting into account for the 57 regional offices in fiscal year 2013, the Reno regional office would have improved in relative accuracy by 12 places (from 34th to 22nd place), whereas the Los Angeles office would have declined in relative accuracy by 10 places (from 46th to 56th place) (see fig. 1). VBA also does not calculate the confidence intervals associated with the accuracy estimates that it generates, which prevents a complete understanding of trends over time and comparisons among offices. Accuracy estimates for different regional offices, or for the same office over time, are considered statistically different from each other when their confidence intervals do not overlap. As such, meaningful comparisons could be made on the basis of our analysis between, for example, Fort Harrison's estimated claim-based accuracy rate (ranked #1) and New York's estimated claim-based accuracy rate (ranked #36) because their confidence intervals did not overlap in fiscal year 2013 (see fig. 2). Conversely, comparisons between Fort Harrison's and Milwaukee's or Pittsburgh's estimated claim-based accuracy rates (ranked #2 and #35 respectively)—which had overlapping confidence intervals in fiscal year 2013—require a statistical test to determine if their differences are statistically meaningful. In effect, the claim-based accuracy rate of Fort Harrison and those of the regional offices with the next 34 highest reported accuracy rates may not be meaningfully different despite being ranked 1 through 35 of 57. Similarly, according to agency officials, VBA also does not calculate the confidence intervals associated with its newer issue-based accuracy estimates, which prevents meaningful comparisons between those estimates as well. Because VBA produces issue-based estimates using the same sample drawn to produce claim-based estimates, it would have to take extra steps to calculate the associated confidence intervals. As with the claim-based accuracy estimates, not computing the confidence intervals associated with issue-based estimates limits VBA's ability to monitor its regional offices' relative performance and its overall performance over time.
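To illustrate the weighting and confidence-interval points above, the following is a minimal sketch using hypothetical office workloads, sample sizes, and review results; none of the figures are VBA data. It shows how a nationwide accuracy estimate shifts once each office's result is weighted by its share of completed claims, and how a simple normal-approximation 95 percent confidence interval can be attached to each office's estimate so that only offices with non-overlapping intervals are treated as clearly different.

```python
import math

# Hypothetical regional results: completed claims (workload), claims sampled
# for review, and claims found accurate. These numbers are illustrative only.
regions = {
    "Office A": {"workload": 40_000, "sampled": 252, "accurate": 231},
    "Office B": {"workload": 12_000, "sampled": 252, "accurate": 238},
    "Office C": {"workload": 25_000, "sampled": 252, "accurate": 224},
}

def accuracy_and_ci(accurate, sampled, z=1.96):
    """Point estimate and 95% normal-approximation confidence interval."""
    p = accurate / sampled
    half_width = z * math.sqrt(p * (1 - p) / sampled)
    return p, (p - half_width, p + half_width)

# Unweighted national estimate: every sampled claim counts equally,
# regardless of how many claims each office actually completed.
total_sampled = sum(r["sampled"] for r in regions.values())
unweighted = sum(r["accurate"] for r in regions.values()) / total_sampled

# Weighted national estimate: each office's accuracy is weighted by its
# share of the national workload, reflecting the stratified sample design.
total_workload = sum(r["workload"] for r in regions.values())
weighted = sum(
    (r["workload"] / total_workload) * (r["accurate"] / r["sampled"])
    for r in regions.values()
)

print(f"Unweighted national accuracy: {unweighted:.1%}")
print(f"Weighted national accuracy:   {weighted:.1%}")

for name, r in regions.items():
    p, (lo, hi) = accuracy_and_ci(r["accurate"], r["sampled"])
    print(f"{name}: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

In this sketch the weighted and unweighted national figures differ by about half a percentage point, and offices whose intervals overlap should not be ranked against one another without a formal test, such as a difference-of-proportions test.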
VBA's approach to measuring accuracy is also inefficient because it reviews more claims than are statistically required to estimate accuracy. VBA randomly selects about 21 claims per month from each of its regional offices for STAR review, regardless of the offices' varying workloads and historical accuracy rates. According to VBA, this uniform approach allows the agency to achieve a desired level of precision of its accuracy estimates for each regional office. However, accepted statistical practices would allow for fewer cases to be reviewed at regional offices where the number of claims processed has been relatively small or accuracy has been high. According to our analysis of fiscal year 2013 regional office workload and accuracy results, VBA could reduce the overall number of claims it reviews annually by about 39 percent (over 5,000 claims) and still achieve its desired precision for its regional office accuracy estimates. More efficient sampling could allow VBA to select fewer cases for review and free up limited resources for other important quality assurance activities, such as additional targeted accuracy reviews on specific types of error-prone or complex claims. Specifically, reviewing about 5,000 fewer claims could free up about 1,000 staff days because, according to VBA officials, STAR staff review at least 5 claims per day. Calculating weighted estimates and confidence intervals, and adjusting sampling according to shifting workloads and accuracy rates, requires use of statistical methodology. According to VBA officials we interviewed, although STAR management used a statistician to help develop the way in which they measure accuracy, it currently does not use a statistician to, for example, weight STAR results and calculate confidence intervals for accuracy estimates. Further, VBA officials said they did not consult a statistician when developing the new issue-based accuracy measure, but rather relied on the same sampling methodology and approach for estimating accuracy as for the claim-based measure. We have previously reported that to be useful, performance information must meet users' needs for completeness, accuracy, consistency, and validity, among other factors. In response to our draft July 2014 testimony based on preliminary work, VBA officials stated they are exploring alternatives to their current methodology for estimating accuracy. Beyond not following generally accepted statistical practices, VBA's STAR review systematically excludes certain claims, which may inflate accuracy rate estimates. Specifically, according to VBA officials, when a claim moves from one regional office to another, because a veteran has moved or workloads are redistributed, the database VBA uses to select claims for STAR review does not always reflect the office responsible for making the final determination for the claim. As a result, STAR staff often select for review, then subsequently de-select, claims that have changed regional office jurisdiction. Of the 14,286 rating claims randomly selected initially by VBA for review in fiscal year 2013, about 10 percent were de-selected because of a change in jurisdiction and replaced with other randomly selected claims. Those de-selected claims are not eligible for STAR review for the regional office that was ultimately responsible for the claim, thereby causing an underrepresentation of these claims in the STAR sample.
Such underrepresentation may inflate VBA's reported accuracy rate because redistributed claims have historically had lower accuracy rates than non-redistributed claims. In responding to our draft report, VBA indicated it is revising its procedures to ensure that claims selected for STAR review are included in the accuracy rate of the responsible regional office regardless of whether a change of jurisdiction occurred.

Federal agencies should report clear performance information to the Congress and the public to ensure that the information is useful for decision making. In prior work, we identified clarity as a key attribute of a successful performance measure, meaning that the measure is clearly stated and its associated methodology is identified. Measures that lack clarity may confuse or mislead users and not provide a good picture of how well the agency is performing. We have also reported on best practices in implementing related federal performance reporting requirements, such as those in the GPRA Modernization Act of 2010. Specifically, agencies must disclose information about the accuracy and validity of their performance information in their performance plans, including the sources for their data and actions to address any limitations.

VBA's accuracy reporting lacks methodological details that would help users understand the distinction between its two accuracy measures and their associated limitations. While VBA's new issue-based measure provides some additional perspective on the quality of claim decisions to date, VBA has not fully explained in its public reports how the issue-based and claim-based measures differ. For example, the issue-based measure tends to be higher than the claim-based measure because the former allows for claims to be considered partially correct, whereas the claim-based measure does not. According to VBA officials, the issue-based estimate provides a better measure of quality because veterans' claims have increasingly included multiple medical issues. Our analysis of STAR data confirms that as the number of issues per claim increases, the chance of at least one issue being decided incorrectly within a single claim increases because there are more opportunities for error (see fig. 3). However, VA did not report in its fiscal year 2015 budget request how these measures are calculated and why the issue-based measure might be higher than the claim-based measure. VA has also not reported these distinctions in its Aspire dashboard.

VBA also counts claims processing errors differently under its claim-based measure than it does under its issue-based measure but does not report these distinctions, which raises questions about the transparency and consistency of VBA's accuracy measures. For both measures, VBA differentiates between benefit entitlement errors that may financially affect the veteran and other errors, such as documentation and administrative errors that do not financially affect the veteran. For claim-based accuracy, VBA counts errors that financially affect the veteran now, but does not count errors that may financially affect the veteran in the future, although it works to correct both types of errors. For example, if one of several claimed medical conditions was rated incorrectly (e.g., 10 percent instead of 20 percent), but this error did not immediately affect the overall rating of the claim, VBA would not consider the claim in error because it did not affect the benefits that the veteran would receive.
For the issue-based accuracy measure, however, VBA would count this as an error even if the error did not immediately affect the veteran’s benefits. Unlike claim-based accuracy, issue-based accuracy may also include errors that would never affect future payments. For example, an incorrect effective date that is within the same month as the correct effective date does not affect benefits, but is counted as an error in VBA’s issue-based accuracy measure. Conversely, according to VBA officials, this is not counted as an error in its claim-based measure. According to our analysis of STAR data, up to 6.9 percent of reviewed claims in fiscal year 2013 had these types of errors (i.e., benefit entitlement errors that do not immediately and may never affect benefits), and if they were all counted as errors, VBA’s unweighted claim-based accuracy rate would have decreased by about 2 percent. Further, VA has not explained in public reports that its accuracy measures are estimates that have distinct confidence intervals and limitations. Users should be aware of these confidence intervals to make meaningful comparisons, for example, between the two measures or over time for the same measure. In terms of each accuracy measure’s limitations, the claim-based measure does not provide a sense of the proportion of issues that the agency decides correctly because the measure counts an entire claim as incorrect if any error is found. On the other hand, the issue-based measure does not provide a sense of the proportion of claims that the agency decides with no errors. In addition to its STAR reviews, VBA’s quality assurance framework includes other complementary activities, which have been enhanced to help meet its goal of 98 percent accuracy in fiscal year 2015. Specifically, VBA (1) established quality review teams (QRT) in March 2012 in regional offices as a means of strengthening its focus on quality where claims are processed, and (2) enhanced efforts to assess the consistency of decisions. Although regional offices were previously responsible for assessing individual performance, QRTs represent a departure from the past because QRT personnel are dedicated primarily to performing these and other local quality reviews. In addition, VBA requires QRT staff to pass a skills certification test annually—similar to VBA requirements for STAR staff and in contrast to requirements for claims processors who must pass a test every 2 years. In July 2013, VBA issued national guidance to ensure consistent QRT roles and practices across regional offices. For example, it included guidance on selecting individual quality review claim samples and conducting additional reviews for claims processors who do not meet their accuracy goals. In addition to conducting individual quality reviews, QRT personnel are charged with conducting in-process reviews of claims that are not yet finalized, looking for specific types of common errors. Quality reviewers are also responsible for providing feedback to claims processors on the results of their quality reviews, typically as reviews are completed, including formal feedback from the results of individual quality reviews and more informal feedback from the results of in-process reviews. In addition, at the four offices we contacted, quality reviewers are available to answer questions and provide guidance to claims processors as needed. VBA’s efforts to assess consistency of claims decisions have also expanded in recent years. 
Up until 2013, VBA largely relied on inter-rater reliability (IRR) studies to assess consistency, which to date have been time consuming and resource intensive. Claims processors typically required about 4 hours to review an entire claim. The process was administered by proctors in the regional offices and the results were hand-graded by national VBA staff. Given the resources involved, IRR studies have typically been limited to 300-500 claims processors (about 25-30 percent), randomly selected from the regional offices. In 2009, VBA expanded its consistency program to include questionnaires, which it now relies on more heavily to assess consistency. The more streamlined consistency questionnaires require less staff time to complete because, in addition to a brief scenario on a specific condition, participants have 10 or fewer multiple-choice questions to answer. The questionnaires are administered electronically through the VA Talent Management System, removing the need to proctor or hand-grade the tests, which has allowed VBA to significantly increase employee participation. A recent consistency questionnaire was taken by about 3,000 claims processing employees—representing all employees responsible for rating claims. Further, VBA now administers these studies more frequently, increasing from about 3 per year to 24 per year. According to VBA officials, they plan to further expand the use of consistency studies from two questionnaires per month to six to eight per month, pending approval of additional quality assurance staff.

VBA also has taken steps to coordinate its quality assurance efforts in several ways, such as systematically disseminating information on national accuracy and consistency results and trends to regional office management and QRTs, which in turn share this information with claims processing staff. With respect to STAR, in addition to receiving monthly updates on overall accuracy performance, regional offices receive quarterly reports with analyses of accuracy performance including information by error type. QRT reviewers also participate in monthly conference calls with STAR staff during which they discuss error trend information. While claims processing staff learn about errors they made on claims directly from STAR, managers or QRT members at each of the regional offices we contacted noted that they also share STAR trend data with claims processors during periodic training focused on STAR error trends. With respect to consistency studies, regional offices receive national results; regional office-specific results; and, since February 2014, individual staff results. Officials at each of the four regional offices we visited told us QRT staff share the results of consistency studies with staff and inform claims processors of the correct answers to the questions.

Coordination also occurs when QRT personnel disseminate guidance and support regional office training based on error trends identified through STAR and other quality assurance activities. Two of the four offices we contacted cited instances where they have used consistency study results for training purposes. At one office, the results from a consistency study were used to provide training on when to request an exam for certain conditions, such as tinnitus. In general, at each of the four offices, officials told us that QRT reviewers conduct, or work with regional office training coordinators to conduct, periodic training forums for claims processors.
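As an illustration of how results from a multiple-choice consistency questionnaire might be summarized, the sketch below scores hypothetical responses against an answer key and reports the percentage of correct answers by regional office, consistent with the report's observation that questionnaire results are used to identify offices and individuals needing further training. The answer key, offices, responses, and flagging threshold are assumptions for illustration and do not represent VBA's actual scoring procedure.

```python
# Illustrative scoring of a short multiple-choice consistency questionnaire.
# The answer key, offices, responses, and threshold are hypothetical;
# VBA's actual scoring procedure may differ.
ANSWER_KEY = ["B", "A", "D", "C", "B", "A", "C", "D"]  # 8 items, per the 10-or-fewer format

# (regional office, employee id) -> that employee's answers
responses = {
    ("Office A", "emp1"): ["B", "A", "D", "C", "B", "A", "C", "D"],
    ("Office A", "emp2"): ["B", "C", "D", "C", "B", "A", "C", "A"],
    ("Office B", "emp3"): ["B", "A", "D", "B", "B", "D", "C", "D"],
    ("Office B", "emp4"): ["A", "A", "D", "C", "B", "B", "C", "D"],
}

def percent_correct(answers):
    hits = sum(1 for given, key in zip(answers, ANSWER_KEY) if given == key)
    return hits / len(ANSWER_KEY)

# Aggregate by office so follow-up training can be targeted where scores are low.
by_office = {}
for (office, _employee), answers in responses.items():
    by_office.setdefault(office, []).append(percent_correct(answers))

THRESHOLD = 0.85  # assumed cutoff for flagging an office for follow-up
for office, scores in sorted(by_office.items()):
    average = sum(scores) / len(scores)
    note = "  <- candidate for targeted training" if average < THRESHOLD else ""
    print(f"{office}: average {average:.0%} correct across {len(scores)} staff{note}")

# Per-question miss counts point to the specific policy areas driving inconsistency.
misses = [0] * len(ANSWER_KEY)
for answers in responses.values():
    for i, (given, key) in enumerate(zip(answers, ANSWER_KEY)):
        if given != key:
            misses[i] += 1
print("Incorrect answers by question:", misses)
```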
Regional offices we contacted also supplement training with other communications informed by quality review results. For example, QRTs at three of the four regional offices we contacted produce periodic newsletters for regional office claims processors, which include guidance based on errors found in all types of reviews. Specifically, at one office, a newsletter was used to disseminate guidance on ensuring that a rating decision addresses all issues in a claim. The need for this guidance was identified on the basis of STAR and local quality review results.

Lastly, VBA coordinates its quality assurance activities by using STAR results to guide other quality assurance efforts. According to VBA officials, the agency has used STAR data to identify error trends associated with specific medical issues, which in turn were used to target efforts to assess consistency of decision-making related to those issues. Recent examples are (1) the August 2013 IRR study, which examined rating percentages and effective dates assigned for diabetes mellitus (including peripheral neuropathy); and (2) a February 2014 study on obtaining correct disability evaluations on certain musculoskeletal and respiratory conditions. In addition, according to VBA, the focus of in-process reviews performed by QRTs has been guided by STAR error trend data. VBA established in-process reviews in March 2012 to help the QRTs identify and prevent claim development errors related to medical examinations and opinions, which it described as the most common error type. More recently, VBA has added two more common error types—incorrect rating percentages and incorrect effective benefit dates—to its in-process review efforts. VBA officials stated that they may add other common error types based on future STAR error analyses.

While QRTs reflect VBA's increased focus on quality, during our site visits we identified shortcomings in QRT practices and implementation that could reduce their effectiveness. Specifically, we identified the following shortcomings: (1) the exclusion of claims processed during overtime to assess individual performance; (2) the inability to correct errors identified before a claim is finalized in certain situations; and (3) a lack of pre-testing of consistency questionnaires. Regarding the first shortcoming, we learned that three of the four offices we contacted had agreements with their local unions that prevented QRT personnel from reviewing claims processed during overtime to assess individual performance. As a result, those regional offices were limited in their ability to address issues with the quality of work performed during overtime. Centrally, VBA officials did not know which or how many regional offices excluded claims processed during overtime, or the extent to which such exclusions occurred nationally. According to VBA data, claims processed on overtime represented about 10 percent of rating-related claims completed nationally in fiscal year 2013. After we reported this finding, VBA issued guidance in August 2014 to regional offices stipulating inclusion of claims processed on overtime, and that the regional offices work with their local unions to rescind any agreements that exclude such claims from review.
Second, officials at four regional offices we contacted told us that they face a challenge in conducting individual quality and in-process reviews as expected because VBA's Veterans Benefits Management System lacks the capability to briefly pause the process and prevent claims from being completed while a review is still underway. VBA officials acknowledged that this was a problem for regional offices in completing reviews, based on anecdotal information from regional offices, but did not have information on the extent to which this occurred. VBA officials noted that reviews could be performed after a claim is completed; however, if an error is found, the regional office might need to rework the claim and provide the veteran with a revised decision. The officials also noted that VBA is working toward modifying its Veterans Benefits Management System to address this issue, but is at the initial planning stage of gathering requirements and could not provide a time frame for completion.

Third, although VBA has developed a more streamlined approach to measuring consistency, VBA officials told us that consistency questionnaires were developed and implemented without any pre-testing, which would have helped the agency determine whether the test questions were appropriate for field staff and were accurately measuring consistency. Pre-testing is a generally accepted practice in sound questionnaire development for examining the clarity of questions or the validity of the questionnaire results. In the course of our review, VBA quality assurance officials noted that they plan to begin pre-testing consistency questionnaires as a part of a new development process. Specifically, after each questionnaire has been developed, two to three quality assurance staff who have claims processing experience, but were not involved in the questionnaire's development, would be targeted to pre-test it. Quality assurance staff responsible for the consistency studies would then adjust the questionnaire if necessary before it is administered widely. While pre-testing was initially slated to begin in July 2014, VBA quality assurance staff now anticipate it will begin in September 2014.

Beyond these implementation shortcomings, staff in each of the four offices we contacted said that several key supports were not sufficiently updated to help quality review staff and claims processors do their jobs efficiently and effectively. Staff at these offices consistently described persistent problems with central guidance, training, and data systems.

Guidance: Federal internal control standards highlight the need for pertinent information to be captured and distributed in a form that allows people to perform their duties efficiently. However, regional office quality review staff said they face challenges locating the most current guidance among all of the information they are provided. Managers or staff at each of the regional offices we contacted said that VBA's policy manuals are outdated. As a result, staff must search numerous sources of guidance to locate current policy, which is time-consuming and difficult. This, in turn, could affect the accuracy with which they decide claims. One office established a spreadsheet to consolidate guidance because the sources were not readily available to claims processors. VBA officials acknowledged that there are several ways it provides guidance to regional offices.
In addition to the existence of relevant regulations and VBA's policy and procedures manual, VBA provides guidance to claims processors through policy and procedures letters, monthly quality calls and notes from these calls, various bulletins, and training letters and other materials maintained on VBA's intranet site. While agreeing that having multiple sources of guidance could be confusing to staff, VBA officials noted they face challenges in updating the policy manual and other available guidance materials to ensure that they are as current as possible. After we reported on this issue, VBA officials noted that they are considering streamlining the types of guidance provided. They also plan to develop a system of consolidated links to guidance documents by alphabetized topic to help claims processors access the information more efficiently; however, VBA officials acknowledge that developing a single repository will be a challenging project and have not yet dedicated adequate resources for this effort.

Training: Staff in the offices we contacted also said that in some cases national training has not been updated to reflect the most current guidance, which in turn makes it difficult to provide claims processors with the information they need to avoid future errors. For example, staff from one regional office noted that training modules on an error-prone issue—Individual Unemployability and related effective dates of benefits—had not been updated to reflect all new guidance, the sources of which included conference calls, guidance letters, and frequently asked questions compiled by VBA's central office. Further, officials at regional offices we contacted expressed concern that VBA limits their flexibility to update out-of-date course materials. In response to these concerns, VBA training officials explained that they are continually updating national training to reflect new guidance, but how long it takes is a function of the extent of the policy change. These officials noted that updating the Individual Unemployability training was particularly delayed because of numerous, unanticipated changes in policy and related guidance that resulted in their setting aside previously updated course materials and starting over. VBA training officials also explained that while VBA does not allow changes to the contents of courses in its catalog, regional offices can propose courses for the catalog, based on their needs identified through quality reviews.

Data systems: Regional office quality review staff also told us that they are required to log errors into three systems or databases that do not "speak to one another" and two lack the capability to fully track error trends, thereby limiting their ability to take corrective actions. At the regional office level, quality assurance information is entered into three different databases or systems. Staff at each of the four offices we contacted said that the Automated Standardized Performance Elements Nationwide system used for tracking individual accuracy for performance management purposes lacks functionality to create reports on error trends by claimed medical issue or reasons for specific types of errors. As a result, three offices maintain separate spreadsheets to identify error trends related to individual accuracy.
Regional office staff also noted that one of the two systems used to track in-process reviews does not help track error trends, for example, by employee, resulting in two offices maintaining additional spreadsheets to track this information. At the national level, VBA central office has made some improvements in reporting and now has the ability to analyze regional office information on errors by medical issue. According to VBA officials, they share this information with regional office managers and quality staff during training calls. VBA officials stated that a planned replacement for its Automated Standardized Performance Elements Nationwide system would have addressed reporting limitations at the local level, but was halted. As of September 2014, VBA did not have a time frame for restarting the process for acquiring a new system.

Finally, VBA's efforts to evaluate the effectiveness of its quality assurance activities have been limited. Specifically, VBA officials told us that although they have not seen an increase in the national accuracy rate in fiscal year 2014, the number of errors related to claim development has declined, demonstrating the success of QRT reviews and training in targeting these errors. Also, VBA identified 13 regional offices whose issue-based accuracy rates improved between the first and third quarters of fiscal year 2014, attributing these improvements to actions taken by quality assurance staff in fiscal year 2014. However, it was not clear from the documentation VBA provided whether and how it monitored the effectiveness of these actions for all regional offices. With respect to consistency studies, VBA also has not evaluated—and lacks plans to evaluate—the efficacy of using consistency questionnaires relative to the more resource-intensive IRR studies. According to a VBA official, the consistency questionnaires have helped identify regional offices and individuals in need of further training on the basis of the percentage of incorrect answers, as well as the need for national training. However, officials could not provide data or evaluations indicating that consistency questionnaires have improved accuracy rates in the areas studied.

VBA officials noted that they are considering a new data system that would combine all local and national quality assurance data—including STAR, in-process reviews, and individual quality reviews—and allow for more robust analyses of root causes of errors. Specifically, they expect the system will show relationships across the results of various quality assurance reviews to determine employee competence with various aspects of claims processing. According to VBA officials, this system would also enable them to more easily evaluate the effectiveness of specific quality assurance efforts. Evaluation can help to determine the "value added" of the expenditure of federal resources or to learn how to improve performance—or both. It can also play a key role in strategic planning and in program management, informing both program design and execution. Continuous monitoring also helps to ensure that progress is sustained over time. However, VBA officials indicated that this proposal is still in the conceptual phase and requires final approval for funding and resources.

VBA's dual approach for measuring accuracy is designed to provide additional information to better target quality improvement efforts, but its methods and practices lack rigor and transparency, thereby undermining the usefulness and credibility of its measures.
By not leveraging a statistician or otherwise following statistical practices in developing accuracy estimates, VBA is producing and relying on inaccurate estimates to make important internal management decisions. Similarly, by using a one-size-fits-all sampling methodology, VBA is unnecessarily expending limited resources that could be used elsewhere. The systematic exclusion of redistributed claims and those moved between offices further calls into question the rigor of its accuracy estimates. Lastly, VBA's reporting of its two accuracy metrics lacks sufficient transparency to help members of Congress and other stakeholders fully understand the differences and limitations of each, and thus may undermine their trust in VBA's reported performance.

VBA has enhanced and coordinated other aspects of its quality assurance framework, but shortcomings in implementation and evaluation detract from their overall effectiveness. For example, although VBA is disseminating the results of national STAR reviews and consistency studies, and local QRTs are using those results to focus related training or guidance to claims processing staff, until centralized guidance is consolidated and streamlined, staff lack ready access to information that will help them prevent errors. Moreover, absent adequate system capabilities to support local quality reviews, QRTs are unable to stop incorrect decisions from being finalized, and may not be aware of error trends that could be mitigated through training or other corrective action. Finally, although some of its quality assurance activities are relatively new, VBA lacks specific plans to evaluate their effectiveness and may miss opportunities to further improve or target these activities to more error-prone areas. In general, unless VBA takes steps to improve the rigor of all its quality assurance methods and practices, VBA may find progress toward achieving its goal of 98 percent accuracy in fiscal year 2015 elusive—especially in the face of challenging workloads, limited resources, and expectations of timely claim decisions.

To help improve the quality of VBA's disability compensation claim decisions, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to:

Leverage appropriate expertise to help VBA do each of the following: weight its accuracy estimates to reflect the sample design for STAR; determine and report the confidence intervals associated with its reported accuracy estimates; and re-examine its approach to calculating the regional office sample size for STAR.

Take steps to ensure that redistributed claims and those moved between regional offices are not underrepresented in the STAR sample.

Increase transparency in explaining how the claim-based and issue-based accuracy rates are calculated as well as their key limitations when publicly reporting these metrics.

Review the multiple sources of policy guidance VBA provides to determine ways to consolidate them or otherwise improve their availability and accessibility for use by staff in regional offices.

Take steps to ensure that any future upgrades to local data systems allow QRTs to pause the claims process when errors are detected and enable QRTs to better track error trends.

Take additional steps to evaluate the effectiveness of quality assurance activities to identify opportunities to improve or better target these activities.

We provided a draft of this report to VA for review and comment, and its written comments are reproduced as appendix III in this report.
VA generally agreed with our conclusions and concurred with all of our recommendations. The agency outlined how it plans to address our recommendations as follows:

Regarding our recommendations to leverage appropriate expertise to improve its measurement and reporting of accuracy, VA stated that a VBA statistician has begun developing a revised sampling methodology that takes into consideration output and claims processing accuracy at each regional office to determine sample sizes. VBA also plans to appropriately weight accuracy estimates and calculate the margins of error based on the revised sampling methodology. VBA intends to report results based on this new methodology beginning in March 2015.

Regarding our recommendation to take steps to ensure that redistributed claims and those moved between regional offices are not underrepresented in the STAR sample, VA stated that VBA's revised sampling methodology will be based on the office completing the claim, and that no claims will be excluded from samples due to changes in jurisdiction. VBA intends to implement this revised sampling methodology by the end of March 2015.

Regarding our recommendation to increase transparency in explaining how the claim-based and issue-based accuracy rates are calculated, VA stated that VBA will describe its sampling, assessment criteria, calculation, and reporting methodologies for claim and issue-level accuracy as part of future performance documents and public reports. VBA anticipates implementing this recommendation by the end of March 2015.

Regarding our recommendation to review the multiple sources of policy guidance VBA provides to regional office staff, VA stated that in September 2014, VBA began improving the availability and accessibility of policy guidance, as well as consolidating references to this guidance. VBA anticipates completing this project by the end of April 2015.

Regarding our recommendation to take steps to ensure that any future upgrades to local data systems allow QRTs to pause the claims process when errors are detected and enable QRTs to better track error trends, VA stated that VBA is designing a new database that will incorporate all types of quality reviews (i.e., regional office reviews, STAR, and consistency studies) and provide VBA with more data analysis capabilities. Although VA did not outline specific steps VBA plans to take to upgrade local data systems so that QRTs may pause the claims process, VBA plans to implement this recommendation by the end of June 2015.

Regarding our recommendation to take additional steps to evaluate the effectiveness of quality assurance activities to identify opportunities to improve or better target these activities, VA stated that VBA's new database will enable VBA to do so by the end of June 2015.

VA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
The objectives of this report were to examine (1) the extent to which the Veterans Benefits Administration (VBA) effectively measures and reports the accuracy of compensation claim decision-making, and (2) whether VBA's other quality assurance activities are coordinated and effective.

To assess VBA's measurement and reporting of the accuracy of compensation claim decision-making, we focused on the STAR process for reviewing disability compensation claims that VBA identifies as rating-related—that is, requiring a decision on the claimant's eligibility for benefits and the monthly benefit amount. We did not review quality assurance over disability compensation claims that did not involve a rating, including adjustments for additional dependents. We also did not review quality assurance efforts involving appealed cases, aspects of which fall under the Board of Veterans' Appeals. Finally, we did not review pension claims, which represent a small portion of VBA's disability benefits workload, because VBA is reviewing its approach to the accuracy assessment of pension claims.

To determine the extent to which STAR appropriately reflects the accuracy of claims, we reviewed VBA policy manuals, the STAR checklist, and other tools used in VBA's STAR review. We interviewed VBA and Office of Inspector General (OIG) officials to learn whether there are claim types that are omitted from STAR review and, if so, the reasons for these omissions. To determine how errors are identified and counted under STAR, we examined the ways in which the checklist and other STAR procedures are used to quantify errors. We visited VBA's office in Nashville, Tennessee, where the STAR reviews are conducted to observe the review process and program methodology in action. We reviewed checklists used to assess accuracy of claims and identified information VBA uses on the basis of these checklists to calculate accuracy rates.

To assess the extent to which VBA uses generally accepted statistical practices to generate accuracy rates, we analyzed VBA data on claims processed and reviewed from October 2012 through September 2013. In analyzing STAR data, we calculated the weighted claim-based annual accuracy rate for each regional office and nationwide. We then calculated the 95 percent confidence intervals associated with these estimated accuracy rates. We applied a statistical sample size formula suitable for use in a stratified random sample and analyzed the differences this approach produced compared to VBA's sample size estimation methodology for regional offices. We assessed the reliability of VBA's STAR data by performing electronic data testing, reviewing related documentation, and interviewing knowledgeable agency officials. We also assessed the reliability of VBA's claim processing data by interviewing knowledgeable agency officials about the data. To electronically assess the reliability of the STAR data, we tested for duplicate benefit records, tested the claim disposition date field to ensure we only analyzed STAR claims from fiscal year 2013, checked the benefit claim end product code to ensure we only included benefit claims with end product codes eligible for inclusion in the STAR accuracy sample, checked for missing data in key analysis variables, and examined the range of values in key variables to check for outliers. We determined that the data were sufficiently reliable for our purposes.
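To convey the kind of sample-size comparison described in this methodology, the sketch below recomputes an annual per-office sample size from each office's workload and prior-year accuracy and compares it with a uniform target of 246 claims. It deliberately uses a simplified simple-random-sample formula with a finite population correction rather than the stratified formula GAO applied, and the workloads, accuracy rates, confidence level, and margin of error are assumptions for illustration.

```python
import math

Z = 1.96                   # 95 percent confidence (assumed)
E = 0.05                   # desired margin of error (assumed)
FIXED_ANNUAL_TARGET = 246  # VBA's uniform annual per-office target

# Hypothetical annual workload and prior-year accuracy for three offices.
offices = {
    "Office A": {"claims_completed": 40000, "prior_accuracy": 0.90},
    "Office B": {"claims_completed": 6000,  "prior_accuracy": 0.95},
    "Office C": {"claims_completed": 15000, "prior_accuracy": 0.88},
}

def required_sample(population, p):
    """Sample size needed to estimate a proportion p in a population of claims,
    using the simple-random-sample formula with a finite population correction."""
    n0 = (Z ** 2) * p * (1 - p) / (E ** 2)        # infinite-population size
    return math.ceil(n0 / (1 + n0 / population))  # finite population correction

total_tailored = 0
for name, office in offices.items():
    n = required_sample(office["claims_completed"], office["prior_accuracy"])
    total_tailored += n
    print(f"{name}: {n} claims needed vs {FIXED_ANNUAL_TARGET} under the fixed target")

print(f"Total with tailored sampling: {total_tailored}")
print(f"Total with the fixed target:  {FIXED_ANNUAL_TARGET * len(offices)}")
```

Under these assumptions, offices with small workloads or high prior accuracy need substantially fewer reviews than a uniform target, which is the intuition behind the finding that VBA could review fewer claims while preserving its desired precision.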
To assess how VBA reports accuracy, we identified and reviewed relevant VBA performance reports, such as VA's Performance and Accountability Report and Aspire dashboard data. We also interviewed VBA officials about the rationale for creating the issue-based accuracy measure, and the agency's plans for reporting its performance on accuracy and consistency. We compared VBA practices with legal requirements for agency performance reporting such as the GPRA Modernization Act of 2010 and related GAO work (e.g., GAO, Managing for Results: GPRA Modernization Act Implementation Provides Important Opportunities to Address Government Challenges, GAO-11-617T, Washington, D.C.: May 10, 2011).

To determine whether VBA's quality assurance activities are coordinated and effective, we reviewed VBA quality assurance policies, reports, and guidance to identify key quality assurance activities. Based on this review, we focused on quality review teams (QRT), which are located in each regional office and responsible for local quality assurance, as well as on VBA's consistency program that is administered by VBA's centralized quality assurance staff. We then examined each activity's function and process by reviewing relevant guidance and policy documents and interviewing central office officials. Specifically: We reviewed VBA policy and procedure documents for quality review teams (QRT) to learn the purposes of, and the information generated by, these efforts. In addition, we interviewed VBA central office and regional office officials to gather their perspectives on any redundancy or gaps between quality assurance efforts. We compared the functions of and information yielded by quality assurance components with the framework laid out in VBA's Quality Assurance Program Plan, as well as standards for internal control in the federal government (see GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1, Washington, D.C.: November 1999). In addition, we interviewed VBA regional office officials to learn about processes QRTs follow and how these procedures may vary across regional offices. We also reviewed and compared VBA criteria for QRT staff, STAR reviewer, and claims processor certification.

We reviewed documents and interviewed VBA officials to learn more about the recent changes to the agency's approach to assessing consistency. More specifically, we explored the rationale for the change from using inter-rater reliability (IRR) studies to using consistency questionnaires. We assessed the development and implementation of the recent consistency questionnaires by, for example, examining VBA's consideration of pre-testing the instruments using generally accepted survey procedures, and how pre-testing may affect the resulting measures of consistency. Finally, to further determine how consistency questionnaires are complementary with other quality assurance efforts, we reviewed VBA's process for determining topics for consistency questionnaires. Specifically, we asked about the methods used to select and prioritize topics, including the extent to which officials use findings from QRTs and STAR. To further determine what and how information is shared among quality assurance components and how this coordination helps to identify problem areas, we interviewed VBA regional office officials to gather their perspectives on how information is shared from STAR, QRT, consistency studies, and regional office compliance visits and how that information-sharing could be improved.
We interviewed officials at the regional level to gain their perspectives on coordination and effectiveness of all of VBA's quality assurance activities. At each office, we spoke with service center managers and quality assurance staff, as well as representatives of local veteran service organizations. The regional offices were selected to reflect a range of characteristics related to: (1) geography (at least one regional office in each of VA's four areas), (2) number of claims processed annually, (3) claim-based accuracy rates, and (4) issue-based accuracy rates. We did not identify specific quality assurance pilots or initiatives being tested in regional offices. We selected 4 of VBA's 57 regional offices for review. We visited the Oakland and Newark regional offices and conducted telephone interviews with Nashville and Waco regional office staff. Table 1 provides information about the regional offices we selected to visit.

This appendix provides additional technical details on ratio estimation for producing issue-based accuracy rates, as well as the audit work we did to re-estimate the regional office Systematic Technical Accuracy Review (STAR) sample sizes using a formula for stratified random probability samples. Because STAR is designed to sample claims and produce an estimate of the claim-based accuracy rate and because the number of medical issues per claim varies, ratio estimation should be used to develop issue-based accuracy rates. Furthermore, during their review of sampled claims, STAR reviewers may find that one or more inferred issues were missed or, conversely, that the review process included one or more issues inappropriately. Thus, the STAR sample of claims must be used to estimate both the total number of issues as well as the number of issues that were processed correctly. With respect to STAR, ratio estimation takes the form shown below.

$$\hat{R} = \frac{\sum_{i}\sum_{j} W_{i,j} \sum_{k=1}^{n_{i,j}} a_{i,j,k}}{\sum_{i}\sum_{j} W_{i,j} \sum_{k=1}^{n_{i,j}} m_{i,j,k}}$$

In the formula, the subscript $i$ represents the regional office, the subscript $j$ represents the month of the fiscal year, $n_{i,j}$ represents the monthly sample size for regional office $i$ in month $j$, $W_{i,j}$ represents the stratum sampling weight for regional office $i$ in month $j$, $a_{i,j,k}$ represents the number of issues adjudicated correctly on claim $k$ in month $j$ and regional office $i$, and $m_{i,j,k}$ represents the total number of issues on claim $k$ in month $j$ and regional office $i$. The ability to calculate a ratio estimate and its associated confidence interval is available in most statistical software applications.

Each month the Veterans Benefits Administration (VBA) selects a random sample of benefit claims within each VA regional office to review under the STAR program. The measure of interest is the estimated percent of claims that were processed correctly by VBA regional office staff. The sample size formula used by VBA to derive the number of claims to select in each VBA regional office is shown below.

$$n = \frac{Z^2 P Q}{E^2}$$

In the formula, $Z$ is the quantile from the Normal distribution for the desired level of confidence. The desired margin of sampling error is denoted by $E$. The assumed percent of accuracy in the population is denoted by $P$, and $Q$ is defined as $Q = (1 - P)$. When VBA's chosen values for these parameters are plugged into the equation, n = 246. This is VBA's target annual sample size for each VA regional office. With 57 regional offices, this translates into 14,022 claims selected nationally per fiscal year in the STAR sample. On a monthly basis, 246/12 = 20.5, which rounds up to 21.
Thus, VBA's monthly sample size for each regional office is 21 claims. By definition, the sample frame for each month is the set of veteran benefit claims completed by the regional office within the previous month. The standard statistical formula for the sample size calculation with a stratified random sample is shown below. We applied this formula to determine an annual total sample size for a regional office in the coming fiscal year using observed monthly accuracy rates and monthly number of claims completed from the previous fiscal year. In turn, this initial sample size is adjusted with the finite population correction factor. The formula for the adjusted sample size is shown below.

In addition to the contact named above, Michele Grgich (Assistant Director), Dana Hopings (Analyst-In-Charge), Carl Barden, James Bennett, David Chrisinger, Alexander Galuten, Joel Green, Avani Locke, Vernette Shaw, Almeta Spencer, Walter Vance, and Greg Whitney made key contributions to this report.

With a backlog of disability compensation claims, VBA faces difficulties in improving the accuracy and consistency of the claim decisions made by staff in its 57 regional offices. To help achieve its goal of 98 percent accuracy by fiscal year 2015, VBA recently implemented a new way of measuring accuracy and changed several quality assurance activities to assess the accuracy and consistency of decisions and to provide feedback and training to claims processors. GAO was asked to examine VBA's quality assurance activities. This report evaluates (1) the extent to which VBA effectively measures and reports the accuracy of its disability compensation claim decisions and (2) whether VBA's other quality assurance activities are coordinated and effective. GAO analyzed VBA claims and STAR accuracy data from fiscal year 2013 (the most recent fiscal year for which complete data are available); reviewed relevant federal laws, VBA guidance, and other documents relevant to quality assurance activities; and interviewed VBA staff from headquarters and four VBA regional offices (selected to achieve variety in geography, workload, and accuracy rates), as well as veteran service organization officials.

The Veterans Benefits Administration (VBA)—within the Department of Veterans Affairs—measures and reports the accuracy of its disability compensation claim decisions in two ways: (1) by claim and (2) by disabling condition, though its approach has limitations. When calculating accuracy rates for either measure through its Systematic Technical Accuracy Review (STAR), VBA does not always follow generally accepted statistical practices, resulting in imprecise performance information. For example, VBA does not adjust its accuracy estimates to reflect that it samples the same number of claims for review from each regional office—despite their varying workloads—and thus produces imprecise estimates of national and regional accuracy. Further, VBA reviews about 39 percent (over 5,000) more claims nationwide than is necessary to achieve its desired precision in reported accuracy rates, thereby diverting limited resources from other important quality assurance activities, such as targeted reviews of error-prone cases. In addition to issues with its statistical practices, VBA's process for selecting claims for STAR review creates an underrepresentation of claims that are moved between regional offices, which may inflate accuracy estimates because these claims have had historically lower accuracy rates.
Finally, VBA has not clearly explained in public reports the differences in how its two accuracy measures are calculated or their associated limitations, as suggested by best practices for federal performance reporting. VBA has taken steps to enhance and coordinate its other quality assurance activities, but GAO found shortcomings in how VBA is implementing and evaluating these activities. To improve local accuracy, VBA created regional office quality review teams (QRTs) with staff dedicated primarily to performing local accuracy reviews. QRTs assess individual claims processor performance and conduct special reviews to forestall certain types of errors. In addition, VBA began using questionnaires for assessing decision-making consistency, which are more efficient to administer than VBA's prior approach to conducting consistency studies. VBA also coordinates quality assurance efforts by disseminating national accuracy and consistency results, trends, and related guidance to regional offices for use in training claims processors. Further, VBA uses STAR results to inform other quality assurance activities, such as focusing certain QRT reviews on commonly made errors. However, GAO identified implementation shortcomings that may detract from the effectiveness of VBA's quality assurance activities. For example, contrary to accepted practices for ensuring the clarity and validity of questionnaires, VBA did not pre-test its consistency questionnaires to ensure the clarity of questions or validity of the expected results, although VBA officials indicated that they plan to do so for future questionnaires. In contrast with federal internal control standards that call for capturing and distributing information in a form that allows people to efficiently perform their duties, staff in the four regional offices that we visited had trouble finding the guidance they needed to do their work, which could affect the accuracy as well as the speed with which staff decide claims. Federal standards also call for knowing the value of efforts such as quality assurance activities and monitoring their performance over time; however, VBA has not evaluated the effect of its special QRT reviews or certain consistency studies on improving targeted accuracy rates, and lacks clear plans to do so. GAO is making eight recommendations to VA to improve its measurement and reporting of accuracy, review the multiple sources of policy guidance available to claims processors, enhance local data systems, and evaluate the effectiveness of quality assurance activities. VA concurred with all of GAO's recommendations. |
Under the national military strategy, the military services are required to maintain enough ammunition for wartime needs and for peacetime needs, such as training. The Defense Planning Guidance lays out general guidelines for the services to determine how much ammunition they need to conduct operations under the strategy. Ammunition that exceeds these requirements is to be shared among the services or disposed of through sale to other nations, recycling, or demilitarization. In 1977, the Army, through its Operations Support Command (formerly Industrial Operations Command), assumed single manager responsibility for managing, storing, and disposing of the services' ammunition. The Command's Defense Ammunition Center provides the Command and the military services a variety of ammunition-related services, including training, technical assistance, and logistics support. The Army demilitarizes excess ammunition at its ammunition depots, plants, and centers. The Army has used open burning and detonating processes as well as more environmentally friendly processes to demilitarize excess ammunition. Open burning and detonating processes, which may release airborne gases, particles, and other contaminants that are carried downwind of the demilitarization sites, have been the topic of public concerns regarding possible health risks to civilian populations. Environmentally friendly processes use demilitarization technologies that do not release contaminants into the atmosphere. The government-owned locations that demilitarize excess ammunition using environmentally friendly processes are shown in figure 1.

During the 1980s, the amount of excess ammunition needing to be demilitarized was generally stable, holding at about 100,000 tons. However, in the early 1990s, with the end of the Cold War and other worldwide changes, a general reshaping of military resources and budgets began as the United States shifted from a strategy of preparing for a global war to a strategy of preparing for regional conflicts and crises. As a consequence, the services' ammunition requirements were significantly reduced, and by 1993 the Operations Support Command's reported backlog of ammunition awaiting demilitarization was 354,000 tons. Because excess and needed ammunition were being stored together, the Command was concerned that the excess ammunition could impede access to needed ammunition and hinder the Command's ability to effectively support contingency operations. To address this concern, Congress increased the amount of funding available for ammunition demilitarization from $35 million in fiscal year 1993 to almost $71 million in fiscal year 1994 and to an average of nearly $92 million annually in fiscal years 1995-2000. In addition, the Command set a goal of reducing the backlog to 100,000 tons by 2004. In October 1998, the Army extended the deadline for its goal of reducing the demilitarization stockpile to less than 100,000 tons from fiscal year 2004 to the end of fiscal year 2010.

On May 10, 1993, the Chairman of the Senate Appropriations Subcommittee on Defense requested that DOD increase its use of environmentally safe destruction processes and technologies and phase out its use of open burning and detonating destruction processes as soon as possible. The Chairman also requested that DOD look to the private sector for environmentally friendly processes that could be used to help demilitarize excess ammunition.
In 1994, the Senate Appropriations Committee directed the Army to accelerate, where possible, the award of contracts that make use of environmentally friendly demilitarization processes. The Operations Support Command enacted a variety of initiatives to help the demilitarization program respond to the congressional requests. These initiatives included optimizing work assigned to government facilities; increasing the use of environmentally friendly technology at government facilities to recover, recycle, and reclaim usable elements of ammunition; and awarding contracts to commercial firms that used environmentally friendly processes to demilitarize portions of the stockpile.

DOD's reported stockpile of excess ammunition has grown, and it does not include all excess ammunition; as a result, the government's financial liability for demilitarizing excess ammunition is understated. To reduce the stockpile, the Operations Support Command enacted a variety of initiatives, and for fiscal years 1993 through 2000, it demilitarized 745,000 tons of excess ammunition from the stockpile. Despite these efforts, the reported stockpile grew from 354,000 tons in 1993 to 493,000 tons at the end of 2000 and is projected to be at 403,000 tons in 2004 (see fig. 2). According to the Operations Support Command, multiple factors affect the number of tons in the reported stockpile from year to year. These factors include transfers of ammunition from the stockpile to meet critical needs of the military services, the amount of demilitarization funding received from Congress, and the amount of excess ammunition that gets turned in to the stockpile. For example, the increase in the stockpile in fiscal year 1999 was largely due to the 289,000 tons entering the stockpile that year. According to the Command, the downward trend for fiscal years 2001 through 2004 is due to a combination of forecasted increases in demilitarization funding and forecasted decreases in quantities of ammunition becoming excess.

Several factors outside the Command's control contributed to the growth of the stockpile: downsizing of forces, which resulted in the need for less ammunition; replacing weapon delivery systems, which created obsolete ammunition; replacing older ammunition with newer, better versions, which created obsolete ammunition; transferring certain ammunition that was not planned for the stockpile (such as non-self-destruct antipersonnel land mines) to the stockpile; and reducing reliance on open burning and detonating processes to demilitarize ammunition in conjunction with public pressure to use more environmentally friendly methods. The Operations Support Command recognized that these factors would prevent it from meeting its goal of reducing the stockpile to 100,000 tons by 2004. Current Command projections show that the stockpile will instead be at about 403,000 tons by 2004. In October 1998, the Army extended the deadline for its goal of reducing the demilitarization stockpile to less than 100,000 tons from fiscal year 2004 to the end of fiscal year 2010.

In addition, the Operations Support Command's reported stockpile does not include all excess ammunition needing demilitarization. The reported stockpile only includes excess ammunition located at storage sites belonging to the Command (see fig. 1). Our analysis of the services' inventory records showed that there are additional quantities of excess ammunition needing demilitarization that were not included in the demilitarization stockpile.
Specifically, we identified additional demilitarization liabilities associated with 94,030 tons of ammunition located overseas and 54,770 tons of unusable or unneeded ammunition at other military storage sites in the United States. Army Materiel Command officials explained that, in managing the demilitarization program, the Army estimates what ammunition is expected to require demilitarization in a reasonable time. Therefore, to plan and budget, it uses the quantities in the reported demilitarization stockpile plus forecasts of excess ammunition it expects the services to turn in to the stockpile. The officials agreed that the services' inventory records showed additional quantities of excess ammunition needing demilitarization that were not included in the demilitarization stockpile and estimated that if all known and forecasted excess ammunition were recognized, the demilitarization liability for the Army could be as much as 2.9 million tons. The Command estimates the cost to demilitarize a ton of ammunition to be about $1,034. Using this estimate, the disposal liability could potentially be as great as $3 billion, but DOD's financial statement does not reflect any demilitarization liability even though federal financial accounting standards require recognition and reporting of liabilities associated with disposal. DOD's omission of its demilitarization liability illustrates the financial management reforms still needed, an issue on which we testified before the Government Management, Information, and Technology Subcommittee of the House Committee on Government Reform, noting that DOD still faces significant challenges in implementing the federal accounting standards that require recognition and reporting of liabilities associated with disposal.

In recent years, the Operations Support Command has worked to allocate 50 percent of its excess ammunition demilitarization budget to contractors that used environmentally friendly demilitarization processes. However, at the same time the Command retained and underutilized environmentally friendly demilitarization capabilities at government facilities. The Army could have benefited from examining whether it was maximizing its demilitarization capabilities with the most cost-effective mix of public and private environmentally friendly capabilities. We noted that in some instances the Army incurred additional costs in contracting with the private sector for ammunition demilitarization and retained underutilized environmentally friendly demilitarization processes at its facilities.

From 1993 to 1996 the Operations Support Command awarded 18 demilitarization contracts to private firms to demilitarize 76,527 tons of ammunition at a cost of about $48.2 million. During this 4-year period, the private sector received about 16 percent of the Command's demilitarization budget. Although congressional instructions did not specify how much demilitarization work should go to the private sector, in February 1996, the Army Materiel Command required that the demilitarization budget for 1997 be split 50/50 between government facilities and private companies. Army Materiel Command officials said the directive was issued to force the Operations Support Command to move a larger portion of its demilitarization workload to private firms and that the 50/50 split seemed appropriate (even though the government facilities having environmentally friendly processes were being underused at the time).
The Operations Support Command adopted this policy for fiscal year 1997 and subsequent years. While the actual ratio varied each year, over time the Command planned to spend its ammunition demilitarization funds equally between government facilities and private firms. For fiscal years 1997 and 1998, the Command awarded 21 contracts to private companies to demilitarize 56,739 tons of ammunition at a cost of about $45.8 million. During this 2-year period, the private sector received about 25 percent of the Command’s demilitarization budget. To eliminate the administrative burden associated with awarding and monitoring 21 contracts, beginning in fiscal year 2000 the Operations Support Command awarded two 5-year contracts, potentially worth an estimated total of $300 million, to General Dynamics Armament Systems and PB/Nammo Demil LLC. Subsequently, General Dynamics Armament Systems was awarded a task order under the contract to demilitarize 12,000 tons of ammunition at a price of $34.8 million for the first year and PB/Nammo Demil LLC was awarded a task order under the contract to demilitarize 12,000 tons of ammunition at a price of $25.9 million for the first year. PB/Nammo Demil LLC entered into agreements with three government facilities for a portion of this work. In addition, the firm subcontracted with other companies in the United States and overseas for the remainder of the work. According to Army Materiel Command and Operations Support Command officials, when implementing congressional direction to involve the private sector in environmentally friendly demilitarization of excess ammunition, the Army did not emphasize cost-effectiveness in terms of dollars saved and costs avoided. As a result, the Army incurred additional costs in contracting with the private sector for ammunition demilitarization. For example, according to the contracts, the Command is required to pay for packaging, crating, handling, and transportation costs to move ammunition from a government facility to the contractor demilitarization site. The Command considers these costs necessary to doing business with contractors. Since 1997 the Operations Support Command paid from $8 million to $14 million a year for packaging, crating, and handling excess ammunition and for transporting the ammunition, mostly from government facilities to contractor sites for demilitarization using environmentally friendly demilitarization processes. According to Command officials, a small percentage was spent to move excess ammunition from one government facility to another, but the majority of these expenditures were for moving ammunition from government sites to contractor sites. In some cases, government facilities with excess ammunition in storage had environmentally friendly demilitarization processes and facilities that could have been used to demilitarize the ammunition without incurring the shipping cost, leaving the funds available to demilitarize additional ammunition. For example, at one facility we visited, the Command paid $50,000 during fiscal year 2000 to ship excess ammunition from a storage site at the McAlester Army Ammunition Plant to contractor demilitarization sites when the McAlester plant had environmentally friendly capabilities to demilitarize the ammunition. The Command could have avoided $50,000 in shipping costs by allocating this work to McAlester. 
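The contract totals and tonnages cited above can be expressed as rough per-ton figures to illustrate the scale of the costs involved. The short sketch below, written in Python purely for illustration, is our own calculation based solely on the totals reported in this statement; the labels and variable names are ours, and because the averages do not account for the types or complexity of ammunition covered by each award, they should not be read as a like-for-like cost comparison.

```python
# Illustrative per-ton averages derived from contract totals cited in this report.
# These simple averages ignore differences in ammunition types and contract scope,
# so they indicate rough magnitude only, not a like-for-like comparison.
contract_awards = {
    "FY1993-96 awards (18 contracts)":        (48_200_000, 76_527),
    "FY1997-98 awards (21 contracts)":        (45_800_000, 56_739),
    "FY2000 task order (General Dynamics)":   (34_800_000, 12_000),
    "FY2000 task order (PB/Nammo Demil LLC)": (25_900_000, 12_000),
}

for award, (total_cost, tons) in contract_awards.items():
    print(f"{award}: about ${total_cost / tons:,.0f} per ton")
```

Even as rough averages, such figures help illustrate why this report emphasizes weighing cost-effectiveness when deciding the mix of public and private demilitarization capacity.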
Other costs were incurred under the Operations Support Command’s two contracts awarded in May 1999 that could have been avoided had the work been assigned to a government facility. For example, in one instance where the Command contracted for ammunition demilitarization, the contractor, in turn, entered into agreements with three government facilities to have them perform the demilitarization work. In essence, the government paid a contractor to have the ammunition demilitarized by government employees. This occurred when the contractor entered into three separate agreements for demilitarization services with government facilities at McAlester, Oklahoma; Crane, Indiana; and Tooele, Utah. The total value of the agreements for the first year was $8.6 million (including about $1.9 million to upgrade the demilitarization capabilities at the three government facilities). In addition, information provided by the contractor and by one government facility indicates that one government facility could have demilitarized the ammunition for less cost than was incurred by the Command’s contract with this firm. The Operations Support Command attributed the decision not to use the available environmentally friendly capacity at government facilities for demilitarization purposes to the Army Materiel Command’s interpretation of congressional instructions to use the private sector to destroy excess ammunition and the Materiel Command’s mandate that 50 percent of the demilitarization budget go to private firms. While increasing reliance on contracted demilitarization, the Operations Support Command has retained environmentally friendly processes that are not being fully utilized. Projections for fiscal year 2001 show that 16,550 tons of incineration capacity at four government facilities will not be used. These same projections show that government facilities will operate at only 20 percent of their overall capacity to recover and reuse 81,100 tons of excess ammunition (see table 1). Currently, the Army is conducting a congressionally mandated study of potential alternative disposal methods that do not release contaminants into the atmosphere. The study will address the possibility of phasing out open burning and detonating processes in favor of environmentally friendly processes, technologies currently in existence and under development, and the cost and feasibility of constructing facilities employing these technologies. According to Operations Support Command officials, the results of this study, which will not be available until September 2001, could potentially lead to expanding the government’s environmentally friendly capabilities. DOD’s conventional ammunition policies and procedures require the military services to routinely check excess ammunition awaiting demilitarization before purchasing new ammunition. Available information indicates that the stockpile may contain ammunition that may be usable for training purposes, but more analysis is required to evaluate the condition of the ammunition. Although neither the services nor the Operations Support Command systematically compares the contents of the excess ammunition stockpile to the training needs of the active and reserve forces, the Command checks the stockpile for such items if a critical shortage occurs or if the needed ammunition cannot be purchased. For example, in the last 2 years quantities of 155-millimeter, 105-millimeter, and 30-millimeter ammunition have been pulled from the stockpile and given to the active forces. 
Department of Defense 5160.65-M, Single Manager for Conventional Ammunition (Implementing Joint Conventional Ammunition Policies and Procedures), requires the military services to routinely check all alternative sources before purchasing ammunition for their weapon systems. Excess ammunition awaiting demilitarization in the stockpile is an alternative source. However, the Command believes that a routine comparison of planned purchases to the stockpile is unnecessary because (1) the services declined the excess ammunition when it was offered to them before being placed in the stockpile and (2) the Command would have to spend money to evaluate the condition of the excess ammunition. Also, a Command official responsible for managing the stockpile stated that a 1996 Army analysis of the excess ammunition in the stockpile found that there were no items in the stockpile that could be used for training. According to a Defense Ammunition Center official, the services’ needs may change over time, and usable excess ammunition potentially could be recalled from the stockpile to prevent concurrent procurement and demilitarization. Our analysis showed that the Army has recently purchased 10 types of ammunition, particularly small caliber ammunition, when quantities of the same items were also in the stockpile and identified in the Army’s records as being of sufficient quality (either new or in like-new condition) for training purposes. Examples of excess ammunition that the Army purchased in fiscal year 2000 for training exercises, at the same time there were quantities in the stockpile reported to be in usable condition, are shown in table 2.

A disposal liability of potentially up to $3 billion is not reflected in DOD’s financial statements. If all excess ammunition is not accurately reflected in DOD’s financial statements and made available for congressional budget deliberations, then DOD and Congress cannot clearly understand the present and future financial liability associated with demilitarizing the excess ammunition. Additionally, indications are that the allocation of 50 percent of the excess ammunition demilitarization budget to contractors may have increased the cost of demilitarizing excess ammunition. Also, excess capacity involving environmentally friendly demilitarization processes exists at government facilities. While it may be appropriate to rely on the private sector to enhance demilitarization capabilities, the continued use of the private sector to demilitarize excess ammunition at the same time the government facilities have excess capacity raises the question of whether the Army is sponsoring too much capacity. At the same time, an on-going study examining the potential to expand environmentally friendly demilitarization capabilities at government facilities raises additional questions about the appropriate mix of public/private sector capacity needed to demilitarize excess ammunition. Whether excess ammunition in the demilitarization stockpile could be used for training needs is unclear because the Command does not systematically compare the contents of its stockpile to the training needs of the active and reserve forces. DOD requires such a comparison before purchasing ammunition.
Records indicate that the Army is buying ammunition when potentially usable ammunition is available in the stockpile, suggesting that checking the stockpile could be cost-effective by avoiding concurrent procurement and demilitarization and could put the Army in a better position to buy what it actually needs.

To improve the financial reporting, economy, and efficiency of demilitarizing excess ammunition, we recommend that the Secretary of Defense require the Secretary of the Army to

1. identify and include the total liability (domestic and overseas) associated with demilitarizing excess ammunition in the Department’s annual consolidated balance sheet;

2. develop a plan in consultation with Congress that includes procedures for assessing the appropriate mix of public/private sector capacity needed to demilitarize excess ammunition and the cost-effectiveness of using contractors versus government facilities to demilitarize excess ammunition, with specific actions identified for addressing the capacity issue; and

3. comply with DOD’s policy to routinely compare planned purchases of ammunition for training with usable ammunition in the stockpile and require the single manager for conventional ammunition to prepare periodic reports to the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, documenting such comparisons and showing the quantities and types of ammunition reclaimed from the stockpile.

The Director of Strategic and Tactical Systems in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics provided written comments on a draft of this report. DOD’s comments are reprinted in appendix II. DOD also provided separate technical comments that we have incorporated in this report where appropriate. DOD generally agreed with our recommendations and pointed out that it is taking actions that it believes will address them. However, additional actions will likely be needed to fully address the recommendations.

In commenting on our recommendation for dealing with the liability associated with demilitarizing excess ammunition, DOD stated that determining an accurate demilitarization liability is a difficult task and that it believes that a reasonable estimate of the demilitarization stockpile plus the forecast of newly generated excess ammunition expected to be added to the stockpile for the next 5 years should meet the intent of our recommendation. However, this proposal does not recognize a liability for excess ammunition overseas (even though a portion of the demilitarization budget each year is used to demilitarize ammunition overseas), nor does it recognize any demilitarization liability for excess Army-owned war reserve ammunition, excess retail ammunition, or excess ammunition not stored at an Army installation. Therefore, we believe DOD should recognize the total liability associated with demilitarizing excess ammunition rather than only a portion of it, and we have revised our recommendation accordingly.

In commenting on our recommendation for a plan and procedures for assessing the public/private sector mix of demilitarization capacity, DOD stated that the Army is preparing a report to Congress, due September 30, 2001, on the feasibility of replacing open burning and detonation with closed disposal technologies. DOD said that this report could also be used to address the mix of public/private sector capacity needed to demilitarize excess ammunition.
DOD also stated that the Army has a computer-modeling tool that can be used to identify the costs associated with changing the public/private sector percentages. We recognize that the report and computer-modeling tool can provide information that the Army can use to determine the mix of public/private sector capacity needed to demilitarize excess ammunition, but DOD’s response does not address the substance of our recommendation, which is to state how it plans to rationalize the public/private mix and minimize excess capacity at government facilities. Accordingly, we have made no change to our recommendation.

Our draft report included a recommendation that DOD determine the feasibility of establishing a process to periodically compare planned purchases of ammunition for training with usable ammunition in the stockpile. DOD stated that an existing regulation and procedures require the Army to screen excess ammunition for use prior to procurement. However, our work showed that the Operations Support Command checks the stockpile for ammunition only if a critical shortage occurs or if the needed ammunition cannot be purchased. This suggests the need for additional oversight to ensure such assessments occur on a more frequent basis. Therefore, we have revised our recommendation to require the Army to comply with DOD’s policy to routinely compare planned purchases of ammunition for training with usable ammunition in the stockpile and to require the single manager for conventional ammunition to prepare periodic reports documenting such analyses and showing the quantities and types of ammunition reclaimed from the stockpile.

We are sending copies of this report to the appropriate congressional committees; the Honorable Donald H. Rumsfeld, Secretary of Defense; the Acting Secretary of the Army, Joseph W. Westphal; the Acting Secretary of the Navy, Robert B. Pirie, Jr.; the Acting Secretary of the Air Force, Lawrence J. Delaney; and the Director of the Office of Management and Budget, Mitchell E. Daniels, Jr. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III.

To determine the extent to which the excess ammunition stockpile has been reduced and whether the liability associated with excess ammunition has been fully identified, we reviewed the composition of the Army’s reported stockpile of excess ammunition and obtained inventory records showing the condition and location of the services’ ammunition. We also reviewed policies and procedures governing the demilitarization of excess ammunition and the requirements for reporting the financial liability of ammunition awaiting demilitarization. We met with officials and performed work at the U.S. Army Operations Support Command, Rock Island Arsenal, Rock Island, Illinois; the U.S. Army Defense Ammunition Center, McAlester, Oklahoma; Army, Navy, Marine Corps, and Air Force Headquarters, Washington, D.C.; and the Office of the Under Secretary of Defense (Acquisition and Technology), Washington, D.C.

To assess the extent to which the Army used contractors to demilitarize excess ammunition and the impact of doing so on the utilization of environmentally friendly demilitarization processes at government facilities, we met with officials at the Operations Support Command; McAlester Army Ammunition Plant, McAlester, Oklahoma; and PB/Nammo Demil LLC, New York, N.Y.
We selected the McAlester plant because it was one of three government facilities having an agreement with PB/Nammo Demil LLC to perform demilitarization work. We reviewed the Command’s contracts with private firms and assessed packaging, crating, and handling expenses associated with transporting ammunition to contractor sites. We also obtained and reviewed contractor agreements with government facilities to have them perform the demilitarization work and evaluated information provided by the contractor and by one government facility to determine if the government facility could have demilitarized the ammunition for less cost than was incurred by the Command’s contract with this firm. We obtained Army data on the government facilities’ capabilities to demilitarize excess ammunition and compared the Army’s demilitarization plans to these capabilities. This allowed us to identify and calculate excess capacity situations. We also obtained information from the Army Materiel Command and the Operations Support Command involving an on-going study of the possibility of phasing out open burning and detonating processes in favor of environmentally friendly processes, technologies currently in existence and under development, and the cost and feasibility of constructing facilities employing these technologies.

To determine the feasibility of using excess ammunition for training needs, we met with officials at the U.S. Defense Ammunition Center and discussed the Center’s capability to compare the contents of the excess ammunition stockpile to the services’ needs for ammunition to perform training operations. We compared the services’ fiscal year 2000 training ammunition purchases to ammunition awaiting disposal to verify that ammunition matching the services’ training needs is located in the stockpile. We did not look at opportunities to dispose of excess ammunition in the stockpile through sale to other nations. We used the same computer programs, reports, records, and statistics that DOD and the military services had used to manage excess ammunition. For example, we used Operations Support Command’s inventory records to show the reported amounts of excess ammunition in the stockpile. We did not independently determine the reliability of all these sources. For historical perspective and illustrations of past problems, we reviewed the results of prior Defense studies and audit reports. We performed our review from August 2000 through February 2001 in accordance with generally accepted government auditing standards.

The following are GAO’s comments on the Department of Defense’s (DOD) letter dated March 26, 2001.

1. DOD’s comment and our evaluation are included in the body of the report.

2. The examples of costs that could have been avoided that we cite in our report relate to contracts awarded for fiscal year 2000. The examples illustrate the need for DOD to develop a plan that includes procedures for assessing the appropriate mix of public/private sector capacity by considering the cost-effectiveness of using contractors versus government facilities to demilitarize excess ammunition. Such a plan would help better ensure that cost-effective decisions are made. Our report also recognized that factors beyond the Army’s control have affected its efforts to demilitarize excess ammunition. Further, our recommendation states that the plan should be developed in consultation with Congress.
3. Our analysis suggests the Army has excess environmentally friendly demilitarization capacity, considering the capacity available at government facilities and under contract. This suggests the need to rationalize the capacity being supported by DOD.

4. The example cited by DOD illustrates the need for it to examine why the Army continues to incur costs to maintain 24,000 tons of capacity at this site with only 2,200 tons of ammunition available on site to be demilitarized.

5. Our analysis indicates that in recent years DOD’s funding plan for ammunition demilitarization has significantly exceeded its funding level. The intent behind the plan called for in our recommendation is not to arbitrarily restrict the private sector’s percentage of the work. Rather, we believe there is a need for DOD to develop a plan and business case analysis of the appropriate mix of public/private sector capacity by considering the cost-effectiveness of using contractors versus government facilities to demilitarize excess ammunition.

6. Our report focuses on excess capacity involving environmentally friendly demilitarization processes that exists at government facilities and highlights that the Army has not determined the most cost-effective mix of public/private sector capacity for environmentally friendly demilitarization methods. Our report recognizes DOD’s efforts to decrease emphasis on open burning and detonating methods.

7. Our report does not state that there is no apparent benefit to using private industry. Our report stresses the need for a greater emphasis on cost-effectiveness in deciding the appropriate mix of public and private environmentally friendly capabilities instead of assigning a predetermined amount of demilitarization funds to the private sector as the Army presently does.

8. The applicable section of the report was modified to include DOD’s position that its regulations and procedures require the Army to screen excess ammunition for use prior to procurement.

9. Our analysis was based upon data from the Army’s Defense Ammunition Center, which we shared with the Army during the course of our review. Our report emphasized that potentially usable ammunition was available in the stockpile and recognized that further analysis was needed to determine the usability of the excess ammunition.

In addition to those named above, Jimmy Palmer, Joanna McFarland, and John Brosnan made key contributions to this report.

This report reviews the Department of Defense’s (DOD) management practices for demilitarizing excess ammunition. Specifically, GAO evaluates (1) the extent to which the excess ammunition stockpile has been reduced and whether the liability associated with excess ammunition has been fully identified, (2) the Army’s reliance on contracted demilitarization and the impact of doing so on government facilities that use similar environmentally friendly processes, and (3) the feasibility of using excess ammunition for U.S. training needs. GAO found that DOD’s reported stockpile of excess ammunition has grown rather than decreased, rising from 354,000 tons in 1993 to 493,000 tons at the end of 2000. In addition, the reported stockpile does not include all excess ammunition, which understates DOD’s ultimate liability for demilitarizing ammunition. In recent years, the Army has devoted 50 percent of its excess ammunition demilitarization budget to contractors that use environmentally friendly demilitarization processes.
Although a congressional directive resulted in greater emphasis on contractor demilitarization, the Army began and later expanded this effort without considering the effect it would have on government facilities. With increased contractor demilitarization, the Army has retained and underutilized environmentally friendly demilitarization capabilities in government facilities. Finally, some excess ammunition potentially could be used to meet training needs, but further analysis by the Army is needed to fully evaluate the potential.
DOE is responsible for a diverse set of missions, including nuclear security, energy research, and environmental cleanup. These missions are managed by various organizations within DOE and largely carried out by management and operating (M&O) contractors at DOE sites. According to federal budget data, NNSA is one of the largest organizations in DOE, overseeing nuclear weapons and nonproliferation-related missions at its sites. With a $10.5 billion budget in fiscal year 2011—nearly 40 percent of DOE’s total budget—NNSA is responsible for providing the United States with safe, secure, and reliable nuclear weapons in the absence of underground nuclear testing and maintaining core competencies in nuclear weapons science, technology, and engineering.

Under DOE’s long-standing model of having unique M&O contractors at each site, management of its sites has historically been decentralized and, thus, fragmented. Since the Manhattan Project produced the first atomic bomb during World War II, NNSA, DOE, and predecessor agencies have depended on the expertise of private firms, universities, and others to carry out research and development work and efficiently operate the facilities necessary for the nation’s nuclear defense. DOE’s relationship with these entities has been formalized over the years through its M&O contracts—agreements that give DOE’s contractors unique responsibility to carry out major portions of DOE’s missions and apply their scientific, technical, and management expertise. Currently, DOE spends 90 percent of its annual budget on M&O contracts, making it the largest non-Department of Defense contracting agency in the government. The contractors at DOE’s NNSA sites have operated under DOE’s direction and oversight but largely independently of one another. Various headquarters and field-based organizations within DOE and NNSA develop policies, and NNSA site offices, collocated with NNSA’s sites, conduct day-to-day oversight of the M&O contractors and evaluate the contractors’ performance in carrying out the sites’ missions.

As we have reported since 1999, NNSA has not had reliable enterprise-wide budget and cost data, which potentially increases risk to NNSA’s programs. Specifically:

In July 2003 and January 2007, we reported that NNSA lacked a planning and budgeting process that adequately validated contractor-prepared cost estimates used in developing annual budget requests. Establishing this process was required by the statute that created NNSA—Title 32 of the National Defense Authorization Act for Fiscal Year 2000. In particular, NNSA had not established an independent analysis unit to review program budget proposals, confirm cost estimates, and analyze budget alternatives. At the request of the Subcommittee on Energy and Water Development, Senate Committee on Appropriations, we are currently reviewing NNSA’s planning and budgeting process, the extent to which NNSA has established criteria for evaluating resource trade-offs, and challenges NNSA has faced in validating its budget submissions. We expect to issue a report on this work later this year.

In June 2010, we reported that NNSA could not identify the total costs to operate and maintain essential weapons activities’ facilities and infrastructure. Furthermore, we found that contractor-reported costs to execute the scope of work associated with operating and maintaining these facilities and infrastructure likely significantly exceeded the budget for this program that NNSA justified to Congress.
We reported in February 2011 that NNSA lacked complete data on (1) the condition and value of its existing infrastructure, (2) cost estimates and completion dates for planned capital improvement projects, (3) shared-use facilities within the nuclear security enterprise, and (4) critical human capital skills in its M&O contractor workforce that are needed to maintain the Stockpile Stewardship Program. As a result, NNSA does not have a sound basis for making decisions on how to most effectively manage its portfolio of projects and other programs and will lack information that could help justify future budget requests or target cost savings opportunities. We also found that it was difficult to compare or quantify total savings across sites because guidance for estimating savings is unclear, the methods used to estimate savings vary between sites, and future federal budgets are uncertain.

The administration plans to request $88 billion from Congress over the next decade to modernize the nuclear security enterprise and ensure that base scientific, technical, and engineering capabilities are sufficiently supported and the nuclear deterrent can continue to be safe, secure, and reliable. To adequately justify future presidential budget requests, NNSA must accurately identify these base capabilities and determine their costs. Without this information, NNSA risks being unable to identify return on its investment or opportunities for cost savings or to make fully informed decisions on trade-offs in a resource-constrained environment.

NNSA, recognizing that its ability to make informed enterprise-wide decisions is hampered by the lack of comprehensive data and analytical tools, is considering the use of computer models—quantitative tools that couple data from each site with the functions of the enterprise—to integrate and analyze data to create an interconnected view of the enterprise, which may help to address some of the critical shortcomings we identified. In July 2009, NNSA tasked its M&O contractors to form an enterprise modeling consortium. NNSA stated that the consortium is responsible for leading efforts to acquire and maintain enterprise data, enhance stakeholder confidence, integrate modeling capabilities, and fill in any gaps that are identified. The consortium has identified areas in which enterprise modeling projects could provide NNSA with reliable data and modeling capabilities, including capabilities on infrastructure and critical skills needs. In addition, we recently observed progress on NNSA’s development of an Enterprise Program Analysis Tool that should give NNSA greater insight into its sites’ cost reporting. The Tool also includes a mechanism to identify when resource trade-off decisions must be made, for example, when contractor-developed estimates for program requirements exceed the budget targets provided by NNSA for those programs. A tool such as this one could help NNSA obtain the basic data it needs to make informed management decisions, determine return on investment, and identify opportunities for cost savings.

A basic tenet of effective management is the ability to complete projects on time and within budget. However, for more than a decade and in numerous reports, we have found that NNSA has continued to experience significant cost and schedule overruns on its major projects, principally because of ineffective oversight and poor contractor management.
Specifically:

In August 2000, we found that poor management and oversight of the National Ignition Facility construction project at Lawrence Livermore National Laboratory had increased the facility’s cost by $1 billion and delayed its scheduled completion date by 6 years. Among the many causes for the cost overruns or schedule delays, DOE and Livermore officials responsible for managing or overseeing the facility’s construction did not plan for the technically complex assembly and installation of the facility’s 192 laser beams. They also did not use independent review committees effectively to help identify and correct issues before they turned into costly problems. Similarly, in April 2010, we reported that weak management by DOE and NNSA had allowed the cost, schedule, and scope of ignition-related activities at the National Ignition Facility to increase substantially. Since 2005, ignition-related costs have increased by around 25 percent—from $1.6 billion to over $2 billion—and the planned completion date for these activities has slipped from the end of fiscal year 2011 to the end of fiscal year 2012 or beyond.

We have issued several reports on the technical issues, cost increases, and schedule delays associated with NNSA’s efforts to extend, through refurbishment, the operational lives of nuclear weapons in the stockpile. For example, in December 2000, we reported that refurbishment of the W87 strategic warhead had experienced significant design and production problems that increased its refurbishment costs by over $300 million and caused schedule delays of about 2 years. Similarly, in March 2009 we reported that NNSA and the Department of Defense had not effectively managed cost, schedule, and technical risks for the B61 nuclear bomb and the W76 nuclear warhead refurbishments. For the B61 life extension program, NNSA was only able to stay on schedule by significantly reducing the number of weapons undergoing refurbishment and abandoning some refurbishment objectives. In the case of the W76 nuclear warhead, NNSA experienced a 1-year delay and an unexpected cost increase of nearly $70 million as a result of its ineffective management of one of the highest risks of the program—the manufacture of a key material known as Fogbank, which NNSA did not have the knowledge, expertise, or facilities to manufacture.

In October 2009, we reported on shortcomings in NNSA’s oversight of the planned relocation of its Kansas City Plant to a new, more modern facility. Rather than construct a new facility itself, NNSA chose to have a private developer build it. NNSA would then lease the building through the General Services Administration for a period of 20 years. However, when choosing to lease rather than construct a new facility itself, NNSA allowed the Kansas City Plant to limit its cost analysis to a 20-year life cycle that has no relationship with known requirements of the nuclear weapons stockpile or the useful life of a production facility that is properly maintained. As a result, NNSA’s financing decisions were not as fully informed and transparent as they could have been. If the Kansas City Plant had quantified potential cost savings to be realized over the longer useful life of the facility, NNSA may have made a different decision as to whether to lease or construct a new facility itself.
We reported in March 2010 that NNSA’s plutonium disposition program was behind schedule in establishing a capability to produce the plutonium feedstock necessary to operate its Mixed-oxide Fuel Fabrication facility currently being constructed at DOE’s Savannah River Site in South Carolina. In addition, NNSA had not sufficiently assessed alternatives to producing plutonium feedstock and had only identified one potential customer for the mixed-oxide fuel the facility would produce. In its fiscal year 2012 budget justification to Congress, NNSA reported that it did not have a construction cost baseline for the facility needed to produce the plutonium feedstock for the mixed-oxide fuel, although Congress had already appropriated over $270 million through fiscal year 2009 and additional appropriation requests totaling almost $2 billion were planned through fiscal year 2016. NNSA stated in its budget justification that it is currently considering options for producing necessary plutonium feedstock without constructing a new facility.

In November 2010, we reported that NNSA’s plans for its Uranium Processing Facility should better reflect funding estimates and technology readiness (GAO, Nuclear Weapons: National Nuclear Security Administration’s Plans for Its Uranium Processing Facility Should Better Reflect Funding Estimates and Technology Readiness, GAO-11-103 (Washington, D.C.: Nov. 19, 2010)).

As discussed above, NNSA remains on our high-risk list and vulnerable to fraud, waste, abuse, and mismanagement. DOE has recently taken a number of actions to improve management of major projects, including those overseen by NNSA. For example, DOE has updated program and project management policies and guidance in an effort to improve the reliability of project cost estimates, better assess project risks, and better ensure project reviews that are timely, useful, and identify problems early. However, DOE needs to ensure that NNSA has the capacity—that is, the people and other resources—to resolve its project management difficulties and that it has a program to monitor and independently validate the effectiveness and sustainability of its corrective measures. This is particularly important as NNSA embarks on its long-term, multibillion dollar effort to modernize the nuclear security enterprise.

Another underlying reason for the creation of NNSA was a series of security issues at the national laboratories. Work carried out at NNSA’s sites may involve plutonium and highly enriched uranium, which are extremely hazardous. For example, exposure to small quantities of plutonium is dangerous to human health, so that even inhaling a few micrograms creates a long-term risk of lung, liver, and bone cancer and inhaling larger doses can cause immediate lung injuries and death. Also, if not safely contained and managed, plutonium can be unstable and spontaneously ignite under certain conditions. NNSA’s sites also conduct a wide range of other activities, including construction and routine maintenance and operation of equipment and facilities that also run the risk of accidents, such as those involving heavy machinery or electrical mishaps. The consequences of such accidents could be less severe than those involving nuclear materials, but they could also lead to long-term illnesses, injuries, or even deaths among workers or the public. Plutonium and highly enriched uranium must also be stored under extremely high security to protect them from theft or terrorist attack.
In numerous reports, we have expressed concerns about NNSA’s oversight of safety and security across the nuclear security enterprise. With regard to nuclear and worker safety:

In October 2007, we reported that there had been nearly 60 serious accidents or near misses at NNSA’s national laboratories since 2000. These incidents included worker exposure to radiation, inhalation of toxic vapors, and electrical shocks. Although no one was killed, many of the accidents caused serious harm to workers or damage to facilities. For example, at Los Alamos in July 2004, an undergraduate student who was not wearing required eye protection was partially blinded in a laser accident. Accidents and nuclear safety violations also contributed to the temporary shutdown of facilities at both Los Alamos and Livermore in 2004 and 2005. In the case of Los Alamos, laboratory employees disregarded established procedures and then attempted to cover up the incident, according to Los Alamos officials. Our review of nearly 100 reports issued since 2000 found that the contributing factors to these safety problems generally fell into three key categories: (1) relatively lax laboratory attitudes toward safety procedures; (2) laboratory inadequacies in identifying and addressing safety problems with appropriate corrective actions; and (3) inadequate oversight by NNSA.

We reported in January 2008 on a number of long-standing nuclear and worker safety concerns at Los Alamos. These concerns included, among other things, the laboratory’s lack of compliance with safety documentation requirements, inadequate safety systems, radiological exposures, and enforcement actions for significant violations of nuclear safety requirements that resulted in civil penalties totaling nearly $2.5 million.

In October 2008, we reported that DOE’s Office of Health, Safety, and Security—which, among other things, develops, oversees, and helps enforce nuclear safety policies at DOE and NNSA sites—fell short of fully meeting our elements of effective independent oversight of nuclear safety. For example, the office’s ability to perform its oversight mission independently was limited because it had no role in reviewing technical analyses that help ensure safe design and operation of nuclear facilities, and the office had no personnel at DOE sites to provide independent safety observations.

With regard to security:

In June 2008, we reported that significant security problems at Los Alamos had received insufficient attention. The laboratory had over two dozen initiatives under way that were principally aimed at reducing, consolidating, and better protecting classified resources but had not implemented complete security solutions to address either classified parts storage in unapproved storage containers or weaknesses in its process for ensuring that actions taken to correct security deficiencies were completed. Furthermore, Los Alamos had implemented initiatives that addressed a number of previously identified security concerns but had not developed the long-term strategic framework necessary to ensure that its fixes would be sustained over time. Similarly, in October 2009, we reported that Los Alamos had implemented measures to enhance its information security controls, but significant weaknesses remained in protecting the information stored on and transmitted over its classified computer network. A key reason for this was that the laboratory had not fully implemented an information security program to ensure that controls were effectively established and maintained.
In March 2009, we reported about numerous and wide-ranging security deficiencies at Livermore, particularly in the ability of Livermore’s protective force to assure the protection of special nuclear material and the laboratory’s protection and control of classified matter. Livermore’s physical security systems, such as alarms and sensors, and its security program planning and assurance activities were also identified as areas needing improvement. Weaknesses in Livermore’s contractor self-assessment program and the NNSA Livermore Site Office’s oversight of the contractor contributed to these security deficiencies at the laboratory. According to one DOE official, both programs were “broken” and missed even the “low-hanging fruit.” The laboratory took corrective action to address these deficiencies, but we noted that better oversight was needed to ensure that security improvements were fully implemented and sustained. We reported in December 2010 that NNSA needed to improve its contingency planning for its classified supercomputing operations. All three NNSA laboratories had implemented some components of a contingency planning and disaster recovery program, but NNSA had not provided effective oversight to ensure that the laboratories’ contingency and disaster recovery planning and testing were comprehensive and effective. In particular, NNSA’s component organizations, including the Office of the Chief Information Officer, were unclear about their roles and responsibilities for providing oversight in the laboratories’ implementation of contingency and disaster recovery planning. In March 2010, the Deputy Secretary of Energy announced a new effort— the 2010 Safety and Security Reform effort—to revise DOE’s safety and security directives and reform its oversight approach to “provide contractors with the flexibility to tailor and implement safety and security programs without excessive federal oversight or overly prescriptive departmental requirements.” We are currently reviewing the reform of DOE’s safety directives and the benefits DOE hopes to achieve from this effort for, among others, the House Committee on Energy and Commerce. We expect to issue our report next month. Nevertheless, our prior work has shown that ineffective NNSA oversight of its contractors has contributed to many of the safety and security problems across the nuclear security enterprise and that NNSA faces challenges in sustaining improvements to safety and security performance. NNSA faces a complex task in planning, budgeting, and ensuring the execution of interconnected activities across the nuclear security enterprise. Among other things, maintaining government-owned facilities that were constructed more than 50 years ago and ensuring M&O contractors are sustaining critical human capital skills that are highly technical in nature and limited in supply are difficult undertakings. Over the past decade, we have made numerous recommendations to DOE and NNSA to improve their management and oversight practices. DOE and NNSA have acted on many of these recommendations, and we will continue to monitor progress being made in these areas. In the current era of tight budgets, Congress and the American taxpayer have the right to know whether investments made in the nuclear security enterprise are worth the cost. 
However, NNSA currently lacks the basic financial information on the total costs to operate and maintain its essential facilities and infrastructure, leaving it unable to identify return on investment or opportunities for cost savings. NNSA is now proposing to spend decades and tens of billions of dollars to modernize the nuclear security enterprise, largely by replacing or refurbishing aging and decaying facilities at its sites across the United States. Given NNSA’s record of weak management of its major projects, we believe that careful federal oversight will be critical to ensure this time and money are spent in as effective and efficient a manner as possible.

With regard to the concerns that DOE’s and NNSA’s oversight of the laboratories’ activities has been excessive and that safety and security requirements are overly prescriptive and burdensome, we agree that excessive oversight and micromanagement of contractors’ activities are not an efficient use of scarce federal resources. Nevertheless, in our view, the problems we continue to identify in the nuclear security enterprise are not caused by excessive oversight, but instead result from ineffective oversight. Given the critical nature of the work the nuclear security enterprise performs and the high-hazard operations it conducts—often involving extremely hazardous materials, such as plutonium and highly enriched uranium, that must be stored under high security to protect them from theft—careful oversight and stringent safety and security requirements will always be necessary at these sites.

It is also important in an era of scarce resources that DOE and NNSA ensure that the work conducted by the nuclear security enterprise is primarily focused on its principal mission—ensuring the safety and reliability of the nuclear weapons stockpile. DOE has other national laboratories capable of conducting valuable scientific research on issues as wide-ranging as climate change or high-energy physics, but there is no substitute for the sophisticated capabilities and highly skilled human capital present in the nuclear security enterprise for ensuring the credibility of the U.S. nuclear deterrent.

Chairman Turner, Ranking Member Sanchez, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Allison Bawden, Ryan T. Coles, and Jonathan Gill, Assistant Directors, and Patrick Bernard, Senior Analyst. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The National Nuclear Security Administration (NNSA), a separately organized agency within the Department of Energy (DOE), is responsible for managing its contractors’ nuclear weapon- and nonproliferation-related national security activities in laboratories and other facilities, collectively known as the nuclear security enterprise.
GAO designated DOE’s management of its contracts as an area at high risk of fraud, waste, and abuse. Progress has been made, but GAO continues to identify problems across the nuclear security enterprise, from projects’ cost and schedule overruns to inadequate oversight of safety and security at NNSA’s sites. Laboratory and other officials have raised concerns that federal oversight of the laboratories’ activities has been excessive. With NNSA proposing to spend tens of billions of dollars to modernize the nuclear security enterprise, it is important to ensure scarce resources are spent in an effective and efficient manner. This testimony addresses (1) NNSA’s ability to produce budget and cost data necessary to make informed management decisions, (2) improving NNSA’s project and contract management, and (3) DOE’s and NNSA’s safety and security oversight. It is based on prior GAO reports issued from August 2000 to January 2012.

DOE and NNSA continue to act on the numerous recommendations GAO has made in improving budget and cost data, project and contract management, and safety and security oversight. GAO will continue to monitor DOE’s and NNSA’s implementation of these recommendations.

NNSA has successfully ensured that the nuclear weapons stockpile remains safe and reliable in the absence of underground nuclear testing, accomplishing this complicated task by using state-of-the-art facilities as well as the skills of top scientists. Nevertheless, NNSA does not have reliable enterprise-wide management information on program budgets and costs, which potentially increases risk to NNSA’s programs. For example, in June 2010, GAO reported that NNSA could not identify the total costs to operate and maintain essential weapons activities’ facilities and infrastructure. In addition, in February 2011, GAO reported that NNSA lacks complete data on, among other things, the condition and value of its existing infrastructure, cost estimates and completion dates for planned capital improvement projects, and critical human capital skills in its contractor workforce that are needed for its programs. As a result, NNSA does not have a sound basis for making decisions on how to most effectively manage its portfolio of projects and other programs and lacks information that could help justify future budget requests or target cost savings opportunities. NNSA recognizes that its ability to make informed decisions is hampered and is taking steps to improve its budget and cost data.

For more than a decade and in numerous reports, GAO found that NNSA has continued to experience significant cost and schedule overruns on its major projects. For example, in 2000 and 2009, respectively, GAO reported that NNSA’s efforts to extend the operational lives of nuclear weapons in the stockpile have experienced cost increases and schedule delays, such as a $300 million cost increase and 2-year delay in the refurbishment of one warhead and a nearly $70 million increase and 1-year delay in the refurbishment of another warhead. NNSA’s construction projects have also experienced cost overruns. For example, GAO reported that the cost to construct a modern Uranium Processing Facility at NNSA’s Y-12 National Security Complex experienced a nearly seven-fold cost increase from between $600 million and $1.1 billion in 2004 to between $4.2 billion and $6.5 billion in 2011.
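The "nearly seven-fold" characterization can be bounded with simple arithmetic on the estimate ranges cited above. The sketch below, written in Python purely for illustration, uses only those ranges; the variable names and the midpoint comparison are our own framing, not GAO's calculation.

```python
# Illustrative bounds on the Uranium Processing Facility cost growth cited above.
low_2004, high_2004 = 0.6e9, 1.1e9    # 2004 estimate range (dollars)
low_2011, high_2011 = 4.2e9, 6.5e9    # 2011 estimate range (dollars)

min_multiple = low_2011 / high_2004   # smallest implied increase (~3.8x)
max_multiple = high_2011 / low_2004   # largest implied increase (~10.8x)
mid_multiple = ((low_2011 + high_2011) / 2) / ((low_2004 + high_2004) / 2)  # ~6.3x

print(f"Implied cost growth: {min_multiple:.1f}x to {max_multiple:.1f}x "
      f"(midpoint about {mid_multiple:.1f}x)")
```

Comparing the midpoints of the two estimate ranges yields a multiple of roughly six to seven, consistent with the characterization above.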
Given NNSA’s record of weak management of major projects, GAO believes careful federal oversight of NNSA’s modernization of the nuclear security enterprise will be critical to ensure that resources are spent in as effective and efficient a manner as possible.

NNSA’s oversight of safety and security in the nuclear security enterprise has also been questioned. As work carried out at NNSA’s sites involves dangerous nuclear materials such as plutonium and highly enriched uranium, stringent safety procedures and security requirements must be observed. GAO reported in 2008 on numerous safety and security problems across NNSA’s sites; these problems contributed, among other things, to the temporary shutdown of facilities at both Los Alamos and Lawrence Livermore National Laboratories in 2004 and 2005, respectively. Ineffective NNSA oversight of its contractors’ activities, as well as relatively lax laboratory attitudes toward safety procedures, contributed to many of these incidents. In many cases, NNSA has made improvements to resolve these safety and security concerns, but better oversight is needed to ensure that improvements are fully implemented and sustained. GAO agrees that excessive oversight and micromanagement of contractors’ activities are not an efficient use of scarce federal resources; however, NNSA’s problems are not caused by excessive oversight but instead result from ineffective departmental oversight.
The nation’s surface transportation systems facilitate mobility through an extensive network of infrastructure and operators, as well as through the vehicles and vessels that permit passengers and freight to move within the system. Maintaining the systems is critical to sustaining America’s economic growth. This is especially important given that projected increases in freight tonnage will likely place pressures on these systems. According to the Federal Highway Administration, domestic and international freight tonnage across all surface modes will increase 41 percent, from 14.4 billion tons in 1998 to 20.3 billion tons in 2010. According to the forecasts, by 2010, 15.6 billion tons are projected to move by truck, a 44 percent increase; 3 billion tons by rail, a 32 percent increase; and 1.5 billion tons by water, a 27 percent increase. Some freight may be moved by more than one mode before reaching its destination, such as moving by ship for one segment of the trip, then by truck to its final destination. Over 95 percent of the U.S. overseas freight tonnage is shipped by sea. The United States accounts for 1 billion metric tons, or nearly 20 percent of the world’s oceanborne trade. As the world’s leading maritime trading nation, the United States depends on a vast marine transportation system. In addition to the economic role it plays, the system also has an important role in national defense; serves as an alternative transportation mode to roads and rails; and provides recreational value through boating, fishing, and cruises. Traditionally, federal participation in the maritime industry has been directed mainly at projects related to “waterside” issues, such as keeping navigation channels open by dredging, icebreaking, or improving the system of locks and dams; maintaining navigational aids such as lighthouses or radio systems; and monitoring the movement of ships in and out of the nation’s coastal waters. Federal participation has generally not extended to “landside” projects related to ports’ capabilities, such as building terminals or piers and purchasing cranes or other equipment to unload cargo. These traditional areas of federal assistance are under pressure, according to a congressionally mandated report issued by the Department of Transportation in 1999, which cites calls to modernize aging structures and dredge channels to new depths to accommodate larger ships. Since this report, and in the aftermath of September 11, the funding focus has further expanded to include greater emphasis on port security. Many of the security improvements will require costly outlays for infrastructure, technology, and personnel. For example, when the Congress recently made $92.3 million in federal funding available for port security as part of a supplemental appropriations bill, the Transportation Security Administration received grant applications totaling almost $700 million. With growing system demands and increased security concerns, some stakeholders have suggested a different source of funding for the marine transportation system. For example, U.S. public port authorities have advocated increased federal funding for harbor dredging. Currently, funding for such maintenance is derived from a fee on passengers and the value of imported and domestic cargo loaded and unloaded in U.S. ports. Ports and shippers would like to see funding for maintenance dredging come from the general fund instead, and there was legislation introduced in 1999 to do so. 
Regarding funding for security, ports are seeking substantial federal assistance to enhance security in the aftermath of the events of September 11. In other work we have conducted on port security, port and private-sector officials have said that they believe combating terrorism is the federal government’s responsibility and that, if additional security is needed, the federal government should provide or pay for it. Unlike the funding approach used for the aviation and highway transportation systems, which are primarily funded by collections from users of the systems, the commercial marine transportation system relies heavily on general tax revenue. For all three transportation systems, most of the revenue collected from users of the systems was deposited into trust fund accounts. Figure 1 summarizes the expenditure and assessment comparisons across the three transportation systems. During fiscal years 1999 through 2001, federal agencies expended an average of $3.9 billion each year on the marine transportation system with about 80 percent of the funding coming from the general revenues. During the same period, federal agencies expended an average of $10 billion each year on the aviation system and $25 billion each year on the highway system. The vast majority of the funding for these expenditures came from trust fund accounts. (See app. II.) Federal agencies collected revenue from assessments on users of all three transportation systems during fiscal years 1999 through 2001. Collections from assessments on system users during this period amounted to an average of $1 billion each year from marine transportation system users, $11 billion each year from aviation system users, and $34 billion each year from highway system users. Most of the collections for the three systems were deposited into trust funds that support the marine, aviation, and highway transportation systems. (See app. III.) Trust funds that support the marine transportation system include the Harbor Maintenance Trust Fund and the Inland Waterways Trust Fund. Trust funds that support the aviation and highway transportation systems include the Airport and Airway Trust Fund and the Highway Trust Fund. The federal government assesses customs duties on goods imported into the United States and the majority of these collections are deposited into the U.S. Treasury’s general fund to be used for the support of federal activities. As can be seen in figure 2, the amounts from customs duties levied on imported goods carried through the marine transportation system are more than triple the combined amounts collected from customs duties levied on the goods carried through the aviation and highway systems. During fiscal years 1999 through 2001, customs duties on imported goods shipped through the transportation systems averaged $15.2 billion each year for the marine transportation system, $3.7 billion for the aviation system, and $928 million for the highway system. (See app. IV for details on customs duty collections by year.) Some maritime stakeholders, particularly port owners and operators, have proposed using a portion of the customs duties for infrastructure improvements to the marine transportation system. They point out that the marine transportation system is generating billions of dollars in revenue, and some of these funds should be returned to maintain and enhance the system. 
However, unlike transportation excise taxes, customs duties are taxes on the value of imported goods paid by importers and ultimately their consumers—not on the users of the system—and have traditionally been viewed as revenues to be used for the support of the general activities of the federal government. Notwithstanding the general trend, a portion of revenues from customs duties is currently earmarked for agriculture and food programs, migratory bird conservation, aquatic resources, and reforestation. It should be noted, however, that in these cases, some relationship exists between the goods being taxed and the uses for which the taxes are earmarked. Designating a portion of the remaining customs duties for maritime uses would not represent a new source of capital for the federal government, but rather it would be a draw on the general fund of the U.S. Treasury. This could lead to additional deficit financing, unless other spending were cut or taxes were increased. Some maritime industry stakeholders have suggested that substantial new investments in the maritime infrastructure by federal, state, and local governments and by the private sector may be required because of an aging infrastructure, changes in the shipping industry, and increased concerns about security. These growing and varied demands for increased investments in the maritime transportation system heighten the need for a clear understanding of the federal government’s purpose and role in providing funding for the system and for a sound investment approach to guide federal participation. In examining federal investment approaches across many national activities, we have found that issues such as these are best addressed through a systematic framework. As shown in figure 2, this framework has the following four components that potentially could be applied to all transportation systems: Set national goals for the system. These goals, which would establish what federal participation in the system is designed to accomplish, should be specific and measurable. Define clearly what the federal role should be relative to other stakeholders. This step is important to help ensure that federal participation supplements and enhances participation by others, rather than simply replacing their participation. Determine which funding tools and other approaches, such as alternatives to investment in new infrastructure, will maximize the impact of any federal investment. This step can help expand the capacity to leverage funding resources and promote shared responsibilities. Ensure that a process is in place for evaluating performance periodically so that defined goals, roles, and approaches can be reexamined and modified, as necessary. An initial decision for Congress when evaluating federal investments concerns the goals of the marine transportation system. Clearly defined national goals can serve as a basis for guiding federal participation by charting a clear direction, establishing priorities among competing issues, specifying the desired results, and laying the foundation for such other decisions as determining how assistance will be provided. At the federal level, measuring results for federal programs has been a longstanding objective of the Congress.
The Government Performance and Results Act of 1993 has become the primary legislative framework through which agencies are required to set strategic and annual goals that are based on national goals, measure performance, and report on the degree to which goals are met and on what actions are needed to achieve or modify goals that have not been met. Establishing clear goals and performance measures for the marine transportation system is critical to ensuring both a successful and a fiscally responsible effort. Before national goals for the system can be established, however, an in-depth understanding of the relationship of the system to other transportation modes is required. Transportation experts highlight the need to view the system in the context of the entire transportation system in addressing congestion, mobility, and other challenges and, ultimately, investment decisions. For example, congestion challenges often occur where modes connect or should connect, such as ports where freight is transferred from one mode to another. The connections require coordination of more than one mode of transportation and cooperation among multiple transportation providers and planners. A systemwide approach to transportation planning and funding, as opposed to a focus on a single mode or type of travel, could improve the focus on outcomes related to customer or community needs. Meaningful goal setting also requires a comprehensive understanding of the scope and extent of issues and priorities facing the marine transportation system. However, there are clear signs that certain key issues and priorities are not yet understood well enough to establish meaningful goals for the system. For example, a comprehensive analysis of the issues and problems facing the marine transportation system has not yet been completed. In setting goals for investment decisions, leading organizations usually perform comprehensive needs assessments to obtain a clear understanding of the extent and scope of their issues, problems, and needs and, ultimately, to identify resources needed. These assessments should be results-oriented in that they determine what is needed to obtain specific outcomes rather than what is needed to maintain or expand existing capital stock. Developing such information is important for ensuring that goals are framed in an adequate context. The call by many ports for federal assistance in dredging channels or harbors to 50 feet is an example. Dredging to 50 feet allows a port to accommodate the largest of the container ships currently being constructed and placed in service. However, developing the capacity to serve such ships is no guarantee that companies with such ships will actually choose to use a port. Every port’s desire to be competitive by having a 50-foot channel could thus lead to a situation in which the nation as a whole has an overcapacity for accommodating larger ships. The result, at least with respect to the excess capacity, would be an inefficient use of federal resources that might have been put to better use in other ways. Establishing the roles of the federal, state, and local governments and private entities will help to ensure that goals can be achieved. The federal government is only one of many stakeholders in the marine transportation system. While these various stakeholders may all be able to share a general vision of the system, they are likely to diverge in the priorities and emphasis they place on specific goals.
For example, the federal government, with its national point of view, is in a much different position than a local port intensely involved in head-to-head competition with other ports for the business of shipping companies or other businesses. For a port, its own infrastructure is paramount, while the federal government’s perspective is focused on the national and broader public interest. Since there are so many stakeholders involved with the marine transportation system, achieving national goals for the system hinges on the ability of the federal government to forge effective partnerships with nonfederal entities. Decision makers have to balance national goals with the unique needs and interests of all nonfederal stakeholders in order to leverage the resources and capabilities that reside within state and local governments and the private sector. Future partnering among key maritime stakeholders may take on a different form as transportation planners begin focusing across transportation modes in making investment decisions instead of making investment decisions for each mode separately. The Alameda Corridor Program in the Los Angeles area provides an example of how effective partnering allowed the capabilities of the various stakeholders to be more fully utilized. Called the Alameda Corridor because of the street it parallels, the program created a 20-mile, $2.4 billion railroad express line connecting the ports of Los Angeles and Long Beach to the transcontinental rail network east of downtown Los Angeles. The express line eliminates approximately 200 street-level railroad crossings, relieving congestion and improving freight mobility for cargo. This project made substantial use of local stakeholders’ ability to raise funds. While the federal government participated in the cost, its share was only about 20 percent of the total cost, most of which was in the form of a loan rather than a grant. Just as partnerships offer opportunities, they also pose risks based upon the different interests reflected by each stakeholder. While gaining the opportunity to leverage the resources and capabilities of partners, each of these nonfederal entities has goals and priorities that are independent of the federal government. For the federal government, there is concern that state and local governments may not share the same priorities for use of the federal funds. This may result in nonfederal entities replacing or “supplanting” their previous levels of commitment in areas with new federal resources. For example, in the area of port security, there is a significant funding need at the local level for overtime pay for police and security guards. Given the degree of need, if more federal funding was made available, local interests might push to apply federal funding in this way, thereby transferring a previously local function to the federal arena. In moving toward federal coverage of basic public services, the Congress and federal officials would be substantially expanding the federal role. When evaluating federal investments, a careful choice of the approaches and funding tools that would best leverage federal funds in meeting identified goals should be made. A well-designed funding approach can help encourage investment by other stakeholders and maximize the application of limited federal dollars. An important step in selecting the appropriate approach is to effectively harness the financial capabilities of local, state, and private stakeholders. The Alameda Corridor Program is a good example. 
In this program, state and local stakeholders had both a financial incentive to relieve congestion and the commitment and ability to bring financial resources to bear. Some other ports may not have the same level of financial incentives or capabilities to undertake projects largely on their own. For example, in studying the extent to which Florida ports were able to implement a set of security requirements imposed by the state, we found that some ports were able to draw on more financial resources than others, based on such factors as size, economic climate, and funding base. While such information would be valuable in crafting federal assistance, it currently is largely unavailable. Relatively little is known about the extent of state, local, and private-sector funding resources across the country. The federal government has a variety of funding tools potentially available for use, such as grants, direct loans, loan guarantees, tax expenditures, and user fees. Through cost sharing and other arrangements, the federal government can use these approaches to help ensure that federal funds supplement—and not supplant—funds from other stakeholders. For example, an effective use of funding tools, with appropriate nonfederal matches and incentives, can be valuable in implementing a national strategy to support federal port investments, without putting the government in the position of choosing winners or losers. Federal approaches can take other forms besides those that relate specifically to making funding available. The following approaches allow increased output without making major capital investments: Demand management. Demand management is designed to reduce travel at the most congested times and on the most congested routes. One demand management strategy involves requiring users to pay more to use congested parts of the system during such periods, with the idea that the charge will provide an incentive for some users to shift their use to a less congested time or to less congested routes or transportation modes. On inland waterways, for example, congestion pricing for locks—that is, charging a toll during congested periods to reflect the additional cost of delay that a vessel imposes on other vessels—might be a way to space out demand on the system. Many economists believe that such surcharges or tolls enhance economic efficiency by making operators take into account the external costs they impose on others in deciding when, where, and how to travel. Technology improvements. Instead of making extensive modifications to infrastructure such as locks and dams, it may be possible to apply federal investments to technology that makes the existing system more efficient. For example, technological improvements may be able to help barges on the inland waterways navigate locks in inclement weather, thereby reducing delays on the inland waterway system. Maintenance and rehabilitation. Enhancing capacity of existing infrastructure through increased maintenance and rehabilitation is an important supplement to, and sometimes a substitute for, building new infrastructure. Maintenance and rehabilitation can improve the speed and reliability of passenger and freight travel, thereby optimizing capital investments. Management and operation improvements. Better management and operation of existing infrastructure may allow the existing transportation system to accommodate additional travel without having to add new infrastructure. For example, the U.S.
Army Corps of Engineers is investigating the possibility of automating the operation of locks and dams on the inland waterways to reduce congestion at bottlenecks. Regardless of the tools selected, results should be evaluated and lessons learned should be incorporated into the decision-making process. Evaluating the effectiveness of existing or proposed federal investment programs could provide decision makers with valuable information for determining whether intended benefits have been achieved and whether goals, responsibilities, and approaches should be modified. Such evaluations are also useful for better ensuring accountability and providing incentives for achieving results. Leading organizations that we have studied have stressed the importance of developing performance measures and linking investment decisions and their expected outcomes to overall strategic goals and objectives. Hypothetically, for example, one goal for the marine transportation system might be to increase throughput (that is, the volume of cargo) that can be transported through a particular lock and dam system on the nation’s inland waterways. A performance measure to gauge the results of an investment for this goal might be the increased use (such as number of barges passing through per hour) that results from this investment and the economic benefits associated with that increase. In summary, Mr. Chairmen, the projected increases in freight tonnage will likely place pressures on the nation’s surface transportation systems. Maintaining these systems is critical to sustaining America’s economic growth. Therefore, there is a need to view various transportation modes from an integrated standpoint, particularly for the purposes of developing and implementing a federal investment strategy and alternative funding approaches. In such an effort, the framework of goals, roles, tools, and evaluation can be particularly helpful—not only for marine transportation funding, but for other modes as well. Mr. Chairmen, this concludes my testimony. I will be happy to respond to any questions you or other Members may have. To determine the amount of federal expenditures to support the commercial marine, aviation, and highway transportation systems and the amount of collections from federal assessments on the users of these systems for fiscal years 1999, 2000, and 2001, we reviewed prior GAO reports and other relevant documents, and interviewed officials from the Office of Management and Budget and various industry representatives. On the basis of this determination, we contacted 15 federal agencies and asked them to provide information on the expenditures and collections that were specific to the transportation systems, relying on each agency to identify expenditures and collections related to activities that support the transportation systems. In addition, we received data from the U.S. Customs Service on the amount of duty collected on commodities imported by the transportation modes. The U.S. Customs Service provided estimates, developed by the U.S. Census Bureau, on the percent of collections that were attributable to air, water, and land transportation modes. We applied these percentages to the total customs duties collected for fiscal years 1999, 2000, and 2001 provided by the U.S. Customs Service to compute the amount of total customs duties collected by the marine, aviation, and highway transportation systems each year.
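The proration just described is simple arithmetic. The sketch below illustrates it with hypothetical modal shares and duty totals; the actual Census Bureau percentages and Customs Service totals are not reproduced here, and this is only an illustration of the calculation, not the analysis GAO performed.

```python
# Illustrative sketch of the proration described above: modal shares of customs
# collections are applied to each fiscal year's total duties. The shares and
# totals below are hypothetical placeholders, not actual Census Bureau or
# Customs Service figures.

modal_shares = {"marine": 0.80, "aviation": 0.15, "highway": 0.05}

# Hypothetical total customs duties collected, by fiscal year (billions of dollars).
total_duties = {1999: 18.9, 2000: 19.9, 2001: 19.4}

def duties_by_mode(totals, shares):
    """Prorate each year's total customs duties across the three systems."""
    return {
        year: {mode: round(total * share, 2) for mode, share in shares.items()}
        for year, total in totals.items()
    }

for year, amounts in duties_by_mode(total_duties, modal_shares).items():
    print(year, amounts)
```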
We performed limited reasonableness tests on the data by comparing the data with the actual trust fund outlays contained in the budget of the U.S. government for fiscal years 2001, 2002, and 2003. Although we had each agency validate the data provided, we did not verify agency expenditures and collections. To identify initial considerations that could help the Congress in addressing whether to change the scope or nature of federal investments in the marine transportation system, we conducted a review of prior GAO reports and other relevant studies to identify managerial best practices in establishing strategic plans and federal investment approaches. We also interviewed U.S. Army Corps of Engineers and Department of Transportation officials to obtain information on the current state of the commercial marine transportation system, the ability of the system to keep pace with growing demand, and activities that are under way to assess the condition and capacity of the infrastructure. Our work was carried out from January 2002 to September 2002 in accordance with generally accepted government auditing standards. Federal agencies spent an average of $3.9 billion annually on the marine transportation system, $10 billion annually on the aviation system, and $25 billion annually on the highway system. Whereas the primary source of funding for the marine transportation system is general tax revenues, the vast majority of federal funding invested in both the aviation and highway systems came from assessments on users of the systems. During the three-year period, general revenues were the funding source for 80 percent of the expenditures for the marine transportation system. In contrast, assessments on system users were the funding source for 88 percent of the amount spent on the aviation system and nearly 100 percent of the amount spent on the highway system. Federal agencies collected an average of $1 billion annually from users of the marine transportation system, $11.1 billion annually from users of the aviation system, and $33.7 billion annually from users of the highway system. For all three transportation systems, most of the collections were deposited into trust fund accounts. During the three-year period, 85 percent of the amounts collected from marine transportation system users, 94 percent of the amounts collected from aviation system users, and nearly 100 percent of the amounts collected from highway system users were deposited into trust fund accounts.

This testimony discusses challenges in defining the federal role with respect to freight transportation issues. There are concerns that the projected increases in freight tonnage for all transportation modes will place pressures on the marine, aviation, and highway transportation systems. As a result, there is growing awareness of the need to view various transportation modes, and freight movement in particular, from an integrated standpoint, particularly for the purposes of developing and implementing a federal investment strategy and considering alternative funding approaches. The federal approach for funding the marine transportation system relies heavily on general revenues, although the approach for funding the aviation and highway systems relies almost exclusively on collections from users of the systems.
During fiscal years 1999 through 2001, customs duties on imported goods transported through the transportation systems averaged $15 billion each year for the marine transportation system, $4 billion each year for the aviation system, and $900 million each year for the highway system. Customs duties are taxes on the value of imported goods and have traditionally been viewed as revenues to be used for the support of the general activities of the federal government. Diverse industry stakeholders believe that substantial new investments in the maritime infrastructure may be required from public and private sources because of an aging infrastructure, changes in the shipping industry, and increased concerns about security. A systematic framework would be helpful to decision makers as they consider the federal government's purpose and role in providing funding for the system and as they develop a sound investment approach to guide federal participation. In examining federal investment approaches across many national activities, GAO has identified four key components of such a framework--establishing national goals, defining the federal role, determining appropriate funding tools, and evaluating performance--which could potentially be applied to all transportation systems.
LIHTCs follow a multistep process that begins with the allocation of tax credits to HFAs. The process of allocating, awarding, and using LIHTCs is depicted in figure 1. As the figure shows, there are four primary steps in the LIHTC process. 1. HFAs receive tax credit allocations. State ceilings for LIHTCs are allocated by statutory formula to states annually according to population, with a minimum amount awarded to states with small populations. For 2012, the formula was $2.20 per capita or a minimum of $2,525,000. 2. Developers apply to the states for tax credits. To apply for tax credits, a developer must submit a detailed proposal to an HFA. To qualify for consideration, a project must meet certain requirements, such as reserving specified percentages of available units for lower income households and restricting rents for these households to 30 percent of a calculated income limit. 3. HFAs award tax credits to selected housing projects. The potential to earn tax credits is competitively awarded to housing projects in accordance with states’ QAPs. QAPs outline a state’s affordable housing priorities and set out its procedure for ranking the projects on the basis of how well they meet state priorities and selection criteria that are appropriate to local conditions. The QAP must give preference to projects that serve the tenants with the lowest incomes, serve qualifying tenants for the longest period of time, and are located in a qualified census tract (QCT) and contribute to a local community revitalization plan. Developers receiving tax credit allocations have 2 years to complete their projects and may not claim the credits until the projects are placed in service. 4. Investors receive tax benefits. Investment partnerships are a primary source of equity financing for LIHTC projects. Syndicators recruit investors willing to become partners in LIHTC partnerships. The money investors pay for the partnership interest is paid into the LIHTC project as equity financing. Although investors are buying an interest in a rental housing partnership, this process is commonly referred to as buying tax credits because they receive tax credits in return for their investment. Once the LIHTC project is placed in service, or ready for occupancy, investors can receive their share of the credits each year of the 10-year credit period and can use the credit to offset federal income taxes otherwise owed on their tax returns, as long as the project meets the LIHTC requirements. The amount of tax credits a project can receive depends on several factors, including the applicable fraction and the applicable percentage (see table 1). The applicable fraction, or the percentage of units in the building considered to be qualified low-income units, is the lesser of (1) the total square feet of the low-income units divided by the total square feet of all the units, or (2) the number of the low-income units divided by the total number of units. Regarding the applicable percentage, there are two credit rates (referred to as the 9 percent and 4 percent rates) for the LIHTC program that determine how much of a project’s costs the allocated credits can cover. The credit rate takes into account whether the project is newly constructed or acquired and rehabilitated and the extent to which it uses other federal subsidies. 
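The allocation and credit calculations described above can be sketched in a few lines. The sketch below simply restates the formulas as given (the 2012 per capita formula, the applicable fraction as the lesser of two ratios, and the annual credit over the 10-year credit period); the building figures and the state population are hypothetical.

```python
# Minimal sketch of the calculations described above. All figures are hypothetical.

def state_ceiling_2012(population):
    """2012 per capita formula: $2.20 per resident, with a $2,525,000 small-state minimum."""
    return max(2.20 * population, 2_525_000)

def applicable_fraction(low_income_units, total_units, low_income_sqft, total_sqft):
    """Lesser of the unit ratio and the floor-space ratio for a building."""
    return min(low_income_units / total_units, low_income_sqft / total_sqft)

def annual_credit(eligible_basis, fraction, applicable_percentage):
    """Annual credit: qualified basis (eligible basis x applicable fraction)
    times the applicable percentage, claimed each year of the 10-year credit period."""
    return eligible_basis * fraction * applicable_percentage

# Hypothetical building: 40 of 50 units (36,000 of 48,000 sq ft) are low-income units.
fraction = applicable_fraction(40, 50, 36_000, 48_000)   # min(0.80, 0.75) = 0.75
per_year = annual_credit(1_000_000, fraction, 0.09)      # $1M x 0.75 x 9% = $67,500
ten_year_total = per_year * 10                           # $675,000 over the credit period
print(state_ceiling_2012(1_000_000), fraction, per_year, ten_year_total)
```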
Most new construction and substantial rehabilitation projects are eligible for the 9 percent rate, which allows investors to claim credits for about 9 percent of the eligible basis annually over a 10-year period. Before HERA, the rate for the 9 percent credit floated based on a statutory formula and often fell below 9 percent. Because the LIHTC program is jointly administered by federal and state governments, agencies at both levels played roles in implementing HERA’s changes to the program. At the federal level, IRS and Treasury’s Office of Tax Policy provided new guidance for program stakeholders. At the state level, HFAs modified their QAPs for allocating tax credits. HERA made changes to the LIHTC program that affected various parties, including taxpayers, HFAs, and project owners, and IRS and Treasury provided guidance and took other actions into 2012 to implement these changes. To better ensure that information on the HERA changes was widely accessible, IRS issued revenue procedures and notices, made changes to forms and form instructions, and circulated newsletters to program stakeholders. More specifically, its actions included the following: Issuing (1) a revenue procedure for taxpayers to follow when choosing to no longer maintain a surety bond, as permitted by the HERA change described in table 2, item 6; (2) a notice that the 9 percent floor (table 2, item 2) would apply to eligible projects that had committed to a lower rate before HERA; and (3) a newsletter to program stakeholders describing new income limits related to the HERA “hold harmless” provisions described in table 2, item 9. The income limits are a percentage of the relevant area’s median gross income and are the basis for calculating the gross rent that a LIHTC project can charge. Updating instructions for Form 8609, “Low-Income Housing Credit Allocation and Certification,” to reflect changes involving the 9 percent floor (table 2, item 2), federally subsidized buildings (table 2, item 4), and the HERA basis boost (table 2, item 3). HFAs use the form to report LIHTC allocations for buildings to IRS, and building owners use it to certify such things as a building’s eligible basis, qualified basis, and placed-in-service date. Revising the Guide for Completing Form 8823, Low-Income Housing Credit Agencies Report of Noncompliance or Building Disposition, a guide intended to help housing agencies identify and consistently report noncompliance issues to IRS. Discussing through internal memorandums whether regulations needed to be updated because of HERA and exploring implementation issues that surfaced. For instance, IRS internally considered questions from the division tracking its own implementation of HERA’s LIHTC provisions about whether the changes required updates to regulations governing general public use requirements mentioned in table 2, item 8. An official from IRS’s Office of Chief Counsel told us they determined that no updates were needed. Program stakeholders we spoke with, including HFAs, industry associations, syndicators, and developers, generally said that IRS’s actions to implement the HERA changes were sufficient, and that they were satisfied with the agency’s efforts. However, they raised two concerns in our discussions that IRS and Treasury have continued to consider and act on. The first involved the HERA provision noted in table 2, item 5, that eased restrictions on using LIHTCs to acquire an existing building.
Before HERA, acquisition costs for an existing building generally would not be eligible for LIHTCs unless the building had been placed in service 10 years or more before it was acquired. HERA waived this 10-year rule for any federally or state-assisted building—that is, any building that was “substantially” assisted, financed, or operated under certain federal or state programs or laws. In response to this HERA provision, the IRS Chief Counsel and Treasury placed clarification of the meaning of “substantially” on priority lists of guidance projects for July 2010 through June 2011 and July 2011 through June 2012. With over 300 guidance projects on the priority list for 2011-2012, IRS and Treasury had not issued any guidance defining “substantially” as of October 2012. Agency officials cited the complexity of the issue and other agency priorities as reasons for the delay. A Treasury official was not yet able to tell us when the agency would complete the guidance, what it was likely to say, or whether it would resolve the definition of “substantially” for both federal and state subsidies at the same time. The relative importance of future guidance is unclear, as stakeholders disagreed on the need to clarify the meaning of “substantially.” Some stakeholders said there was little need for clarification. However, one organization sought guidance from Treasury in 2009 and 2010 because, it said, the lack of a definition was delaying some acquisition projects. An IRS official agreed, saying the lack of guidance had delayed acquisition projects and resulted in the substitution of other projects, such as construction of new buildings, for acquisitions. In addition, a Treasury official told us that the lack of guidance had likely made attorneys for potential LIHTC projects conservative in interpreting “substantially.” For example, some may have decided that all the units in a building must be federally subsidized in order to meet the definition. A second concern—related to HERA’s hold harmless provisions on income and rent limits—did not rise to the level of necessarily requiring formal guidance, but has received continued federal attention because of its complicated nature. The hold harmless provisions (table 2, item 9) are aimed at bolstering the financial viability of LIHTC projects by preventing rents from automatically falling when area income levels, on which the rents are based, decline. In so doing, the provisions resulted in a system in which, for instance, three projects on the same street could have three different sets of income and rent limits if they were placed in service in three different time periods. Accommodating all of the possibilities for different placed-in-service dates required projects to use multiple tables to find the applicable income and rent limits, and some program participants have found this confusing. Furthermore, the income and rent limits change annually when HUD publishes new area income levels. The owner of a LIHTC project must use the correct table, based on the building’s location and placed-in-service date, to determine the maximum income that a household may have to be a qualified low-income household, and the maximum gross rent that a household may be charged, based on the number of bedrooms in the unit, for the unit to qualify for the credit as a low-income unit.
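To make these mechanics concrete, the following is a rough sketch of the hold harmless idea under simplifying assumptions: the applicable income limit is modeled as the highest limit published for the area since the project was placed in service, and the monthly rent ceiling as 30 percent of that limit divided by 12. The 30 percent restriction is noted earlier in this report; the published figures and the omission of unit-size adjustments are assumptions made only for illustration.

```python
# Simplified illustration of the hold harmless idea: a project's applicable income
# limit does not fall below a limit already in effect for that project, even if
# HUD's published area limit later declines. Published limits are hypothetical,
# and adjustments for unit size are ignored for simplicity.

published_limits = {2008: 40_000, 2009: 38_500, 2010: 37_500, 2011: 38_000}

def held_harmless_limit(placed_in_service_year, current_year, limits):
    """Highest published limit for the area from the placed-in-service year onward."""
    years = range(placed_in_service_year, current_year + 1)
    return max(limits[y] for y in years if y in limits)

def monthly_rent_ceiling(income_limit, share=0.30):
    """Gross rent ceiling: 30 percent of the applicable income limit, stated monthly."""
    return income_limit * share / 12

# Three projects on the same street, placed in service in different years,
# end up with three different limits in 2011 under this rule.
for year_in_service in (2008, 2009, 2010):
    limit = held_harmless_limit(year_in_service, 2011, published_limits)
    print(year_in_service, limit, round(monthly_rent_ceiling(limit)))
```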
IRS issued explanatory newsletters about the hold harmless provisions, and IRS officials said they made public presentations to stakeholders about them, but some LIHTC program participants reported that the provisions were complicated, confusing, and hard to administer. For example, Texas HFA officials told us that the increase in the number of possible rent limits complicated communications with property owners and increased property owners’ compliance risks. A Vermont HFA official described how staff had to learn to calculate new limits, publish and distribute new tables, and explain the changes in their QAP. However, some HFAs told us that while the provisions were complex and burdensome, they had worked hard to understand them and had learned to work with them. IRS has continued to provide explanatory newsletters and IRS officials told us they made public presentations into 2012. A Treasury official acknowledged the complexity of the provisions and said further clarifying guidance might be warranted. However, the official also said that making a change to hold harmless guidance would require determining that the change merited more consideration than the many non-HERA topics that Treasury also needed to consider. HFAs we spoke with also took steps to implement the changes, including one of the changes HFAs generally thought was significant—the HERA basis boost. Our review of QAPs for nine HFAs and research by an industry group found that HFAs often modified their QAPs to implement the HERA basis boost but varied in how they used the new flexibility. Of the nine states we examined, eight modified their QAPs by revising their criteria for awarding the basis boost. According to a state official, the remaining HFA also revised its criteria for the basis boost but conveyed the changes to stakeholders through its website, public hearings, and newspapers. In general, states varied in the criteria they developed for awarding the basis boost. We analyzed NCSHA summaries of the factors that HFAs reported considering in awarding the HERA basis boost in 2009, the first full year after HERA’s enactment. According to the summaries, 30 of the 54 HFAs reporting cited specific factors beyond the single criterion given in HERA (financial feasibility). The other 24 HFAs cited financial feasibility or other general guidance (17), did not report any factors (1), or chose not to implement the HERA basis boost (6). Research by NCSHA in 2010 noted that some states applied the boost statewide and some applied it to more specific geographical areas, project types, or projects with certain characteristics. NCSHA cited examples of states targeting the basis boost to developments that had tenants of different income levels, involved expensive land, were in rural or tribal areas or areas affected by natural disasters, featured “green building” practices or preservation initiatives, or were transit oriented. States’ use of the basis boost also varied over time. Our analysis of NCSHA summaries for 2008 through 2010 showed that HFAs’ use of the HERA basis boost became more widespread over that period. More specifically, while 12 HFAs reported not having implemented the basis boost in 2008, this number dropped to 3 in 2010. For example, the Florida HFA did not begin to use the boost until the change appeared in its 2011 QAP because until then, the state was still benefitting from Gulf Opportunity Zone disaster credits and did not need the HERA basis boost. 
California HFA officials said they did not use the boost as much as some other states because California already had a large number of counties that were designated as difficult development areas (DDA) and had a state LIHTC program covering projects that might have benefitted from the HERA basis boost. The officials said they did not use the HERA basis boost at all in 2012. The Massachusetts HFA began implementing the HERA basis boost in 2009 and continued to use it into 2012. In its 2009 plan, the HFA identified 20 locations that were eligible for the HERA basis boost, a number that rose to 35 in its 2012 plan. HFAs also modified their QAPs and published technical information to reflect other program changes in HERA. For instance, soon after HERA was enacted, the Oregon HFA revised multiple sections of its QAP. In accordance with HERA changes, it added the historic nature of buildings and energy efficiency as criteria for awarding LIHTCs, updated policies on the availability of LIHTC projects for general public use, and inserted new policies on the use of the 9 percent floor. The Massachusetts HFA incorporated the increase in per capita allocations as well as the 9 percent floor into its 2008 QAP. In addition, some of the states we reviewed published technical information to help program stakeholders comply with HERA program changes. For example, as they had done in previous years, California HFA officials sent a memorandum to LIHTC project owners and applicants in December 2011 on revised rent and income limits the HFA had published, using information from HUD. HUD maintains a database of LIHTC-funded projects, which was last updated in July 2012, but the information it contains is incomplete. Although HUD has almost no direct administrative responsibility for the LIHTC program, as the federal government’s lead housing agency, it has been voluntarily collecting information on the program since 1996 because of the importance of these credits as a source of funding for low-income housing. HUD’s LIHTC Database, the largest source of federal information on the LIHTC program, aggregates project-level data that are voluntarily submitted by HFAs. HUD contracts with a consulting firm to help compile the database, which is updated annually and is available to the public on HUD’s website. Additionally, HUD sponsors studies of the LIHTC program that use these data. IRS, which jointly administers the program with HFAs, collects limited data that it needs to carry out its mission of administering and enforcing the internal revenue laws. It does not maintain the information needed to assess a housing production program, such as the types of tenants targeted and whether projects are in urban or rural areas. HUD’s LIHTC Database does not capture all LIHTC projects placed in service, for three main reasons. First, although most HFAs voluntarily report LIHTC project data to HUD each year, some do not report consistently. Forty-two of 56 HFAs submitted project data to HUD for each year from 2006 through 2010. In 2010, these 42 HFAs received about 89 percent of all per capita LIHTC allocations. Of the remaining 14 HFAs, 2 did not report projects in any of the 5 years, while 12 did not report each year, but did report for at least 2 of the years.
For these 12, all of the nonreporting was for 2008 through 2010 (the most recent reporting year), a period in which some HFAs were struggling to comply with a HERA requirement that they collect data on tenant characteristics (e.g., race and income) for LIHTC projects, according to HUD and NCSHA officials. The HERA provision containing this requirement authorized $6.1 million for fiscal years 2009 through 2013 for HUD to, among other things, provide technical assistance to HFAs and compile the tenant data, but HUD never received any appropriations for these tasks. HUD is working to fulfill the requirement with existing resources. For example, HUD streamlined the project and tenant data collections by merging the two efforts. It also required HFAs to submit data in a standardized electronic format via a secure web portal. According to HUD, this change is significant, as the prior data collection process involved a HUD contractor that contacted each HFA and then standardized the collected data, which HFAs often maintained in different formats. HUD said that although some HFAs would need several years to make the transition, the new system was the most cost-effective long-term solution. HUD also said it recognized the problem of underreporting in recent years but that until the transition to the new data collection method was completed, its options were to either knowingly underreport properties placed in service or not release any data for those years. Second, in recent years, HUD has not identified or followed up on cases in which HFAs reported a substantially lower number of projects than in past years, although such information could potentially be incomplete. For example, HUD’s database showed that one state had between 23 and 49 projects placed in service each year from 2006 through 2009, but only 2 projects in 2010. When we followed up with the HFA in this state, HFA officials provided us with documentation showing that they had reported 37 projects for 2010. Similarly, HUD’s database showed that another state had 2 projects placed in service in 2008, compared with 90 or more in each of the 2 previous years. An official from this state’s HFA told us that the actual number for 2008 was 96 properties. We provided HUD with these and other examples for their review. According to a HUD official, before 2008 its contractor followed up with HFAs on these types of data anomalies but now places less emphasis on this function because of resource limitations and the HERA requirement for tenant data. Instead, the contractor now focuses on assisting HFAs with meeting the tenant data requirement and follows up only with HFAs that do not report any project data at all. Third, at the time they reported to HUD, HFAs may not have had information on all projects placed in service. Specifically, HFA officials said that delays between the date when a project was placed in service, the date a project owner reported it to the HFA, and the date the HFA recorded it in its information system could result in underreporting of projects. HUD instructs states to review the property information previously submitted and include information for these omitted properties. As a result, these omissions may be corrected in subsequent data submissions. Even when HUD did receive project data, much of it was incomplete, omitting information on project characteristics such as the type of location, construction, and tenants targeted. 
The proportion of missing information on project characteristics increased after 2007 (see table 3). For example, the proportion of missing information on the types of tenants targeted increased from 5 percent in 2006 to 28 percent in 2010. A HUD official noted that the HERA provision requiring HFAs to collect data on the characteristics of tenants in LIHTC projects had made it more challenging for HFAs to also report the project data with existing resources. In addition, a HUD official explained that across HFAs, different offices maintain tenant-level and project-level data. He said that HUD’s data request was often completed by the offices with the tenant data, which might not have detailed project information. The official added that he had emphasized the need for HFAs to direct HUD’s request for project data to the appropriate office in presentations to an HFA association and in communications with individual HFAs. However, according to HUD, resource limitations have prevented HUD and its contractor from performing thorough follow up with HFAs about missing information on project characteristics. Having complete data on the LIHTC program is important because of the program’s significance to overall federal efforts to meet the nation’s affordable housing needs. As previously noted, the LIHTC program is the largest subsidy program for constructing and rehabilitating low-income rental housing. Additionally, the program is used in conjunction with other federal housing programs, including HUD’s programs. For example, some LIHTC projects receive grants through HUD’s HOME Investment Partnership program and have mortgages that are insured by HUD’s Federal Housing Administration. HUD’s LIHTC Database is the federal government’s main source of information on LIHTC projects, and HUD and others have used data from 2007 and earlier—prior to some of the challenges discussed previously—to conduct research on the LIHTC program. For example, one study HUD sponsored examined the geographic distribution of LIHTC projects to assess whether program rules contribute to clustering of subsidized housing in central city and high-poverty areas. Another HUD-sponsored study examined whether LIHTC projects continue to provide affordable housing after the 15-year period in which they are required to do so. In addition, the Rental Policy Working Group established by the White House’s Domestic Policy Council has used the data to examine the potential for harmonizing and streamlining property inspection requirements for rental properties with multiple sources of federal funding, including LIHTCs. However, as we have seen, a number of challenges faced by HUD and HFAs have adversely affected the completeness of HUD’s database. Without more complete data on the number, location, and characteristics of LIHTC projects, the federal government’s ability to continue evaluating program outcomes and overall federal efforts to provide affordable housing is limited. According to HUD data as of July 2012, the 42 HFAs that submitted information for each year from 2006 through 2010 reported that more than 5,300 LIHTC projects were placed in service over the 5-year period (see table 4). In total, these projects used more than $3 billion in LIHTCs and contained more than 421,000 living units. The reported number of projects and units placed in service declined over the 5-year period, particularly after 2008; however, the lack of complete project data, as discussed previously, prevents a reliable analysis of actual program trends. 
Although data at the national level are limited, information from the nine HFAs we contacted provides some insight into changes in the number of projects placed in service after HERA was enacted in 2008. Six of the nine HFAs indicated that the number of projects declined substantially between 2008 and 2009, while the other three experienced either modest or no declines. For example, California HFA officials said they had 203 projects placed in service in 2008, compared with 140 in 2009. In contrast, Massachusetts HFA officials said they had 21 projects placed in service in both years. Of the six HFAs that had substantial declines, three continued to see decreases in 2010, while the remainder experienced modest to large increases that year. While some LIHTC projects in HUD’s database lack information on location type, the data do indicate that the majority of LIHTC projects placed in service from 2006 through 2010 were located in metropolitan central and noncentral cities (e.g., suburbs). For each of these years, at least 69 percent of reported projects were in metropolitan areas, but given the proportion of projects with missing information on location type, trends in this characteristic cannot be precisely determined (see table 5). According to HUD data, the majority of reported LIHTC projects placed in service from 2006 through 2010 were newly constructed (see table 6). However, the amount of missing data on construction type after 2007 makes it impossible to draw accurate conclusions on potential changes in the proportion of projects that were newly constructed and those that were acquisition and rehabilitation projects. According to data reported to HUD, the most common types of tenants targeted by LIHTC projects in 2006 and 2007 were families and elderly tenants (see table 7). However, as previously noted, the proportion of projects in HUD’s database with missing information on tenant types increased substantially after 2007. As a result, any reported changes in types of tenants targeted are not definitive. In addition, HUD officials told us that HFAs may have used different criteria for determining whether a project was targeted to particular groups of tenants, potentially resulting in inconsistencies across HFAs. State and industry officials we spoke with said that isolating the effect of the HERA changes on the overall LIHTC market was difficult because of other program changes (e.g., creation of the Exchange Program) and economic developments (e.g., the recession and financial crisis) that occurred around the same time. Nonetheless, these officials identified specific LIHTC projects that they said would not have been completed without certain HERA provisions. In particular, they cited the temporary increase in per capita credit allocations, the temporary 9 percent floor, and the HERA basis boost as three provisions that helped the financial feasibility of some projects and likely prevented even further decreases in LIHTC projects after 2008. In addition, stakeholders said HERA changes particularly helped the financial feasibility of rural projects. Because of HERA’s temporary increase in per capita credit allocations, HFAs received tens of millions of dollars more in allocations in 2008 and 2009 than they would have otherwise. By statute, LIHTC allocation amounts are adjusted for inflation each calendar year, but for calendar years 2008 and 2009 only, HERA further increased allocations to each HFA.
Adjusted for inflation, the per capita allocation in 2008 would have been $2.00, but HERA increased the amount to $2.20 that year and to $2.30 in 2009. The minimum allocation for small HFAs was increased to $2,555,000 in 2008 and $2,665,000 in 2009. Without HERA, HFAs would have received $61,836,050 less in per capita credits than they did in 2008 and $62,408,937 less in 2009. For 2010, LIHTC allocations returned to the path that would have been in place if HERA had not been enacted (see table 8). Some state officials we spoke with said that they allocated the additional credits to projects already under development and to new projects. For example, HFA officials in Michigan and Oregon told us that they used the additional credits to both fill funding gaps for projects that had previously received LIHTC allocations and to fund one or two additional projects in their states. Massachusetts HFA officials told us that they used the additional credits to finish projects that were in danger of not being completed because of drops in prices that investors were willing to pay for LIHTCs. Although HFAs received additional credits in 2008 and 2009, developers also returned more unused credits to HFAs in these years. According to data from NCSHA, the total amount of credits developers returned to HFAs increased substantially in 2008 and 2009. The amount of returned credits in 2009 was more than 6 times the amount in 2006 (see table 9). An NCSHA official explained that developers returned credits for several reasons. For example, the NCSHA official noted that in 2008, developers had trouble finding LIHTC investors, resulting in a higher-than-normal amount returned to the HFAs. Also, in 2009, the Recovery Act’s Exchange Program allowed HFAs to exchange returned credits for cash grants, resulting in a very high amount of returns that year. For 2009, the amount of returned credits included those that were returned and exchanged, as well as those returned and possibly reallocated to other developers. According to the NCSHA official, virtually all of the returned credits that were not exchanged were reallocated either the same year or the following year. Some state housing officials and industry stakeholders said that HERA’s temporary floor for the 9 percent credit helped the financial feasibility of individual projects. Owing to the floating credit rate prior to HERA, developers that received the 9 percent credit actually received a credit approximating 8 percent. By setting a floor of 9 percent for projects placed in service by the end of 2013, HERA increased the amount of credits these projects could receive. For example, if a pre-HERA project had an eligible basis of $1,000,000 and the floating rate for the 9 percent credit was 8 percent, that project would be eligible to receive $800,000 in credits ($80,000 per year for 10 years). In contrast, by setting a floor of 9 percent for the 9 percent credit, that same project would be eligible for $900,000 in credits ($90,000 per year for 10 years). Also, as previously noted, the HERA basis boost provision gave HFAs the ability to designate any building, regardless of location, as eligible for an enhanced credit of up to 130 percent of the building’s eligible basis rather than just those in a DDA or a QCT. One developer told us that every LIHTC project he had completed since the passage of HERA used the HERA basis boost, and that it and the 9 percent floor together had made a significant difference in his ability to complete projects. 
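The arithmetic behind these two provisions can be summarized in a short sketch. It restates the hypothetical $1,000,000 example above and adds an assumed basis boost designation; it assumes the entire building qualifies (an applicable fraction of 1) and is an illustration, not a model of any actual project.

```python
# Sketch of how the 9 percent floor and the HERA basis boost change the credits a
# project can receive. Figures mirror the hypothetical example described above.

def total_credits(eligible_basis, rate, boosted=False, years=10):
    """Total credits over the 10-year credit period; the basis boost raises
    eligible basis to 130 percent if the HFA designates the building for it."""
    basis = eligible_basis * 1.30 if boosted else eligible_basis
    return basis * rate * years

eligible_basis = 1_000_000

pre_hera = total_credits(eligible_basis, 0.08)               # floating rate near 8% -> $800,000
with_floor = total_credits(eligible_basis, 0.09)             # 9 percent floor -> $900,000
floor_and_boost = total_credits(eligible_basis, 0.09, True)  # floor plus 130% boost -> $1,170,000
print(pre_hera, with_floor, floor_and_boost)
```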
The same developer cited a project in which these two provisions reduced a funding gap from $1,680,000 to $450,000, which the developer was able to close by other means. Another LIHTC developer noted that the 9 percent floor allowed LIHTC deals to be engineered with fewer funding sources and that in many cases such deals would not have been completed without this provision. In addition, North Carolina HFA officials told us that some projects had received tax credit awards in 2007 and 2008, but had funding gaps when the tax credit market collapsed and prices for tax credits fell before developers could secure equity from investors. For these projects, the HFA allowed developers to return their allocated credits and receive new credits with the 9 percent rate and the HERA basis boost, thus filling the funding gaps. According to the North Carolina officials, these HERA provisions helped in completing a total of 46 projects that likely would not otherwise have been completed. In addition, HFA officials in Oregon and Michigan noted that they used the HERA basis boost for permanent supportive housing—long-term housing projects with supportive services for homeless persons with disabilities or other barriers—which have lower income tenants. Similarly, HFA officials in Florida said that the HERA basis boost helped fund three projects that will be placed in service in either 2012 or 2013 for tenants who were homeless and had lower incomes. According to the officials, such projects are typically difficult to develop because project cash flows are limited, as tenants may not have any income when they move in. HFA officials in Minnesota said that without the 9 percent floor, it would have been difficult to fund projects serving the long-term homeless, those with special needs, and those with lower incomes. According to state housing officials and industry participants, certain HERA provisions helped mitigate some of the challenges associated with developing projects in rural areas. For example, the maximum amount of rent a project owner can charge is based on the area’s income limits. According to officials from the Council for Affordable and Rural Housing, because rural areas often have lower income limits compared with urban areas, rural projects also often have lower cash flows from rents. They noted that the HERA provision that allowed projects in rural areas to base tenant income limits on the greater of the area median gross income or the national nonmetropolitan median gross income was one of the most significant HERA provisions for rural housing. In cases where the national nonmetropolitan measure is greater than the local area measure, project owners can set higher rent levels than they would have prior to HERA. This flexibility, in turn, can give project owners access to a broader pool of qualified tenants and increase cash flows from rent, potentially making the projects more attractive to investors. Additionally, according to some industry stakeholders, investor demand for LIHTCs is often weaker in rural areas than in urban areas in part because rural LIHTC projects tend to be smaller in scale. As a result, fixed transaction costs are spread over fewer units, and a few vacancies can have a relatively greater impact on the viability of a small project. Some state officials told us they applied the HERA basis boost to rural areas to help strengthen the financial viability of projects in these locations.
Michigan HFA officials, for example, said they applied the HERA basis boost to rural areas because rural projects would not have been desirable to investors without it. The LIHTC program is the largest federal program for building and rehabilitating affordable rental housing and provides billions of dollars in tax credits each year. Through HERA, Congress made a number of changes to the program and sought analysis of credit allocations made before and after the act’s implementation. However, limitations in available program data hamper this type of analysis and potentially other research that could be useful to policymakers. HUD is not required to collect data on LIHTC projects and has very limited administrative responsibility for the program, but it has collected some information from HFAs for many years. We commend HUD for taking steps as the lead federal housing agency to collect and disseminate project information. This information has been used to examine important issues, such as the extent to which subsidized housing remains affordable over the long term and the potential for harmonizing requirements across federal housing programs. In recent years, however, the completeness of HUD’s LIHTC Database has declined, due partly to resource constraints and challenges HUD and HFAs face in meeting new requirements for compiling information on tenants in LIHTC projects. In addition, HUD and its contractor have not followed up on data anomalies that could indicate incomplete reporting. Our work suggests that HUD’s database may be missing many projects that could be captured through additional follow-up efforts. Without improvements in the database, the federal government’s ability to evaluate basic program outcomes—such as how much housing was produced—and other aspects of federal housing policy may suffer. HUD has taken steps to improve its data collection process, but it continues to face resource constraints. However, the importance of the LIHTC program to federal housing policy underscores the need for continued attention to data quality and completeness. Therefore, we recommend that the Secretary of Housing and Urban Development (1) evaluate options for improving the completeness of HUD’s LIHTC Database, including following up on data anomalies and enhancing the role of HUD’s contractor in data collection and quality control; and (2) based on this evaluation, take additional steps to improve the data. We provided a draft of this report to HUD, IRS, and Treasury for their review and comment. We received written comments from HUD’s Acting Assistant Secretary for Policy Development and Research that are reprinted in appendix III. We also received technical comments from IRS and Treasury, which we incorporated into the final report where appropriate. In its written comments, HUD agreed with our conclusions and recommendations but expressed concerns about the draft report’s characterization of HUD’s LIHTC Database and data collection efforts. HUD said that our draft report did not adequately explain either the transition HUD was experiencing in its data collection or changes it had made to the collection process. HUD noted, as did our draft report, that while HERA required the agency to compile data on tenants in LIHTC units and authorized $6.1 million for this purpose, Congress did not appropriate these funds.
HUD stated that to more cost-effectively collect both the tenant and property data, it merged the two efforts and required HFAs to submit all of the data through a secure web portal in a standardized electronic format. HUD said that it understood that this requirement would entail a multiyear transition for some HFAs, but also noted that in the long run this solution was the most cost-effective way to collect the information. Additionally, HUD said it recognized that its database had suffered from underreporting in recent years but said that until the transition to the new data collection method was completed, its options were either to knowingly underreport properties placed in service or to not release any data for those years. In response to HUD’s comments, we added language to the final report clarifying the connection between resource constraints for the implementation of the tenant data requirement and the completeness of the project data. We also added language describing how HUD had modified its data collection process and its rationale for reporting incomplete data rather than no data. HUD also expressed concern about our use of the word “inaccurate” to describe potential shortcomings in some of the information in the LIHTC Database. HUD said that it would never publicly release information that it thought might be inaccurate and suggested that we substitute “incomplete” for “inaccurate.” Our draft report generally used the word “incomplete” to characterize the information in the LIHTC Database but in three places used the phrase “potentially inaccurate information” to describe cases in which the LIHTC Database showed substantially fewer projects for an HFA than the number we obtained from the HFA directly. We agree that “incomplete” is a more appropriate term and revised the final report to use that word throughout. We are sending copies of this report to interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and the Secretary of Housing and Urban Development. This report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact us at (202) 512-8678 or garciadiazd@gao.gov, or (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This report discusses (1) how the Internal Revenue Service (IRS) and selected housing finance agencies (HFA) implemented the Housing and Economic Recovery Act of 2008 (HERA) changes to the Low-Income Housing Tax Credit (LIHTC) program, (2) what the Department of Housing and Urban Development’s (HUD) data on LIHTC projects show about the number and characteristics of projects completed from 2006 through 2010 and any data limitations, and (3) the views of program stakeholders about the effects of the HERA changes on these projects. To assess how IRS and selected HFAs implemented HERA changes to the LIHTC program, we reviewed IRS guidance, memorandums, and planning documents. We also interviewed IRS and Department of the Treasury officials. In addition, we interviewed officials from nine HFAs: California, Florida, Massachusetts, Michigan, Minnesota, North Carolina, Oregon, Texas, and Vermont. We selected these HFAs to cover different regions of the country and amounts of tax credit allocations. 
The selected states are not representative of the entire LIHTC market. For the selected HFAs, we reviewed qualified allocation plans (QAP) that contained detailed selection criteria and application requirements for LIHTCs. To further learn how HERA changes were implemented, we interviewed other industry stakeholders, such as industry associations, investors, syndicators, and housing developers. To examine HUD’s data on LIHTC projects and what these data show about the number and characteristics of LIHTC projects completed from 2006 through 2010, we analyzed information from HUD’s LIHTC Database. HUD collects these data from HFAs and maintains information on LIHTC-financed projects once they are placed in service. We conducted reasonableness checks on the data to identify any missing, erroneous, or outlying figures. We also asked the nine HFAs previously mentioned to check HUD’s numbers of projects placed in service from 2006 through 2010 against their own records, and interviewed HUD officials about how the agency and its contractor compiled the data. As discussed in the body of this report, we found that HUD’s data may not contain all LIHTC projects placed in service as of 2010 for several reasons, including (1) challenges states face in implementing new requirements for reporting tenant data and (2) delays between when a project is placed in service and when that information is entered into the state’s data system and reported to HUD. As a result, the number of reported projects placed in service as of 2010 may be understated. We also found that a substantial proportion of projects in the database had missing values for key project characteristics. For this reason, changes in the reported number and characteristics of projects over time should be interpreted with caution. While we acknowledge these limitations, we chose to present the LIHTC data as reported by HUD because they provided the broadest coverage of LIHTC projects placed in service through 2010. We concluded that the data elements we used were sufficiently reliable for describing limitations of the data and presenting the project information HUD had compiled as of July 2012. For each year, we totaled the number of projects placed in service. Due to the limitations of HUD’s data, we supplemented this analysis by examining information from the nine HFAs we contacted to identify any state-level trends. Using the HUD data, we calculated the proportion of projects with certain characteristics, including location type (metropolitan/central city, metropolitan noncentral city, nonmetropolitan), construction type (new construction, acquisition/rehabilitation, both new construction and acquisition/rehabilitation), and the type of tenants targeted (elderly, family, disabled, homeless, other). In addition, because HERA increased the amount of credits allocated to states in 2008 and 2009, we analyzed trends in annual LIHTC allocations from 2006 through 2010 using data collected by the National Council of State Housing Agencies (NCSHA). In order to assess the reliability of the NCSHA data we analyzed, we reviewed documentation and interviewed NCSHA officials about their methods for collecting and reporting the data. We concluded that the NCSHA data were sufficiently reliable for our purposes. To obtain the views of selected HFAs and industry participants about the effects of the HERA changes on LIHTC projects, we interviewed officials from the HFAs and industry stakeholders noted previously.
We obtained their views on which HERA changes were most significant, the extent to which the HERA changes helped complete projects that otherwise would not have been feasible, and the extent to which the HERA changes affected the characteristics of projects that received LIHTC allocations. In addition, we reviewed documentation on projects that industry stakeholders said had been affected by the changes.

We conducted this performance audit from February through December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Table 10 summarizes the changes related to the LIHTC program made in the Multi-Family Housing subtitle of HERA.

In addition to the contacts named above, Steve Westley and Joanna Stamatiades (Assistant Directors), Emily Chalmers, William Chatlos, Lois Hanshaw, Lawrence Korb, May Lee, John McGrail, Marc Molino, Edward Nannenhorn, Winnie Tsen, and Jason Wildhagen made important contributions to this report.

IRS and state HFAs administer the LIHTC program, the largest source of federal assistance for developing affordable rental housing. HFAs are allocated tax credits on a per capita basis and award them to developers. By acquiring project equity from developers, investors may become eligible for the credits, which offset federal tax liabilities. As part of HERA, Congress made changes to the program that included increasing credits allocated to states, setting a temporary floor on the most common LIHTC rate (the portion of eligible project costs for which a developer can receive credits), and giving HFAs more discretion in "enhancing" (i.e., increasing) awards. HERA also required GAO to study the changes, including the distribution of credit allocations before and after HERA. This report discusses (1) how IRS and selected HFAs implemented the HERA changes, (2) what HUD's data show about the number and characteristics of projects completed from 2006 through 2010 and any data limitations, and (3) stakeholders' views on the effects of the HERA changes on LIHTC projects. GAO reviewed IRS and state guidelines, analyzed HUD data on LIHTC projects, and spoke with federal, state, and industry officials. Federal and state agencies implemented changes made in 2008 to the Low-Income Housing Tax Credit (LIHTC) program by revising program guidance and modifying plans for allocating tax credits. The Internal Revenue Service (IRS) implemented the changes made by the Housing and Economic Recovery Act of 2008 (HERA) by, among other things, issuing notices and revenue procedures. Program stakeholders that GAO contacted said that IRS's actions were generally sufficient. But as of October 2012, IRS and the Department of the Treasury were still working on implementation issues, such as developing guidance on the provision designed to ease restrictions on using tax credits to acquire existing federally or state-assisted buildings. At the state level, housing finance agencies (HFA) implemented the HERA changes by modifying their tax credit allocation plans, which provide criteria for awarding credits. For example, in their plans, some HFAs cited financial need as the only criterion for awarding HERA-created enhanced credits.
Others planned to target specific types of projects, such as those using "green building" practices. The Department of Housing and Urban Development (HUD) voluntarily compiles the largest public database on LIHTC projects, but the data it collects from HFAs are incomplete. Despite HUD efforts to improve its data collection process, the database may undercount projects, in part because HUD did not follow up on potentially incomplete information. For example, HUD's database showed that one state had between 23 and 49 completed projects each year from 2006 through 2009, but only 2 projects in 2010. However, officials from this state's HFA provided GAO with documentation showing that they had reported 37 projects for 2010. Further, much of the project data that HUD has received does not include characteristics such as the type of location, construction, and tenants targeted. A HUD official noted that a HERA provision requiring states to collect tenant-level data (e.g., race and income) had made collecting project data more challenging because HUD did not receive additional resources and available resources had to be divided between tenant and project data collection. Without more complete data on the LIHTC program, the federal government's ability to evaluate basic program outcomes—such as how much housing was produced—and overall federal efforts to provide affordable housing may suffer. Data from 42 HFAs that reported each year from 2006 through 2010 provide limited insight into the actual number and characteristics of LIHTC projects. The number of reported projects completed exceeded 5,300, and most were in metropolitan areas and were new construction. However, missing data prevented analysis of trends over the 5-year period. For example, the proportion of missing information on the types of tenants targeted increased from 5 percent in 2006 to 28 percent in 2010. Program stakeholders told GAO that the broad effects of the HERA provisions on the LIHTC market were difficult to determine but noted that certain provisions enhanced the financial feasibility of some individual projects. For example, stakeholders said the temporary increase in per capita credit allocations, temporary credit rate floor, and discretion to use enhanced credits improved the financial viability of some projects by allowing states to award more credits per project. Some state officials also said that the larger awards especially benefited projects in rural areas that can be difficult to finance because they tend to have lower rents and are less attractive to investors than projects in urban areas. GAO recommends that HUD evaluate and implement additional steps to improve its LIHTC Database. HUD agreed with the recommendation but said the report could better describe the agency's efforts to improve data collection despite resource constraints. In response, GAO added further information on HUD's changes to its collection process.
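The follow-up on reporting anomalies discussed above could be as simple as flagging any agency whose latest reported project count falls far below its own recent history. The sketch below uses hypothetical counts and an arbitrary threshold; it is not HUD's or its contractor's actual procedure.

```python
# Illustrative sketch of flagging possible underreporting in a project database.
# Data and threshold are hypothetical; this is not an actual HUD procedure.

projects_by_year = {
    # hypothetical counts of projects placed in service, by HFA and year
    "HFA A": {2006: 23, 2007: 41, 2008: 49, 2009: 30, 2010: 2},
    "HFA B": {2006: 15, 2007: 18, 2008: 14, 2009: 16, 2010: 15},
}

def flag_possible_underreporting(history, latest_year, threshold=0.5):
    """Flag an HFA if its latest count is below half its prior-year average."""
    prior = [count for year, count in history.items() if year < latest_year]
    baseline = sum(prior) / len(prior)
    return history[latest_year] < baseline * threshold

for hfa, history in projects_by_year.items():
    if flag_possible_underreporting(history, 2010):
        print(f"{hfa}: 2010 count looks anomalous; follow up with the agency")
```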
Given the large influx of Recovery Act funds that LEAs and IHEs are receiving, the Administration has stated its intention to ensure that federal agencies provide information to the public that is transparent and useful. Further, the Act contains numerous provisions to increase transparency and accountability. For example, under section 1512 of the Act, recipients of funds are required to report certain information quarterly. In addition, the Act created the Recovery Board and required it to establish and maintain a user-friendly, publicly available Web site (Recovery.gov) to foster greater accountability and transparency in the use of Recovery Act funds. The Act directs that the Web site function as a gateway to key information relating to the Recovery Act and provide links to other government Web sites with related information. The information that is provided by recipients in accordance with the reporting requirements under section 1512 is made available to the public on Recovery.gov. The Act created broad requirements for recipient reporting. Specifically, the Act requires, among other types of information, that recipients report the total amount of Recovery Act funds received, associated obligations and expenditures, and a detailed list of the projects or activities supported by Recovery Act funds. For each project or activity, the detailed list must include the name and description of the project or activity, an evaluation of its completion status, and an estimate of the number of jobs created and the number of jobs retained through that project or activity. The prime recipient, which for these education programs is the state, is responsible for the reporting of all data required by section 1512 of the Recovery Act. To implement recipient reporting requirements, OMB worked with the Recovery Board to deploy a nationwide system for collecting data submitted by the recipients of funds. One of the functions of the Recovery Board was to establish a Web site and to publish a variety of data, including recipient data once it has been reviewed by the relevant federal agencies. These data, collected through www.FederalReporting.gov, are made available to the public for viewing and downloading on www.Recovery.gov. The Recovery Act set a demanding schedule for implementing Recovery.gov, requiring the Recovery Board to establish the Web site within 30 days of the law’s enactment. The Recovery Board’s goals for this Web site are to promote accountability by providing a platform to analyze Recovery Act data and to serve as a means of detecting fraud, waste, and abuse by providing the public with accurate, user-friendly information. Recipients are required to submit their section 1512 reports within 10 days of the end of each quarter. Federal agencies then review the reports for significant errors and missing information, and as required by law, make them available on Recovery.gov within 30 days of the end of each quarter. For the programs discussed in this report, information was submitted by recipients for the quarter ending March 31, 2010 and posted on Recovery.gov on April 30, 2010. OMB provided recipients guidance through memorandums, supplemental materials, and reporting instructions. 
Specifically, starting for the period ending September 30, 2009 (and repeated for the quarters ending December 31, and March 31), OMB’s reporting instructions specified that recipients must provide, among other things, the project name, which should be brief and descriptive; a project description that captures the overall purpose of the award, quarterly activities, and expected outputs and outcomes or results; an award description that describes the overall purpose, expected outputs, and outcomes or results of the award, including significant deliverables and, if appropriate, units of measure; a jobs created description that captures the types of jobs created or the project status, which was specified as not started, less than 50 percent complete, completed 50 percent or more, or complete; an activity description, which categorizes projects and activities; the amount of the award; and the primary place of performance, which is the physical location of award activities. Four of these fields—project name, description of jobs created, quarterly activities/project description, and award description—are narrative fields. In its December 2009 guidance to heads of executive departments and agencies, OMB stated that the narrative information must be sufficiently clear to facilitate the general public’s understanding of how Recovery Act funds are being used. As we reported in our May 2010 transparency report, OMB provided guidance that required general information that could be applied broadly across a wide range of recipients. OMB defined narrative fields to solicit high-level information that is not specific to a single program. OMB officials also told us the agency created generic reporting guidance that would provide basic guidance for recipient reporting and that individual agencies could provide supplemental guidance—that was more specific to their program characteristics—if the agency considered additional guidance necessary. Detailed information on how subrecipients are spending their Recovery Act funds is limited, in part because data collection for Recovery.gov, through FederalReporting.gov, does not provide specific narrative fields for collecting information on how each subrecipient is using its funds. Because OMB and Education guidance instructs prime recipients to include information about subrecipients in the information they report on FederalReporting.gov, a state is required to report information that captures the overall purpose of the award, including how subrecipients have used the funds. Information required about each subrecipient is limited to basic information, such as award amounts and place of performance. Our May 2010 report notes that this practice is not consistent with the requirement in the Act to report a detailed list of all projects and activities, each having its own name, description, completion status, and potential outcomes. In addition, we reported that requiring information on status, outcomes, or other items without information on subrecipient activities may convey an incomplete impression of how funds are being used. Furthermore, FederalReporting.gov restricts the amount of information prime recipients can report. Prime recipients are allowed to input up to 4,000 characters for each narrative field. While this limitation may not affect grants that provide funds for limited projects and activities, some states have thousands of subrecipients for each of these three education grants. For example, California’s SFSF grant has over 1,500 subrecipients. 
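To illustrate the constraint, the sketch below treats a narrative field as a single 4,000-character text block (hypothetical handling, not the actual FederalReporting.gov schema) and shows why a state with well over a thousand subrecipients cannot enumerate each one within one field.

```python
# Rough illustration of the narrative-field constraint described above.
# The 4,000-character cap is the one noted in the text; the field handling and
# example summaries are hypothetical, not the FederalReporting.gov schema.

NARRATIVE_CHAR_LIMIT = 4_000

def fits_in_field(subrecipient_summaries):
    """Check whether one-line summaries for every subrecipient would fit in a
    single 4,000-character narrative field."""
    text = "; ".join(subrecipient_summaries)
    return len(text) <= NARRATIVE_CHAR_LIMIT, len(text)

# A state with 1,500 subrecipients (as with California's SFSF grant) and even a
# short summary per subrecipient far exceeds the limit.
summaries = [f"LEA {i}: retained teachers and purchased instructional materials"
             for i in range(1500)]
print(fits_in_field(summaries))   # (False, 99388) under these assumptions
```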
Providing detailed information on how each subrecipient is using the funds within the character limitation would be extremely challenging, if not impossible for some states. Because of these complexities, OMB officials allowed individual federal agencies to provide program-specific guidance that was tailored to the awards made under their programs, if the agency determined such guidance was necessary. They noted, however, that while information on subrecipient activities and fund uses may not be specifically included on Recovery.gov, the information included in the prime recipient reports should, as a whole, represent the entire grant, including subrecipient information. They told us that they will continue to evaluate and update guidance on Recovery Act reporting requirements, but that they do not have plans to require more information on subrecipients. The officials emphasized the need to balance transparency with the burden of recipient reporting. Education developed guidance and tip sheets with suggested text for recipients to use when reporting on Recovery Act funds. Education officials reported they provided this information to recipients to ease the burden of Recovery Act reporting. For example, each prime recipient is required to submit information each quarter for over 60 data elements for each Recovery Act grant it receives. Since the Act funded multiple formula grants to states, many were required to submit as many as nine reports totaling up to approximately 540 required data elements. Several state officials told us that including subrecipient information in their reporting required additional resources and time. For example, Colorado officials told us that summarizing information from nearly 300 separate subrecipient reports was their biggest challenge in compiling and reporting on the data required by section 1512. In its tip sheets, Education provided suggested standard language that recipients could use when reporting on three of the four narrative fields. Education officials told us they provided the language for the project name, award description, and quarterly activities/project description fields in order to balance the reporting burden with transparency by providing information on the grants without requiring each recipient to develop its own information. The only narrative field without this language was the description of jobs created. Officials told us that information in the description of jobs created field needed to be individualized for each grant and therefore standard language would not be appropriate for that field. Education’s guidance and tip sheets—including the suggested standard language—were reviewed and approved by OMB. For each of the three programs we reviewed, the standard language for two of the narrative fields in the recipient reports—award description and quarterly activities/project description—is worded almost exactly the same. By using the suggested text for both the award description and quarterly activities/project description narrative fields, recipients duplicate the generic information and lose an opportunity to provide information on how they are using their grant funds. 
For example, Education’s tip sheet for IDEA Part B instructs recipients to enter “Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA.” While this information does provide the public with a general description of whom the program serves (children with disabilities) and its purpose (providing special education and related services), it does not provide information on what specific activities or programs are being funded by the grant. Therefore, when states use the standard language, the public cannot discern whether the grant is paying for additional teachers, textbooks, installation of wheelchair-accessible ramps, a tutoring program, professional development, technology purchases, or any of the other activities allowed by IDEA Part B. (See table 1 below for standard language.) We found that 9 percent of the awards for the three programs we reviewed were transparent—that is, they had sufficiently clear and understandable information on the award’s purpose, scope, location, award amount, nature of activities, outcomes, and status of work. We determined that 13 percent contained most, but not all, of this information. However, the majority (78 percent) of descriptions for all three programs we reviewed had limited information, which reduced the public’s ability to understand how the funds were being used, because recipients relied primarily on Education’s standard language to describe how they spent their Recovery Act funds. We also found that many states and LEAs made information on their grants available to the public through mechanisms other than Recovery.gov. A few of the descriptions (9 percent) fully met our transparency criteria because their project descriptions included information on subrecipient use of funds. To assess the extent to which descriptions of awards transparently described how funds were being used, we developed a transparency assessment based on the Recovery Act; OMB’s guidance, including OMB’s Recipient Reporting Data Model; the Federal Funding Accountability and Transparency Act of 2006; and professional judgment. (See app. VII for additional information on how we developed our transparency assessment.) Similar to our May 2010 transparency review, we identified key fields on Recovery.gov that describe the uses of Recovery Act funds, including project name, award description, and quarterly activities/project description. In addition to these fields, we reviewed the description of jobs created field, in which prime recipients were advised by Education to briefly describe the types of jobs created or retained. In December 2009, we reported in our congressionally mandated bimonthly review of Recovery Act funds that retaining and creating jobs was the primary use of funds by LEAs across the three education programs. In assessing transparency, we reviewed all available prime recipient award records on Recovery.gov as of April 30, 2010, for the three education programs covered in this review. To apply our transparency criteria to award information, we looked for information on the general purpose of the award (e.g., retaining funding for K-12 schools or programs) and the nature of activities being conducted (e.g., purchase of educational technology or training of instructional support staff) in the fields we reviewed on Recovery.gov.
We also looked for information on where award activities are being conducted, the amount awarded, the status (percentage complete), what is expected to be achieved (outcomes), and the scope (e.g., number of schools or students covered by the project). Using these seven attributes and our professional judgment, we assessed information in the selected data fields collectively for understandability, clarity, and completeness to determine whether they met our transparency criteria. We did not find any descriptions that did not include at least some of the information needed to inform the public. (See table 2.) States that were able to provide enough detailed information to fully meet our transparency criteria made few or no awards to subrecipients and/or they reported that their subrecipients used Recovery Act funds for a limited purpose, such as teacher retention. For example, Hawaii, which has only one LEA, provided information on its use of Recovery Act ESEA Title I Part A funds that was clear and included sufficient detail for the general public to understand the award’s purpose, scope, location, award amount, nature of activities, outcomes, and status of work (see table 3). Specifically, the description of the award notes that the funds were used for continued support of the state’s Extended Learning Opportunities program, which served 8,018 economically disadvantaged students across 90 campuses statewide. The state also reported on a number of outcomes from its Recovery Act ESEA Title I, Part A fund use, including student improvement over the course of the program, as well as jobs created. Thirteen percent of descriptions by states included most, but not all information needed to allow the public to understand how Recovery Act education funds were being used. For example, Kentucky’s information for its Recovery Act ESEA Title I, Part A award met all elements of our transparency criteria except for outcomes (see table 4). Specifically, Kentucky’s description reported that its LEAs primarily used these funds for job retention across a number of occupational types. However, while the purpose of these funds is clear, “to improve the teaching and learning of targeted low performing students and schools,” it is not clear what specific outcomes had resulted or were expected to result from their fund use (for example, averting staff layoffs, preventing teacher furloughs, or maintaining current class size). Most states (78 percent) only partially met our transparency criteria because their description contained much less information and met only a few attributes of our criteria. For example, Alaska’s description does not provide sufficient information on what project activities were supported and what outcomes resulted from the use of these funds to enable the public to understand how it is using Recovery Act funds (see table 5). While Alaska does provide jobs-related information in terms of the number of jobs created or retained, the information is not clear as to whether or not job creation or retention was the only or primary use of its Recovery Act SFSF education stabilization funds. We found that for all three education programs, descriptions that contained only Education’s suggested standard language were less transparent than those that entered information specific to the program and activities conducted in their states. 
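A simplified sketch of this attribute-based assessment appears below. The attribute names follow the criteria described above, but the category thresholds are illustrative, since the actual assessment also relied on professional judgment about clarity and completeness.

```python
# Simplified sketch of an attribute-based transparency rubric like the one
# described above; the mechanical thresholds are illustrative, since the actual
# assessment also involved professional judgment.

ATTRIBUTES = ("purpose", "scope", "location", "award_amount",
              "nature_of_activities", "outcomes", "status")

def rate_description(description_attributes):
    """Map the set of attributes found in an award description to a category."""
    found = sum(1 for a in ATTRIBUTES if a in description_attributes)
    if found == len(ATTRIBUTES):
        return "meets transparency criteria"
    if found >= 5:
        return "significantly meets criteria"
    return "partially meets criteria"

# Example: a description (like Kentucky's, per the text) covering everything
# but outcomes would fall in the middle category.
print(rate_description({"purpose", "scope", "location", "award_amount",
                        "nature_of_activities", "status"}))
# significantly meets criteria
```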
Education’s reporting guidance provided standard language for the quarterly activities/project description field, but it did not contain instructions or guidance for recipients to describe how funds were being used by subrecipients. For example, the suggested language for the ESEA Title I, Part A program instructed states to enter “Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards.” While the use of such language by states may facilitate the process of reporting their section 1512 data (i.e., reduce the reporting burden), it does not provide information on what funds are being spent on (e.g., professional development, technology, or testing assessments), and it provides the public with little information on how funds are being used at the local level. For example, we collected information from seven LEAs in Texas that reported they used ESEA Title I, Part A Recovery Act funds for technology purchases for at-risk students, but the information in Texas’ Recovery Act ESEA Title I, Part A project description contains only the standard language discussed above. Our May 2010 report made several recommendations to OMB with the goal of helping the public gain a better understanding of how Recovery Act funds are being spent. One of those recommendations was that OMB work with executive departments and agencies to ensure that supplemental guidance (like Education's tip sheets) provides for transparent descriptions of funded activities. OMB agreed with these recommendations and reported that it is making plans to address them. All 15 states and the District of Columbia we visited have mechanisms to provide the public with information about uses of award funds. The states reported that the information is available online through, for example, state Recovery Act Web sites or state department of education Web sites. Some states also included information on these Web sites about frequently asked funding questions, subrecipient information, and expenditures to vendors. Other states reported that they had additional mechanisms to make the public aware of their uses of award funds. For example, officials in Arizona reported that they issued press releases about uses of their SFSF education stabilization funds, and Florida officials reported that they provided information to the public during sessions of the state’s legislative committees. In addition, 14 of the 17 LEAs we visited made information available to the public on how they were using their ESEA Title I, Part A, IDEA, Part B for school aged children, and SFSF education stabilization funds. These LEAs used different ways to report this information. The most common means was through their Web sites or those of their state education agencies. Other ways included disseminating information through public meetings. For example, York, Pennsylvania, presented expenditure data at school board meetings, the District of Columbia Public Schools held parent forums about the use of Recovery Act IDEA, Part B for school aged children funds, and Springfield, Massachusetts, held a public budget presentation. In addition, some LEAs disseminated Recovery Act information through newsletters. For example, Round Rock Independent School District in Texas published a newsletter that included information on the status and implementation of its Recovery Act funds.
Education faced an extraordinary task in developing the new SFSF program and significantly expanding funding for ESEA Title I, Part A and IDEA, Part B for school aged children while at the same time trying to ensure that the information recipients report is transparent and useful to the public. The transparency and understandability of descriptions on Recovery.gov are important aspects of the Recovery Act, as they provide a key mechanism through which the public can understand how tax dollars are being spent and what is likely to be achieved from these expenditures. However, because descriptive information on how subrecipients are using the funds is not included in the quarterly activities/project description field on Recovery.gov, the public may not be able to clearly discern how Recovery Act education funding is being spent in their state. Still, Education officials noted that requiring states to report this information could impose an undue reporting burden on many states, and may be impossible for states that have high numbers of subrecipients because of the reporting field character limitations built into the recipient reporting system. Guidance on reporting requirements for Recovery Act grants that pass through a prime recipient to a subrecipient should balance the need for transparency with the reporting burden and these system limitations. However, because Education’s suggested standard language for two fields—award description and quarterly activities/project description—is exactly the same, an opportunity for greater transparency is lost if recipients use only this language. Providing more information than offered in Education’s standard language, such as an overview analysis of how localities are spending the funds and the anticipated results, could help the public gain a better understanding of how the funds are being used. In order to provide the public with more useful information on how Recovery Act funds are being used, we recommend that the Secretary of Education, in consultation with OMB, remove the standard language for one field—the quarterly activities/project description field—from its guidance and instruct states to include, to the extent possible, information on how the funds are being used and potential project outcomes or results. Education provided comments on a draft of this report by email and agreed with the information in our draft report and our recommendation. Education noted that it strongly supports efforts to improve the transparency and accountability of federal spending as exemplified by the resources it devoted to executing the reporting process under section 1512 of the Recovery Act. Education reported that it was encouraged by our finding that 100 percent of the Education descriptions we reviewed included at least some of the information needed to meet our criteria for transparency. Education noted that our report clearly describes the challenge states face in providing detailed information on the uses of funds without creating undue burden because programs are primarily executed by local educational agencies (LEAs) and because the current reporting mechanism restricts the amount of information that states can report. Education emphasized, as stated in our report, that it would be extremely burdensome and challenging, if not impossible, for many states to provide detailed information for each LEA.
Finally, Education agreed to work toward implementing our recommendation of increasing the transparency of descriptions required by recipient reporting while balancing the reporting burden on states. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Education and interested congressional committees. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 or ashbyc@gao.gov if you have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The State Fiscal Stabilization Fund (SFSF) included approximately $48.6 billion to award to states by formula and up to $5 billion to award to states as competitive grants. The Recovery Act created the SFSF in part to help state and local governments stabilize their budgets by minimizing budgetary cuts in education and other essential government services, such as public safety. Stabilization funds for education distributed under the Recovery Act must first be used to alleviate shortfalls in state support for education to local educational agencies (LEA) and public institutions of higher education (IHE). States must use 81.8 percent of their SFSF formula grant funds to support education (these funds are referred to as education stabilization funds) and must use the remaining 18.2 percent for public safety and other government services, which may include education (these funds are referred to as government services funds). After maintaining state support for education at fiscal year 2006 levels, states must use education stabilization funds to restore state funding to the greater of fiscal year 2008 or 2009 levels for state support to LEAs and public IHEs. When distributing these funds to LEAs, states must use their primary education funding formula, but they can determine how to allocate funds to public IHEs. In general, LEAs maintain broad discretion in how they can use education stabilization funds, but states have some ability to direct IHEs in how to use these funds. We assessed the transparency of descriptive information for SFSF awards available on Recovery.gov. We found that 18 percent met our transparency criteria, 12 percent significantly met our criteria, 69 percent partially met our criteria, and zero percent did not meet our criteria. Given that few descriptions met our transparency criteria we conducted a national survey of school districts to discover how they are using the funds. The information on SFSF is found in appendix IV. The following award descriptions contained sufficient information on general purpose, scope and nature of activities, location, and expected outcomes to meet our transparency criteria. The award description information is taken directly from Recovery.gov. We did not edit it in any way, such as to correct typographical or grammatical errors. EXECUTIVE OFFICE OF THE STATE OF COLORADO State Fiscal Stabalization Fund-Education Grants, Recovery Funds Education Fund - for the support of public elementary , secondary, postsecondary and, as applicable, early childhood education programs and services. 
The State of Colorado awarded public Institutes of Higher Education Stabilization dollars for Fiscal Years 2008-2009 and 2009-2010 in order to maintain the State's financial support to public education. Currently, the Institutes of Higher Education have sought reimbursement for over 50% of the currently awarded funds. As stipulated by the U.S. Department of Education, State Fiscal Stabilization Funds were primarily utilized to provide support for salaries and benefits related to the classroom and laboratory instruction, student services and administrative support within the Colorado public university system. As such, the majority of the positions covered related to the Professorial job series as well as Graduate Teaching Assistants. Other positions supported included accountants, administrative assistants, general professionals, IT support staff, as well as college and central level administrators. Place of performance (city, state, zip code) Denver, Colorado 802031792 More than 50% Completed State Fiscal Stabilization Fund-Education Fund $823661223 Education Fund-for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. No activity this quarter. Funds expended in calendar year 2009 were used to restore state funding levels for LEAs in accordance with the submitted state plan. Distributions for IHEs planned in future quarters. No funds expended this quarter. Place of performance (city, state, zip code) More than 50% Completed STATE FISCAL STABILIZATION FUND-EDUCATION GRANTS, RECOVERY FUNDS Education Fund- for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. Funds are being used to support K-12 and post-secondary education throughout the Commonwealth of Kentucky. We have established Memorandum of Agreements (MOAs) with ten sub-recipients. The sub-recipients are the Kentucky Department of Education (KDE) and the 9 public universities in KY. We expect KDE to interface with the school districts across the Commonwealth for K-12. They collect financial information and job creation data and report that to our office in the Finance & Administration Cabinet. The universities report similar data to our office. We review that data and file the required 1512 reports. As reported on the sub-recipient tab of this report, all of the sub-recipients have incurred expenses and received reimbursement through ARRA funds. Local educational agencies (LEA) primarily used the funds to retain certified and classified positions in their school districts such as: elementary, middle and high school teachers, alternative school teachers, elementary, middle, and high school counselors, nurses, elementary, middle and high school librarians, math and science teachers, curriculum coordinators, technology coordinators, clerical staff, elementary, middle and high school resources teachers, speech language pathologists, arts and humanities teachers, instructional assistants, full-day kindergarten teachers, preschool program positions, and district coordinators. The retained positions allowed LEAs to maintain the same level of staff support as from the previous year. Also, two of the nine public universities that are sub- recipients used ARRA funds to pay the salaries of some full-time faculty. 
Place of performance (city, state, zip code) Frankfort, Kentucky 406013410 More than 50% Completed TREASURY, LOUISIANA DEPARTMENT OF THE State Fiscal Stabilization Fund-Education Fund The grant is used for creating and/or retaining educational jobs and programs by supporting staff salaries for teachers, faculty, professors, professional, and support employees in higher education and public elementary secondary and postsecondary educati The grant provides support of institutes of higher education, public elementary secondary and postsecondary education, and, as applicable, early childhood education programs and services to continue educating the citizens of the state. The majority of the jobs retained and/or created are instructional jobs (teachers, faculty, and professors). Other jobs created are for pupil support, operational support, school administration, and clerical or service worker related. Retaining educational jobs during an economic downturn ensures the continued education of the youth in the state. Education is a major economic driver and vital for the success of the state's and country's economy. Place of performance (city, state, zip code) Baton Rouge, Louisiana 708025243 More than 50% Completed State Fiscal Stabilization Fund - Education Fund State Fiscal Stabilization Fund - Education Fund: For the support of public elementary, secondary, and postsecondary education and as applicable, early childhood education programs and services. Place of performance (city, state, zip code) Helena, Montana 596200801 More than 50% Completed NORTH DAKOTA, STATE OF State Fiscal Stabilization - Education Fund Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. North Dakota used all education stabilization funds to restore state support for elementary and secondary education to the FY 2009 level freeing up state funds for other one-time school district infrastructure investments. North Dakota distributed ARRA education stabilization funds through the state's school aid funding formula. All school districts agreed to apply the share of the state school aid formula funding identified as federal ARRA funds to instructional salaries. Instructional staff are hired for a 'definite term with salaries paid out of Recovery Act funds and the remaining portion with non-Recovery Act funds. Using the guidance provided in M-10-08 (Part2.5.8), the 'Number of jobs* reporting uses an alternative calculation in which an adjustment is made to the FTE number to match the appropriate percentage of Recovery funding. The 'Number of jobs* calculation is for the entire project and will be used for each reporting quarter. Place of performance (city, state, zip code) Bismarck, North Dakota 585050001 More than 50% Completed 175 STATE OF OKLAHOMA, THE State Fiscal Stabilization Fund - Education Fund. Education Fund- for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. Education Budget Stabilization - Budget stabilization grant funds were used to supplement state appropriations and other revenues used for the payment of public schools' and higher education institutions' payroll costs. Funds were used pay a portion of the monthly payrolls at numerous public schools at both the common education and higher education levels. 
Funded a portion of public schools' and higher education institutions' FTEs by offsetting a portion of the current year budget reduction. Place of performance (city, state, zip code) Oklahoma City, Oklahoma 731054801 More than 50% Completed EXECUTIVE OFFICE OF THE STATE OF UTAH State Fiscal Stabilization Fund: Education Fund Education Fund- for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. Retain 1,259.69 full-time-equivalent administrative, support and faculty positions within Utah's Higher Education System; 110.7 full-time-equivalent administrative and faculty positions within Utah's Applied Technology College; and 1,717.77 full-time-equivalent teaching positions within Utah's Local Education Agencies in order to maintain quality education programs and student support services within Utah's education system. Instructional, teaching and administrative positions for Local Education Agencies (1,717.77 FTEs), Higher Education Institutions 1,259.69 FTEs) and Applied Technology Colleges (110.7 FTEs) within the State of Utah. Place of performance (city, state, zip code) Salt Lake City, Utah 841142210 More than 50% Completed 130 EXECUTIVE OFFICE OF THE STATE OF WYOMING State Fiscal Stabilization Fund - Education Stimulus Phase 1 of the State Fiscal Stabilization Fund-Education Grant program, as amended, allocates stabilization funds to the University of Wyoming and the state�s seven community colleges. Specifically, funding will be used by IHEs for education and general expenditures, in such a way as to mitigate the need to raise tuition and fees, and for modernization, renovation or repair of facilities primarily used for instruction or research. Wyoming's amended Phase II application, which is waiting approval, reduced the amount designated for educational purposes from $67,507,805 to $57,568,071. At March 31, 2010, the remaining balance of the Education Grant funding, $10,052,126, has not been allocated by the Governor. State Fiscal Stabilization Fund - Education Grant program, as amended, provides funding to the State's IHEs: the University of Wyoming and the state's seven community colleges. Allocated funding will be expended during FY 2011. The Governor's office has finalized agreements for renovation, modernization or repair of facilities and general education operations funding which outline the special ARRA contracting provisions, reporting requirements and limitations on qualifying expenditures. The state's IHEs executed agreements for renovation, modernization or repair of facilities with the Governor on February 24th and March 3rd, 2010. It is anticipated that the IHEs' general education operations agreements will be signed by April 2010. The balance of Wyoming's SFSF - Educaton Grant funding, $10,052,126, has not been allocated by the Governor. At March 31, 2010, the State is waiting approval of the SFSF Phase II application and has not expended any portion of the education related SFSF resources. As a result, there are no activities currently funded by SFSF- Education Grant resources. Stabilization dollars will be used to fund top educational priorities for which a shortfall exists, i.e., library acquisitions and instructional excellence. Instructional excellence would cover general education costs such as support budgets and student lab equipment. A large amount of these funds will be used for removation, modernization, or repair of facilities dedicated for instruction and research. 
It is anticipated that a substantial number of jobs would be created or retained through this renovation effort. The following award descriptions contained most but not all details on one or more of the following pieces of information necessary to facilitate general understanding of the award, based on our criteria: general purpose, scope and nature of activities, location, or expected outcomes. The award description information is taken directly from Recovery.gov. We did not edit it in any way, such as to correct typographical or grammatical errors. State Fiscal Stabilization Funds - Education Grants, Recovery Funds Education Fund - For the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. Support elementary, secondary and postsecondary, and early childhood education programs; Hire and retain teachers and reduce potential layoffs; cover budget shortfalls or gaps in state's budgets and restore funding cuts to programs; improve student achievement through school improvement and reform; make progress toward rigorous college-and career-ready standards, including English Language for Learners (referred to hereafter as ELL) and Individuals with Disabilities Education Act (referred to hereafter as IDEA); establish Pre-K to College and Career Data Systems; make improvements in teacher effectiveness and equitable distribution of qualified teachers; provide intensive support and effective interventions for the lowest performing schools. Instructors/faculty, EMT Program Coordinator, Librarian, Website Coordinator, Associate Director of Planning and Research, Administrative Staff, Janitorial Staff, Coach, Security Staff, Child Development Coordinator, Principals, Certified School Personnel, School Support Personnel, and Professors. Less Than 50% Completed OFFICE OF THE GOVENOR, ARIZONA OFFICE OF ECONOMIC RECOVERY, THE State Fiscal Stabilization Fund - Education Grant Funds Education Fund – for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. In previous quarters this funding was used to create or save education jobs at K-12, Community College, and Higher Educational institutions. The timing of these disbursements are such that no payments were made during this reporting period and thus there were no programmatic activities this quarter. Further, on October 29, 2009 the State of Arizona’s amendment to the Statewide Cost Allocation Plan (SWCAP) was approved by the Department of Health & Human Services Division of Cost Allocation. The approved amendment granted the State of Arizona the ability to charge the estimated ARRA administrative costs for the period beginning February 17, 2009 through June 30, 2013. A portion of this agreement’s share of the SWCAP expenses was drawn down and expended during this quarterly reporting period and thus this activity is captured in the financial transactions in this report. Jobs and quarterly activities may appear disproportionate to the overall funds drawn down and expended due to this SWCAP reconciliation. In previous quarters this funding was used to create or save education jobs at K-12, Community College, and Higher Educational institutions. The timing of these disbursements are such that no payments were made during this reporting period and thus no jobs were created or saved this quarter. 
Further, on October 29, 2009 the State of Arizona’s amendment to the Statewide Cost Allocation Plan (SWCAP) was approved by the Department of Health & Human Services Division of Cost Allocation. The approved amendment granted the State of Arizona the ability to charge the estimated ARRA administrative costs for the period beginning February 17, 2009 through June 30, 2013. A portion of this agreement’s share of the SWCAP expenses was drawn down and expended during this quarterly reporting period and thus this activity is captured in the financial transactions in this report. Jobs and quarterly activities may appear disproportionate to the overall funds drawn down and expended due to this SWCAP reconciliation. Place of performance (city, state, zip code) Phoenix, Arizona 850072812 More than 50% Completed EXECUTIVE OFFICE OF THE STATE OF ARKANSAS State Fiscal Stabilization Fund - Education Grants Education Fund-for the support of public elementary, secondary, past secondary education, and, as applicable, early childhood education programs and services. Place of performance (city, state, zip code) Little Rock, Arkansas 722010000 Less Than 50% Completed PLANNING AND RESEARCH, GOVERNOR'S OFFICE OF State Fiscal Stabilization Fund - Education Fund SFSF-Education Fund - for the support of public elementary, secondary and postsecondary education, and, as applicable, early childhood education programs and services. SFSF - Education State Grants Recovery Act funds were provided to help stabilize State and local budgets in order to mitigate and avoid reductions in education and other essential services in exchange for a State’s commitment to advance essential education reform in four areas: (1) making improvements in teacher effectiveness and in the equitable distribution of qualified teachers for all students, particularly students who are most in need; (2) establishing pre-K-to-college-and-career data systems that track progress and foster continuous improvement; (3) making progress toward rigorous college- and career-ready standards and high-quality assessments that are valid and reliable for all students, including limited English proficient students and students with disabilities; and (4) providing targeted, intensive support and effective interventions for the lowest-performing schools. Local Education Agencies were able to use funds for activities previously authorized in various federal education acts. Possible uses of the funds may include using them to avert layoffs of teachers and other personnel; furthering education reform in the key areas of teacher quality, standards and assessments; using longitudinal data to improve instruction; and supporting struggling schools. With respect to postsecondary, the University of California used ARRA funds to retain the University's state-funded workforce responsible for core operations - teaching, research and public service. The California State University used ARRA funds to retain positions in the areas of instruction, academic support, student services, institutional support, and public services. The California Community Colleges used ARRA funds for workforce salaries and academic and operating expenses at its local college campuses. 35323.480000000003 Jobs created or retained include 3547.13 classified jobs, 16139.08 certificated jobs, 286.16 vendor jobs, and 15351.11 IHE jobs. Classified jobs include non-teaching positions such as food service, bus drivers, teacher assistants, custodians, office staff, librarians, and instructional aides. 
Certificated jobs include teaching positions. Vendor jobs represent a variety of different types of jobs. With respect to postsecondary, a total of 15351.11 FTE were funded using ARRA funds as calculated using the OMB 'definite term guidance. The positions funded at the University of California (UC) include 27.5% (an estimated 9,617.3 FTE) of the UC’s state-funded workforce responsible for core operations: teaching, research and public service. CSU used funds to retain 5,254 FTE positions in the areas of instruction, academic support, student services, institutional support, and public services. The California Community Colleges (CCC) distributed funds to its 72 local campuses to be used for campus expenses, including workforce payroll, instructional materials, and operating costs, specifically funding 479.81 FTE jobs. More than 50% Completed ADMINISTRATION, NEVADA DEPARTMENT OF State Fiscal Stabilization Fund - Education Fund Education Fund for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services Support of public postsecondary education. Includes 2 Universities, 1 State College and 4 Community colleges in Las Vegas, Reno/ Carson City and rural Nevada. Expenditures supported include salary and benefits for instructional and support positions as well as related expenses. Using the methodology outlined in M-10-08, released 12/18/2009, the Nevada System of Higher Education calculated the number of jobs retained (none were created) as 'one in which the wages or salaries are either paid for or will be reimbursed with Recovery Act funding.' (section 5.2). It should be noted that state fiscal stabilization funds account for approximately 28% of the operating budgets of 7 institutions; however the allocation of stabilization funds/ fund maps within those budgets was made July 1, 2009 and revised through the year for accounting purposes only and does not reflect Board of Regents or Legislative priorities on what positions would have been eliminated or what other cuts would have been made had these funds not been available. The number of jobs retained presented here only reflects positions that were paid for with recovery act dollars this quarter and should not be interpreted as more than a financial accounting. NEW YORK, STATE OF State Fiscal Stabilization Fund - Education Fund For the support of public elementary, secondary, and post secondary education and, as, applicable early childhood education programs and services For the support of public elementary, secondary, and post secondary education and, as, applicable early childhood education programs and services New York State primarily used the ARRA State Fiscal Stabilization Fund (SFSF) to restore proposed cuts in school aid compared to earlier levels caused by the severe economic recession effect on State tax revenues. Public school districts were eligible for the Education Stabilization Fund (ESF) portion of the State Fiscal Stabilization Fund. The following award descriptions did not contain sufficient details on one or more of the following pieces of information necessary to facilitate general understanding of the award, based on our criteria: general purpose, scope and nature of activities, location, or expected outcomes. The award description information is taken directly from Recovery.gov. We did not edit it in any way, such as to correct typographical or grammatical errors. 
Place of performance (city, state, zip code) State Fiscal Stabilization Fund -- Education Fund. Education Fund- for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. For the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. For Central Administration staff, 0.13 jobs created and 32.65 jobs retained. For Teachers/Instructors/Department Heads staff, 51.05 jobs created and 2537.36 jobs retained. For Paraprofessionals staff, 3.50 jobs created and 143.87 jobs retained. For Clerical Support staff, 0.45 jobs created and 12.75 jobs retained. For Guidance Counselors staff, 0.83 jobs created and 18.55 jobs retained. For School Nurse/Health Services staff, 0.00 jobs created and 3.00 jobs retained. For Maintenance Personnel staff, 1.00 jobs created and 24.95 jobs retained. For Technical/Computer Specialists staff, 0.30 jobs created and 5.00 jobs retained. For Library/Media staff, 0.00 jobs created and 13.48 jobs retained. For Food Services staff, 0.50 jobs created and 0.00 jobs retained. For Athletics/Coaches staff, 0.00 jobs created and 0.50 jobs retained. For Class Advisors staff, 0.00 jobs created and 0.50 jobs retained. For All Outside Consultants and Vendors except for RESCs and SERC staff, 1.00 jobs created and 2.00 jobs retained. For the current fiscal year, SFSF comprises 14.26 percent of the Education Cost Sharing (ECS) grant, Connecticut's major education funding mechanism (9.19 percent from the Education State Grants and 5.07 percent from Government Services). Place of performance (city, state, zip code) HARTFORD, Connecticut 061061659 Less Than 50% Completed State Fiscal Stabalization Fund-Education Grants, Recovery Funds Education Fund- for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services For the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. Place of performance (city, state, zip code) Dover, Delaware 199010000 Less Than 50% Completed DISTRICT OF COLUMBIA, GOVERNMENT OF SFSF: Education Stabilization Fund Education Fund- for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. Funds are used for the support of public elementary, secondary, and higher education, and, as applicable, early childhood education programs and services. These funds are used to help restore for FY 2009, 2010, and 2011 support for public elementary, secondary, and postsecondary education to the greater of the FY 2008 or FY 2009 level. The funds needed to restore support for elementary and secondary education are run through the state's primary elementary and secondary education funding formulae. The funds for higher education go to the University of DC. All reported jobs are for instructional, support services, and administrative positions within District of Columbia school districts. EXECUTIVE OFFICE OF THE GOVERNOR OF FLORIDA State Fiscal Stabilization Fund - Education Fund Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. 
Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. The majority of the jobs saved and created related to instruction or instructional support. Types of jobs included but were not limited to adjunct faculty, faculty, classroom teachers, school-based administrators, clerical personnel, instructional aides, librarians/media specialists, career specialists, supervisors, and paraprofessionals. Place of performance (city, state, zip code) Tallahassee, Florida 323990400 Less Than 50% Completed State Fiscal Stabilization Fund – Education Fund Education Fund- for the support of public elementary, secondary, and postsecondary education, as, applicable, early childhood education program and services. For the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. Place of performance (city, state, zip code) Atlanta, Georgia 303341600 More than 50% Completed EXECUTIVE OFFICE OF THE STATE OF HAWAII State Fiscal Stabilization Fund - Education Fund Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. For support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. State Fiscal Stabilization Funds, Education funds were used to retain teachers, counselors,lecturers, teaching faculty, and support staff positions necessary to support the State's public elementary, secondary, and postsecondary education programs. For the State's public traditional schools, preference is for positions requiring a teaching license that is assigned to a classroom and/or carry out an instructional role. Place of performance (city, state, zip code) Honolulu, Hawaii 968132407 More than 50% Completed EXECUTIVE OFFICE OF THE STATE OF IDAHO State Fiscal Stabilization Fund-Educational Grants, Recovery Funds State Fiscal Stabilization Fund - Educationl Grants, Recovery Funds for higher educatin and support of public elementary and secondary education (K-12) programs and services. K-12 Education Fund for the support of public elementary and secondary education programs and services. Higher Education to maintain publicly supported education opportunities in the state. Higher Education retained faculty, administrative and infrastructure support staff. K-12 66.61% Teacher/Teacher Aides, 8.79% Custodial/Maintenance, 8.44% School Administraitive/Office Support, 5.76% Special Education Services, 2.75% District Administrative/Office Support, 2.40% Student Transportation, 1.65% Guidance/Health Workers, 1.51% Alternative School Programs, 0.98% Information Technology Workers, 0.64% Education Media Workers, 0.44% Extracurricular Program Workers, and 0.03% Child Nutrition Workers. Place of performance (city, state, zip code) Boise, Idaho 837200034 More than 50% Completed 150 . State Fiscal Stabilization Fund - Education Fund Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. For the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services.
Management Occupations, Computer and Mathematical Occupations, Life, Physical, and Social Science Occupations, Community and Social Service Occupations, Education, Training and Library Occupations, Health Practitioners and Technical Occupations, Protective Service Occupations, Food Preparation and Service Related Occupations, Building and Grounds Cleaning and Maintenance Occupations, Personal Care and Service Occupations, Sales and Related Occupations, Office and Administrative Support Occupations, Construction and Extraction Occupations, Installation, Maintenance and Repair Occupations, Production Occupations, Transportation and Material Moving Occupations. Place of performance (city, state, zip code) Springfield, Illinois 627770002 More than 50% Completed EXECUTIVE OFFICE OF THE STATE OF IOWA Fiscal Stabilization Fund - Education Education Fund- for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. Place of performance (city, state, zip code) Des Moines, Iowa 503190000 More than 50% Completed State Fiscal Stabilization Fund - Education Fund Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. For the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. Place of performance (city, state, zip code) Topeka, Kansas 666121590 More than 50% Completed State Fiscal Stabilization Fund - Education grants, Recovery Funds Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. 149.80 FTE jobs were created or retained as a result of the ARRA funds for K - 12 public education; 14 limited period teachers, 1 limited period SRO, 126.8 teachers, 1 Librarians, 2 School Resource Officers, and 4 educational technicians and 1 support person. Higher Ed Total jobs 5.7 FTE are CMCC; Jalbert Hall Renovations 3 hours, CMCC; Parking Lot 82 hours, SMCC; Roofing Repairs 727.75 hours, SMCC; Heating Improvements 850.75 hours, SMCC; Auto Tech Envelope Repair 144.50 hours, SMCC; Museum & Storage Renovations 602.75 hours, SMCC; SEA Center 9.5 hours, SMCC; Salt Shed 9 hours WCCC; Residence Hall Renovations 319 hours WCCC; Harol Howland Building Renovations 45 hours, YCCC; Phone Modernization 87 hours YCCC; Rooftop HVAC Unit Replacement 35.5 hours, YCCC; Emergency Generator Replacement 27 hours, Total; 2,942.75 hours/520 hours=5.7 FTE. For the University of Maine System jobs, 50.09 FTE jobs were funded with State Fiscal Stabilization Funds. For narrative -- 39.43 FTE were faculty and 10.66 were students. State Fiscal Stabilization Fund - Education Fund Education Fund - for the support of public elementary, secondary, past secondary education, and, as applicable, early childhood education programs and services. Education Fund - for the support of public elementary, secondary, past secondary education, and, as applicable, early childhood education programs and services. Teaching positions (full time, substitute and tutors). 
Place of performance (city, state, zip code) Baltimore, Maryland 212012595 Less Than 50% Completed State Fiscal Stabilization Fund - Education Stabilization Fund Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. For the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. Commercial and Institutional Building Construction These funds have supported administrators, teachers, paraprofessionals, and staff members in school districts across Massachusetts. In addition, these funds have supported administrators, faculty members, and staff members at the state and community colleges and the University of Massachusetts campuses. Place of performance (city, state, zip code) BOSTON, Massachusetts 021331099 More than 50% Completed 483 STATE OF MICHIGAN, EXECUTIVE OFFICE OF THE State Fiscal Stabilization Fund-Education Fund Education Fund-for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. For the support of public elementary, secondary, and postsecondary education, and, as applicable, early childhood education programs and services. State Fiscal Stabilization Fund (SFSF) sub-recipients created and retained jobs in several categories. The majority of sub-recipients retained jobs, indicating that they would have had to lay off the positions that were retained by the use of SFSF monies. The following jobs categories apply to the positions that were created and/or retained: K-12 Teachers in the following subject areas - Language Arts, Science, Math, Physical Education, Social Studies, Art, Music, Drama, Spanish, Computer Technology, English as a Second Language, Business Management, Reading Recovery, English, Home Economics, Chemistry, Physics, Economics, Government, U.S. History, World Languages, and General Education; Supplemental Enrichment Instructors; Paraprofessionals; Bus Drivers; Custodians; Mechanics; Administrative Professionals; School Librarians; School Counselors; Recess Aides; Library Aides; Social Workers; Nurses; Hall Monitors; Athletic Directors; Media Specialists; Literacy Coaches; Cooks; Technology Assistants; Principals; School Administrators; Support Staff; Assistant Principals; and College Work Study Student Positions. Place of performance (city, state, zip code) Lansing, Michigan 489330000 More than 50% Completed EXECUTIVE OFFICE OF THE STATE OF MINNESOTA State Fiscal Stabilization Fund - Education Fund Education Fund - for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. Education Fund - for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. 4667.2399999999998 Types of jobs created or retained with this grant include administration/supervision, counselor, educational speech/language pathologist, licensed instructional support, non- instructional support, non-licensed classroom personnel, non-licensed instructional support, other, paraprofessional, school psychologist, school nurse, security specialist, social worker, substitute teacher salaries, teachers, and cultural liaison. 
Types of jobs created or retained in higher education include, professors, instructional lab assistants, administrative support, graduate instructors, teaching specialists, adjunct instructors, lecturers, research and teaching assistants, assistant scientists, personnel specialists, clinical specialists and information technology professionals. Place of performance (city, state, zip code) St. Paul, Minnesota 551551606 More than 50% Completed EXECUTIVE OFFICE OF THE STATE OF MISSISSIPPI State Fiscal Stabilization Fund - Education Grants For the support of elementary, secondary and postsecondary education and, as applicable, early childhood education programs and services and local educational agencies in the state of Mississippi. Provided support for Local Education Agencies; teacher salaries and Institutions of Higher Education faculty salaries, operating costs and student financial aid. Classroom teachers, assistant teachers, lobrarians, guidance couselors, school administrators. All LEA's used ARRA SFSF to reimburse salary expenditures for district personnel. Place of performance (city, state, zip code) Place of performance (city, state, zip code) Lincoln, Nebraska 685094987 Less Than 50% Completed EXECUTIVE OFFICE OF THE STATE OF NEW HAMPSHIRE State Fiscal Stabilization Fund (SFSF) Education State Grants, Recovery Act Education Fund – for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. No funds paid for jobs during this reporting period. These funds paid for jobs between 7/1/09 and 9/30/09. Using the definite term methodology, 510.23 jobs were created / retained in Q1 2010. Position types include teachers, support staff at School Administrative Units across the states, as well as at the University of New Hampshire system. Place of performance (city, state, zip code) Concord, New Hampshire 033016312 More than 50% Completed NEW JERSEY, STATE OF State Fiscal Stabilization Fund-Education Fund Education Fund for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. For the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. With regard to K-12 education, the following were the types of jobs created or retained: instructional positions, student support positions, and administrative positions. With regard to higher education, the following were the types of jobs created or retained: full-time faculty, administrative/staff positions, clerical positions, part-time faculty/adjunct custodians, police/security, and teaching assistants. Place of performance (city, state, zip code) Trenton, New Jersey 086250001 More than 50% Completed SECRETARY OF STATE, NEW MEXICO STATE FISCAL STABILIZATION FUND - EDUCATION GRANTS, RECOVERY FUNDS Education Fund - for the support of public elementary, secondary, and postsecondary education and as applicable, early childhood education programs and services. Place of performance (city, state, zip code) SANTA FE, New Mexico 875012744 Less Than 50% Completed State Fiscal Stabilization Fund - Education Fund Education Fund- for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. 
For the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. For the LEAs and Charter Schools these are the following job types: Teachers, Teacher Assistants, Assistant Principals, Instructional Support, Clerical Personnel, Custodians, and Transportation Personnel. For the Universities the job type was: Instructional Faculty. Place of performance (city, state, zip code) Raleigh, North Carolina 276038001 Less Than 50% Completed EXECUTIVE OFFICE STATE OF OHIO State Fiscal Stabilization Fund - Education Fund Education Fund- for the support of public, elementary, secondary and post-secondary education and, as applicable, early childhood education programs and services. For the support of public, elementary, secondary and post-secondary education and, as applicable, early childhood education programs and services. 8465.1499999999996 Elementary and Secondary Education: Teachers, school administrators, school counselors, librarians, lunchroom personnel, school bus drivers, technology coordinators, secretaries, educational aides, tutors, construction and renovation jobs. Higher education institutions retained professional and support staff in the following functional areas of a campus budget: instructional staff; academic support staff; student services staff; institutional support staff; and plant operations and maintenance staff. No infrastructure funds were used for higher education. Place of performance (city, state, zip code) Columbus, Ohio 432154183 Less Than 50% Completed State Fiscal Stabilization Fund-Education Fund Education Fund -- for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. For the support of pubic elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. Teachers, instructional aides, and professors for Oregon public K-12 and university educational institutions. More than 50% Completed EXECUTIVE OFFICE OF THE COMMONWEALTH OF PENNSYLVANIA STATE FISCAL STABILIZATION FUND - ED GRANTS, RECOVERY FUNDS Education Fund – for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. For the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. 7875.5600000000004 Reflects sub-recipient submitted information on school administrators, teachers, student aids and other educational support staff providing services detailed in the Project Description for the current reporting quarter for this award. Place of performance (city, state, zip code) Harrisburg, Pennsylvania 171012210 Less Than 50% Completed State Fiscal Stabilization Fund - Education Fund Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. For the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. 
Principal, Assistant Principal, Preschool Teacher, Kindergarten Teacher, Special Education (Self Contained), Special Education teachers, Classroom Teacher, Media Specialist, Guidance Counselor, Other Professional Instruction-Oriented, Adult Education Supervisor/Teacher, Temporary Instruction Oriented Staff, Bookkeeper, Technology/IT Personnel, Professional Development Director, Director of Technology, Coordinator, Federal Projects, Nurse, Director, Attendance, Other Nonprofessional Staff, Assistance Superintendent, District Superintendent, Supervisor Secondary Education, Director, Career and Technology Education, Special Services Coordinator, Guidance Coordinator, Support Personnel, Library Aide, Kindergarden Aide, Special Education Aide, Instructional Aide, Director, Communication/PIO, Instructional Coach, Other District Office Staff, School-to-Work coordinator, Social Worker, Director of Student Services, Purchased-Service Teacher, School Resource Officers, Bus Driver, Custodian, Secretary, Certification Specialist, Clerical Assistant/Administration, Data Specialist, SASI Clerk, Attendance Clerk, Parent Educator, Coordinator-REAL Project, Security Monitor, Academic Success Tutors, Accountant/fiscal Analyst I, Admin Asst/Coord, Administrators, Assistant Professor, Associate Professor, Campus Ambassadors, Cashier, Cliniacal Assistant Professor, Community Intern Director, Community Interns, Curriculum Coordinator I, Custodian, Development/Alumni, Executive, Facilities Worker, Faculty, Graduate Staff Assistants, Grants Administration, Groundskeeper, Human Resources Staff, Information Technology, Instructors, International Recruitment Mgr, Laboratory Manager, Law Enforcement Officer I, Librarians, Mail Room Clerk, Master Instructor/Trainer, Multicultural Outreach Coord., Professor, Program Coordinator II, Receptionist/Admin Asst., Records/Info Resource Asst., Regional Admissions Associates, Research Assistant Professor, Sponsored Award Management, Student Svcs Prog Coord II, Support, Visitors Center Staff, Administration; Administration Specialist; Bookstore Specialist; Cashier; Counselor; Foundation Associate; Job Developer; Procurement Officer; Coordinator; Adjunct Instructor; Business Instructor; Math Instructor; Transitional Studies Columbia, South Carolina 292112267 Less Than 50% Completed SOUTH DAKOTA, STATE OF State Fiscal Stabilization Fund - Education Fund Education Fund - for the support of public elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. For the support of public elementary, secondary, and postsecondary education and as applicable, early childhood education programs and services. Colleges, Universities, and Professional Schools Staff to provide the opportunity for maximum citizen access to appropriate, high quality collegiate and university degree programs. Pierre, South Dakota 575015007 More than 50% Completed EXECUTIVE OFFICE OF THE STATE OF TENNESSEE State Fiscal Stabilization Fund - Education Fund The SFSF Education Fund helps states restore support for public elementary, secondary, and postsecondary education and, as applicable, early childhood programs and services. Activities conducted under the SFSF Education Fund include maintaining educational, administrative, clerical, support, professional, teaching and other positions essential to the delivery of public education in Tennessee's K-12 and higher education systems. 
4706.8199999999997 TDOE: Teachers, K-12 Higher Education: Support staff, Professional support staff, Professional support temporary, Student workers, Adjunct faculty, Overload faculty, Accountant, Instructor of Engineering, Lecturers, Professors, Student assistant, Administrative staff, Graduate Assistants, Graduate Teaching Assistants, Instructors, IT technicians, Director, Extension agents, Post retirement appointments, Coordinator, IT Administrator, Graduate Research Assistants, Service Aides, Research Associates, Research Technician, Clerical positions, Professional positions, Academic Faculty positions, Technology, Foundations Instructor, Counselors, Part-time Instructors, Receptionists, PT Faculty Welding Instructor, PT Dental Assistant Instructor, Secretaries, Federal Work-Study Positions, Admissions Office clerical, Asst Dir of Fin Aid, Clinical Assistant, Custodians, Director, Executive Aides, Financial Aid, Counselor, Financial Management Analyst, Forensic Tech, Full Time Adjuncts, GME Coordinator, Graduate Program Specialist, Info Res Tech, Int Med/Psych, Internal medicine, Lab Coordinator, Lecturers, Manager, OB/GYNs, Office Coordinator, Post Doc, Psychiatry, Research Specialist, Technical Clerk, Hourly Temps, Visiting Assistant Professors, Financial Management Analyst, Executive Aides, Student Help Staff, Temporary clerical support, Accountant, Consultant, Extension Agents, IT Administrator, Visiting Scholar, Temp hourly instructional, website developers. Less Than 50% Completed GOVERNOR, TEXAS OFFICE OF THE State Fiscal Stabilization Fund -Education Fund. Education Fund- for the support of public elementary, secondary, and post secondary education and, as, applicable, early childhood education programs and services. Education Fund- for the support of public elementary, secondary, and post secondary education and, as, applicable, early childhood education programs and services. Instructional and non-instructional staff employed by school districts and open enrollment charter schools, including teachers, educational aides, support staff, administrators, counselors, librarians, school nurses, federal program directors and speech pathologists. Place of performance (city, state, zip code) AUSTIN, Texas 787011935 Less Than 50% Completed 1181 EXECUTIVE OFFICE OF THE STATE OF VERMONT State Fiscal Stabilization Fund - Education Fund Education Fund - for the support of public, elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. Education Fund- For the support of public, elementary, secondary, and postsecondary education and, as applicable, early childhood education programs and services. Preschool/PreKindergarten Teachers, Kindergarten Teachers, Elementary Teachers (Grades 1-6), Secondary Teachers (Grades 7-12), Teachers of Ungraded Classes (include EEE, Special Ed.), Teachers Aides - (PAID only), Guidance Counselors/Directors - Elem (Grades 1-6), Guidance Counselors/Directors - Sec (Grades 7-12), Nurses, Admin. Assists., Clerical & Secretarial Support Staff, Athletic Directors, Audiovisual & Instructional Technology Staff, Librarians, School Library Support Staff, Superintendents, Assistant Superintendents, Principals, Assistant Principals, Business Managers, Maintenance and Security. 
Although it is impossible to know whether these jobs or others would have been eliminated in the absence of ARRA-funding, these jobs were in existence and are maintained with funds which will be reimbursed from the State Fiscal Stabilization Fund. Since only expenditures of ARRA funds received are reportable by the State, the expenditures made by the State will be reported in the period in which the federal ARRA reimbursement for those expenditures is made. Place of performance (city, state, zip code) Montpelier, Vermont 056090003 Less Than 50% Completed State Fiscal Stabilization Fund (SFSF) - Education State Grants, Recovery Act State Fiscal Stabilization Fund (SFSF) - Education State Grants, Recovery Act To support and restore funding for elementary, secondary, and postsecondary education and, as applicable, early child hood education programs and services in States and local ed For the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. 6357.1 Jobs accounted for during the quarter ended 3/31/2010 represent employment types including: 5981.1 teachers, 2 bus drivers, 15.5 technology support, 36.8 Principals, 93.6 instructional assistants, 16.8 guidance councelors, 15 librarians, 67.5 aids, 50.8 clerical staff, 11 custodians, 9.3 truancy officers, 6 nurses, and 51.7 other. This total is made up of 6235.4 saved positions and 121.7 created positions. Less Than 50% Completed EXECUTIVE OFFICE OF THE STATE OF WASHINGTON State Fiscal Stabilization Fund - Education Fund Education Fund - for the support of public elementary, secondary, postsecondary education, and, as applicable, early childhood education programs and services For the support of public elementary, secondary, postsecondary education, and, as applicable, early childhood education programs and services. K-12 education staff, including certificated teachers, teacher/classroom aides and other classified staff (administrative assistants, building operations, information services and other technical staff). Place of performance (city, state, zip code) Olympia, Washington 985040002 Less Than 50% Completed State Stabilization Fund- Education Fund Education Fund - for the support of public elementary, secondary, and postsecondary education and, as, applicable, early childhood education programs and services. For the support of public elementary and secondary education and, as applicable, early childhood education programs and services. 3937.36 Jobs created and retained include teachers, education aides, administrative assistants, custodians, bus drivers, principals, and information technology specialists. The Recovery Act provided supplemental funding for programs authorized by the Individuals with Disabilities Education Act (IDEA), as amended, the major federal statute that supports the provision of early intervention, special education, and related services for children and youth with disabilities. Part B ($11.7 billion) provides funds to ensure that preschool and school-age children with disabilities have access to a free and appropriate public education and is divided into two separate grant programs: Part B grants to states (for school-age children) and Part B preschool grants. Our review focused only on Part B grants to states for school-age children. We assessed the transparency of descriptive information available on Recovery.gov for IDEA Part B awards for school-age children.
We found that an estimated 4 percent met our transparency criteria, 9 percent significantly met our criteria, 87 percent partially met our criteria, and zero percent did not meet our criteria. Given that few descriptions met our transparency criteria we conducted a national survey of school districts to discover how they are using the funds. The information on IDEA is found in appendix V. The following award descriptions contained sufficient information on general purpose, scope and nature of activities, location, and expected outcomes to meet our transparency criteria. The award description information is taken directly from Recovery.gov. We did not edit it in any way, such as to correct typographical or grammatical errors. EDUCATION, HAWAII DEPT OF Grants to States for the Education of Children with Disabilities Assist states in providing special education and related services to children with disabilities in accordance with Part B of the Individuals with Disabilities Education Act (IDEA). Assist states in providing special education and related services to children with disabilities in accordance with Part B of the Individuals with Disabilities Education Act (IDEA). Funds were used up to the September 30, 2009 quarter, to pay for contracted special education- related services. Calculated 'jobs retained' were 346.86 for that quarter, as noted above, based on vendor hours of service. For the quarters ended December 31, 2009 and March 31, 2010, no additional expenditures were made. Therefore, the 'number of jobs' for this reporting quarter is zero. In the quarter ended September 30, 2009, jobs were create/retained totaling 346.86 FTEs, for contracted special education-related services. In that quarter, vendors provided services in the areas of school-based behavioral health services, and assistance to students diagnosed with the autism spectrum disorder. Based on vendor data and prime recipient- analyzed detailed records of minutes, and 445,834 hours of service, and standard cumulative hours since grant origination date of February 17, 2009 to September 30, 2009 of 1,285.33 hours, the FTE calculation was 346.86 for that quarter. For the quarters ended December 31, 2009 and March 31, 2010, no additional expenditures were made. Therefore, the 'number of jobs' for this reporting quarter is zero. Place of performance (city, state, zip code) Honolulu, Hawaii 968132403 More than 50% Completed Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Funds were just released to LEAs in January and most districts are still planning and goal setting, prior to expenditures. ALBANY 1, BIG HORN 1, CARBON 1, CARBON 2, CROOK 1, FREMONT 1, FREMONT 21, FREMONT 24, FREMONT 6, HOT SPRINGS 1, JOHNSON 1, LARAMIE 1, LINCOLN 1, NATRONA 1, NIOBRARA 1, PARK 1, PLATTE 1, SHERIDAN 2, SUBLETTE 1, SUBLETTE 9, SWEETWATER 1, SWEETWATER 2, UINTA 4, WASHAKIE 2 Waiting on application approval, were just approved, not fully started, OR still in planning phase. BIG HORN 2 We are setting up spread sheets and budgets. BIG HORN 3 Purchased computers, amplification system, travel expenses, data management system, resource classroom equipment. All items were purchase ordered in February, but draw down will happen in March. BIG HORN 4 Projects have not been funded through Feb. 28, but we have expenses planned for March. 
CAMPBELL Have made some minor purchases and are in the process of starting our first large project this month. CONVERSE 1 Application approved and we are beginning the activities. Have generated purchase orders for activities and equipment. CONVERSE 2 Submitted purchase orders in line with our budgeted expenditures. Waiting for receipt of items on those purchase orders. We have not yet expended any funds. FREMONT 14 Purchase orders for materials and some staff training have been processed, but none have been submitted for payment. FREMONT 25 Grant approved in Feb, we are preparing budgets and getting bids/quotes for future purchases. We expect to start expending funds in March. GOSHEN 1 Application approved, action will begin in June 2010. LARAMIE 2 Planning continues. Some encumbrances have been made, but no expenditures - to date. LINCOLN 2 Built tracking device for ARRA spending, processed purchase requisitions, and report generation. PARK 16 Increased capacity and productivity by purchasing contract services for students w/disabilities. Occupational Therapy and psycholog 5.6600000000000001 LEAs have just begun to save or create jobs with this funding. The initial job information is as follows: BIG HORN 4 Admin Support has been given a stipend to help TVI director with the administration of ARRA funds. She stays after her normal workday to assist with purchasing, labeling tracking of funds. In March she worked 3 hours. She has not yet been paid her stipend for this time. FREMONT 1 The job created by this grant is 35 hours per week. There were 20 days worked in March. This position is 100% funded by ARRA grant. The district tells us they are tracking this with a time sheet. DB LARAMIE 1 Clerical work has begun. LINCOLN 2 Job 1: Administration services for managing ARRA funding and requests. I have logged 29 hours in the first quarter for administrative work which was 100% funded by ARRA. Job 2: Professional development for special educators--a series of six classes two hours each--total of 12 hours per teacher. There are currently 20 teachers enrolled estimating a total of 240 hours training. To date, we have a total of 104 hours completed. Hours of completion is based on actual attendance logs at each of the trainings. PARK 16 The job information listed is for contract services. PARK 6 Retained Case Manager and Job Coach positions; start date for both was 2/8/10 . Created Reading Teacher fully funded from this grant; start date was 2/23/10. Also created RtI Coordinator and ARRA Secretary positions to oversee all ARRA activities and expenditures. RtI Coordinator worked 184 hours this quarter, ARRA Secretary worked 167 hours this quarter. 47% of these positions are paid from this grant. SHERIDAN 2 2 Part time jobs created this quarter SWEETWATER 1 Part time administrative assistant was hired to coordinate professional development. The ARRA funded admin. assistant submits a monthly report documenting ARRA hours. SWEETWATER 2 Hours reported were for after school tutoring positions, staff development, and ELL translation. TETON 1 Admin program development, oversight and compliance. ALBANY 1, BIG HORN 1, BIG HORN 2, BIG HORN 3, CAMPBELL 1, CARBON 1, CARBON 2, CONVERSE 1, CONVERSE 2, CROOK 1, FREMONT 14, FREMONT 21, FREMONT 24, FREMONT 25, FREMONT 6, GOSHEN 1, HOT SPRINGS 1, JOHNSON 1, LARAMIE 2, LINCOLN 1, NATRONA 1, NIOBRARA 1, PARK 1, PLATTE 1, PLATTE 2, SHERIDAN 1, SHERIDAN 3, SUBLETTE 1, SUBLETTE 9, UINTA 1, UINTA 4, UINTA 6, WASHAKIE 2, WESTON 1, WESTON 7 No jobs this quarter. 
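Several recipients above describe calculating their reported job counts as full-time equivalents (FTEs) under OMB jobs-reporting guidance: hours paid with Recovery Act funds are divided by the hours in a full-time schedule for the period, as in Maine's 2,942.75 hours/520 hours = 5.7 FTE and Hawaii's 445,834 hours against 1,285.33 cumulative full-time hours = 346.86 FTE. The sketch below, in Python, illustrates only that arithmetic; the function name and the example denominators are illustrative assumptions drawn from those two entries, not a prescribed reporting method.

# Illustrative sketch (not from the report): the FTE arithmetic recipients
# describe under OMB Recovery Act jobs-reporting guidance.
def fte(funded_hours, full_time_schedule_hours):
    """Hours paid with Recovery Act funds divided by the hours in a full-time schedule."""
    if full_time_schedule_hours <= 0:
        raise ValueError("full-time schedule hours must be positive")
    return funded_hours / full_time_schedule_hours

# Maine community college facilities projects: 2,942.75 funded hours in a 520-hour quarter.
print(round(fte(2942.75, 520), 1))     # 5.7
# Hawaii contracted special education services: 445,834 hours against 1,285.33
# cumulative full-time hours (Feb. 17 - Sept. 30, 2009).
print(round(fte(445834, 1285.33), 2))  # 346.86

Both quotients match the figures the recipients report; the denominators differ because Maine reports against a single quarter's full-time schedule while Hawaii reports against cumulative full-time hours since the grant origination date.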
The following award descriptions contained most but not all details on one or more of the following pieces of information necessary to facilitate general understanding of the award, based on our criteria: general purpose, scope and nature of activities, location, or expected outcomes. The award description information is taken directly from Recovery.gov. We did not edit it in any way, such as to correct typographical or grammatical errors. EDUCATION, GEORGIA DEPARTMENT OF IDEA Part B Flow Thru - ARRA H391A090073A The Individuals with Disabilities Education Act (IDEA) 2004, Section 611 ensures that all children with disabilities have available to them a free appropriate public education (FAPE) in the least restrictive environment that emphasizes special education and related services designed to meet their unique needs and prepare them for further education, employment and independent living. IDEA funds are used to assist LEAs with the excess costs of providing special education and related service to students with disabilities; provide LEAs with assistive technology, alternative materials and programs and positive behavioral supports; and, support LEAs to collect, manage, analyze and report data through their district to enhance school system improvement strategies and results for students with disabilities. Funded programs must use instructional strategies based on scientifically based research and implement parental involvement activities. Teachers (693.30); Aides & Paraprofessionals (1528.57); Clerical Staff (27.95); Interpreter (2.63); Technology Specialist (4.00); School Nurse (2.69); Physical Therapist (5.50); Teacher Support Specialist (55.47); Secondary Counselor (3.00); School Psychologist (22.33); School Social Worker (3.91); Family Services/Parent Coordinator (5.00); Bus Drivers (57.30); Other Management (21.07); Other Administration (89.79); Other Salaries & Compensation (11.38); Speech Language Therapist (2.95); Other (15.26) Place of performance (city, state, zip code) Atlanta, Georgia 303349049 Less Than 50% Completed EDUCATION, INDIANA DEPARTMENT OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Used for hiring and retaining staff, and purchasing equipment. Education of Children with Disabilities (ages 3-21) Special education teachers, aides and related services personnel such as occupational/physical therapists, job coaches, music therapists, mental health therapists, audiologists, psychologists and coordinators. Place of performance (city, state, zip code) Less Than 50% Completed Grants to States for the Education of Children with Disabilites Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA Local educational agencies primarily used the funds to retain elementary, middle and high school positions such as: special education teachers, ECE instructional assistants, psychologist, therapist, interpreters and paraprofessionals. The positions were retained to provide continuation services to special needs students and also provide differentiated instruction targeted at each individual student’s needs. 
Place of performance (city, state, zip code) The following award descriptions did not contain sufficient details on one or more of the following pieces of information necessary to facilitate general understanding of the award, based on our criteria: general purpose, scope and nature of activities, location, or expected outcomes. The award description information is taken directly from Recovery.gov. We did not edit it in any way, such as to correct typographical or grammatical errors. EDUCATION, ALABAMA DEPT OF Special Education - Grants to States, Recovery Act / State Grants Provide a free and appropriate public education to all children with disabilities. Provide a free and appropriate public education to all children with disabilities. Place of performance (city, state, zip code) Less Than 50% Completed 132 EDUCATION & EARLY DEVELOPMENT, ALASKA DEPARTMENT OF Grants to states for the education of Children with Disabilities Assist State in Providing Special Education and related services to children with disabilities in accordance with Part B of the IDEA. To date, 52 of 54 districts in the state have received an ARRA award under this GAN. Assist State in Providing Special Education and related services to children with disabilities in accordance with Part B of the IDEA Teaching and Support Staff. The number of jobs reported was calculated in a manner consistent with OMB Memo 10-08 (December 18, 2009) Less Than 50% Completed IDEA Grants to States Part B Sec 611 Recovery Act To provide grants to States to assist them in providing a free appropriate public education to all children with disabilities. Ensure that all children with disabilities have available to them a free appropriate public education that emphasizes special education and related services designed to meet their unique needs and prepare them for further education, employment and independent living. Paraprofessionals, transition coordinators, special education teachers, occupational therapists, speech-language pathologists. EDUCATION, ARKANSAS DEPARTMENT OF Grants to States for Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Place of performance (city, state, zip code) Little Rock, Arkansas 722010000 Less Than 50% Completed EDUCATION, CALIFORNIA DEPARTMENT OF Grants to States for the Education of Children with Disabilities Special Education Grants to States, Recovery Act funds to assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. The Special Education Grants to States, Recovery Act funds are provided to ensure that children with disabilities have access to a free appropriate public education to meet each child’s unique needs and prepare each child for further education, employment, and independent living. The uses of funds under the Special Education Grants to States, Recovery Act are to be consistent with the current IDEA, Part B statutory and regulatory requirements. Some of the valid uses of the funds may include: (1) purchases of equipment for student use in instruction, (2) purchases of workstations for student use, (3) purchases of new resources and materials for use in instruction, (4) provide intensive professional development on evidence-based practices for academics and behavior, and (5) expand staff to support closing the achievement gap. 
5715.5699999999997 Jobs created or retained include 3160.76 classified jobs, 2359.00 certificated jobs, 193.81 vendor jobs, and 0.00 IHE jobs. Classified jobs include non-teaching positions such as food service, bus drivers, teacher assistants, custodians, office staff, librarians, and instructional aides for special education. Certificated jobs include teaching positions. Vendor jobs represent a variety of different types of jobs. Place of performance (city, state, zip code) Less Than 50% Completed EDUCATION, COLORADO BOARD OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA Special Education Certified Teachers, Speech Therapists/Pathologists, School Psychologists, Severe Needs Paraprofessionals, Social Workers, Program Coordinators and Directors, Autism Specialists, Grant Accountants, Data Analysts, Hearing and Vision Screener, Occupationsl Therapists, Nurses, Physical Therapists, Administrative, Consultants. Place of performance (city, state, zip code) Less Than 50% Completed Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. 744.16999999999996 For Central Administration staff, 7.82 jobs created and 8.24 jobs retained. For Teachers/Instructors/Department Heads staff, 151.87 jobs created and 226.92 jobs retained. For Paraprofessionals staff, 143.42 jobs created and 156.42 jobs retained. For Clerical Support staff, 7.19 jobs created and 4.63 jobs retained. For Guidance Counselors staff, 4.74 jobs created and 1.50 jobs retained. For School Nurse/Health Services staff, 2.66 jobs created and 2.86 jobs retained. For Maintenance Personnel staff, 0.00 jobs created and 0.08 jobs retained. For Technical/Computer Specialists staff, 0.82 jobs created and 2.00 jobs retained. For Library/Media staff, 0.00 jobs created and 0.00 jobs retained. For Food Services staff, 0.00 jobs created and 0.00 jobs retained. For Athletics/Coaches staff, 0.00 jobs created and 0.00 jobs retained. For Class Advisors staff, 0.00 jobs created and 0.00 jobs retained. For All Outside Consultants and Vendors except for RESCs and SERC staff, 16.63 jobs created and 6.37 jobs retained. Place of performance (city, state, zip code) Less Than 50% Completed EDUCATION, DELAWARE DEPARTMENT OF State Grants - Special Education To enhance and supplement services provided by IDEA and to cushion the progam from the current economic conditions. To enhance and supplement the IDEA progam and cushion it from the current negative economic financial conditions Funding was used to increase the number of services available to Special Ed Students including the need to hire additional staffing to serve them. 
Place of performance (city, state, zip code) Less Than 50% Completed Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Place of performance (city, state, zip code) Washington, District of Columbia 200020000 Less Than 50% Completed EDUCATION, FLORIDA DEPARTMENT OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. The majority of the jobs paid for with ARRA funds related to instruction or instructional support. Types of jobs included but were not limited to classroom teacher, paraprofessionals, career specialists, school-based administrators, clerical, supervisors, guidance counselors, pre-kindergarten teachers, psychologists, social workers, and technicians. Less Than 50% Completed IDAHO STATE BOARD OF EDUCATION Grants to States for the Education of Children with Disabilities Assist State in providing special education and related services to children with disabilities in accordance with Part B of IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Special Education 77.44% Teacher/Teacher Aides, 14.21% School/District Administration/Office Support, 5.05% Speech/Physical/Occupational/Behavioral/Other Therapists, 0.76% IEP Services, 0.53% Nurses, 0.50% Social Workers, 0.42% Interpriter, 0.27% Special Education Data Plan Work, 0.20% PSR Facilitator, 0.19% Job Coaches, 0.16% IBI Services, 0.10% Day Tratement, 0.10% Professional Development, 0.07% other services. Less Than 50% Completed EDUCATION, ILLINOIS STATE BOARD OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of IDEA Assist States in providing special education and related services to children with disabilities in accordance with Part B of IDEA Education, Training and Library Occupations, Office and Administrative Support Occupations, Management Occupations, Computer and Mathmatical Occupations, Life, Physical and Social Science Occupations, Community and Social Service Occupations, Health Practitioners, Building and Grounds Cleaning and Maintenance Occupations, Personal Care and Service Occupations, Installation, Maintenance and Repair Occupations, Healthcare Support Occupations, Food Preparation and Serving Related Occupations, Construction and Extraction Occupations, Transportation and Material Moving Occupations. Place of performance (city, state, zip code) Springfield, Illinois 627770002 Less Than 50% Completed EDUCATION, IOWA DEPARTMENT OF Assist States in providing special education and related services to children with disabilities in accordance with Part B of IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of IDEA. 
Place of performance (city, state, zip code) Des Moines, Iowa 503190000 Less Than 50% Completed Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Place of performance (city, state, zip code) Topeka, Kansas 666121103 Less Than 50% Completed EDUCATION, MAINE DEPARTMENT OF Individuals with Disabilities Education Act Grants to States "Recovery Act" IDEA Special Education Grant to the State for distribution the the school administrative units. IDEA Special Education Grant to the State for distribution the the school administrative units. Place of performance (city, state, zip code) More than 50% Completed EDUCATION, MARYLAND DEPARTMENT OF Grants to states for the education of children with disabilities To assist states in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist states in providing special education and related services to children with disabilities in accordance with Part B of the IDEA Teachers, Teaching Assistants, Student Services, Staff Development workshop staff, Dropout Prevention Specialists, Technology Specialist, Behavior Specialists, Psychologist Intern, Physical Therapists, Occupational Therapists, Sign Language Interpreter, Speech Therapists, Reading Intervention Tutors. Place of performance (city, state, zip code) Baltimore, Maryland 212012549 Less Than 50% Completed DEPARTMENT OF ELEMENTARY AND SECONDARY EDUCATION Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the Individuals with Disabilities Education Act. Support special education and related services to children with disabilities. 1694.91 Special education teachers, paraprofessionals, and service providers were hired or retained. Place of performance (city, state, zip code) Less Than 50% Completed Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA.
Teachers of students with Cognitive Impairment, Emotional Impairment Autism Spectrum Disorder, Visual Impairment and Early Childhood Special Education and Resource Room Teachers; School Psychologists; School Social Workers; Para-Professionals; Assistive Technology Staff and Assistants; Speech Therapist and Pathologists; Special Education Supervisors and Directors; Behavior Specialists; Response to Intervention Specialists, Coaches, Aides and Consultants; Transition Coordinators; Occupational Therapists; Vocational Education Coordinators; Technology Interventionist; Diagnostic Aide; Curriculum Consultants; Professional Development and Training Coordinators; Administrative Support Staff; Reading Teachers and Literacy Consultants; Special Education Planners/Coordinators and Compliance Staff; Positive Behavior Support - Behavior Specialists; Music Therapist; Emotionally Impaired Crisis Aides; Differentiated Instruction Educational Coaches; Curriculum Specialists; Instructional Trainers for Special Education Teachers; Technology and Data support. Place of performance (city, state, zip code) Less Than 50% Completed EDUCATION, MINNESOTA, DEPARTMENT OF Grants to State for the Education of Children with Disabilities Assist states in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist states in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Types of jobs created or retained with this grant include administration/supervision, cultural liaison, educational speech/language pathologist, licensed instructional support, mental health professional,licensed nursing services, non-instructional support, non- licensed classroom personnel, non-licensed instructional support, paraprofessional, physical/occupational therapist, school psychologist, school nurse, social worker, substitute teacher salaries, teachers, and other. Place of performance (city, state, zip code) Less Than 50% Completed ELEMENTARY AND SECONDARY EDUCATION, MISSOURI DEPARTMENT OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Place of performance (city, state, zip code) Jefferson City, Missouri 651012901 Less Than 50% Completed PUBLIC INSTRUCTION, MONTANA OFFICE OF Grants to States for the Education of Children with Disabilities Assist states in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Teachers, aides, specialists, and administrative staff needed to provide special education instruction and related services for K-12 elementary and secondary schools. Teachers, aides, specialists, and administrative staff needed to provide special education instruction and related services for K-12 elementary and secondary schools. 
Place of performance (city, state, zip code) Less Than 50% Completed EDUCATION, NEBRASKA DEPARTMENT OF Grants to States for the Education of Childrenwith Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of IDEA Assist States in providing special education and related services to children with disabilities in accordance with Part B of IDEA Positions created or retained were to provide a free appropriate public education for students with verified disabilities. Place of performance (city, state, zip code) Less Than 50% Completed EDUCATION, NEVADA DEPARTMENT OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilitiesin accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilites in accordance with Part Bof IDEA. 45.2 FTE Teachers jobs paid by ARRA funds. 48.37 FTE Teachers aide jobs paid with ARRA funds. .62 FTE Speech Therapist job paid by ARRA funds. 2.06 Support staff paid with ARRA funds. .33 Nurse FTE paid with ARRA funds. Place of performance (city, state, zip code) Carson City, Nevada 897015096 Less Than 50% Completed EDUCATION, NEW JERSEY DEPARTMENT OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. A total of 1396.0 jobs were created or retained. Of those, 729.4 were instructional positions, 314.0 were student support services positions, 22.0 were administrative positions and 330.6 did not indicate a job classification. We provide funds on a reimbursement basis, and therefore it is not unusual for LEAs to report jobs created or retained prior to actually receiving the funds. Place of performance (city, state, zip code) Trenton, New Jersey 086250500 Less Than 50% Completed NEW MEXICO EDUCATION, DEPARTMENT OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA Assist States in providing special education and related services to children with disabilities in accordance with Part B of the Individuals with Disabilities Education Act (IDEA-B). IDEA-B allocations that are funded by ARRA are formula driven flow-through allocations to LEAs. 169.72 For the current quarter, Local Educational Agencies (LEA) have reported that jobs created or saved included teachers, related service providers, and instructional assistants. The creation of the new teaching jobs helped reduce the student teacher ratio in classrooms in New Mexico. This allowed students to receive a more individualized education tailored to meet their unique needs. The additional related service providers allowed students with disabilities to receive additional therapy services to assist them in the educational setting. Instructional assistants provide students with disabilities with needed instructional support and assistance with behavioral and/or medical needs. 
Place of performance (city, state, zip code) SANTA FE, New Mexico 875012744 Less Than 50% Completed NEW YORK STATE EDUCATION DEPARTMENT Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA Commercial and Institutional Building Construction New York State used the ARRA IDEA grants for sub-recipients receiving IDEA funds and used part of these funds to save or create jobs. These programs were implemented consistent with federal IDEA requirements and it is expected that sub-recipients will report additional jobs saved or created in future quarters. Place of performance (city, state, zip code) Albany, New York 122340000 Less Than 50% Completed PUBLIC INSTRUCTION, NORTH CAROLINA DEPARTMENT OF Grants to states for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Director and/or Supervisor (113) Person assigned to direct or supervise staff members, a function, a program, or a support service. Teacher (121)Person certified to teach the standard course of studies and assigned to instruct pupils not classified elsewhere New Teacher Orientation (125) Person attending assigned new teacher orientation, outside of the teacher's contract calendar, not to exceed 3 days.Re-employed Retired Teacher - Exempt from the Earnings Cap (128)Retired teachers hired back into the classroom.Instructional Support I (131)Person assigned duties that require a high degree of knowledge and skills, in support of the instructional program. Duties include health services, attendance counseling, guidance services, media services, and nurses.Instructional Support II (132)Person assigned duties that require a high degree of knowledge and skills which place them on the advanced pay scale. Includes speech and audiologists Psychologist (133)Person assigned to perform duties involving psychology.Teacher Mentor (134) Individuals who are employed to serve as full-time mentors to teachers only.Lead Teacher (135)Includes curriculum specialists, instructional facilitators, as well as lead teachers in the summer school program. Teacher Assistant (141)Person assigned to assist with students in roles without the extra education required for NCLB. Examples include personal care assistants and physical therapy assistants.Teacher Assistant – NCLB (142) Person assigned to perform the day- to-day activities of assisting the regular classroom teacher, in roles requiring the extra education of NCLB.Tutor (Within the instructional day) (143) Person assigned to perform tutorial duties. Interpreter, Braillist, Translator, Education Interpreter (144) Person assigned to perform the activities of an interpreter, brail, translator, or education interpreter, and their assistants.Therapist (145) Person assigned to perform the activities of physical or occupational therapy.
Includes the positions of physical therapist, occupational therapist.Specialist (School-Based) (146) Person assigned to perform technical activities in a support capacity such as data collection, compiling research data, preparing statistical reports, technology and other technical duties. Includes the positions such as certified nurses, computer lab assistants, technology assistants, CTE tech assistants, and behavioral modification techs, parent liaisons, and home school coordinators.Monitor (147)Person assigned to perform the activities of a monitor - bus monitors, lunchroom monitors, and playground monitors. Office Support (151)Person assigned to perform activities concerned with preparing, transferring, transcribing, systemizing, or filing written communications and records. Includes secretary, accounting personnel, admin assistant, photocopy clerk, file clerk, NCWise specialist, clerical specialist in a central office role, cost clerk, and school-based office personnel Driver (171)Person whose assignment consists primarily of driving a vehicle, such as a bus, truck, or automobile. Place of performance (city, state, zip code) Raleigh, North Carolina 276011058 Less Than 50% Completed PUBLIC INSTRUCTION, NORTH DAKOTA DEPARTMENT OF IDEA-B Grants for Children with Disabilities H391A090049A Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Place of performance (city, state, zip code) Bismarck, North Dakota 585050602 Less Than 50% Completed Special Education - Grants to States, Recovery Act Special Education - Grants to States, Recovery Act The purposes of the Individuals with Disabilities Education Act (IDEA) are to ensure that all children with disabilities have available to them a free appropriate public education (FAPE) that emphasizes special education and related services designed to meet their unique needs and prepare them for further education, employment and independent living; to ensure that the rights of children with disabilities and parents of such children are protected; and to assist States, localities, educational service agencies, and Federal agencies to provide for the education of all children with disabilities; to assist States in the implementation of a statewide, comprehensive, coordinated, multidisciplanary, interagency system of early intervening services for infants and toddlers with disabilities and their families; to ensure that educators and parents have the necessary tools to improve educational results for children with disabilities by supporting system improvement activities; coordinated research and personnel preparation; corrdinated technical assistance, dissemination, and support; and technology developement and media services; and to assess, and ensure the effectiveness of, efforts to educate children with disabilities. Intervention Specialists, licensed as a Special Education Teachers , special education aide, director of pupil services, tutoring, paraprofessional positions, support staff, Behavior Intervention Specialist,Transition Services Coordinator, Special Education Compliancy Coordinator,Federal adminstrator, speech and psychologist services, Medical Assistant, Special Education Bus Driver, Reading Specialists,Brailist, Literacy Coaches, ESL Liaison, Secondary Curriculum specialist, Special Services Liaison. 
Place of performance (city, state, zip code) Columbus, Ohio 432154183 Less Than 50% Completed EDUCATION, OKLAHOMA STATE DEPARTMENT OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Place of performance (city, state, zip code) Oklahoma City, Oklahoma 731054503 More than 50% Completed Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. These Recovery Act funds have been crucial to retain jobs to provided educational services to students with disabilities. Of jobs reported, 72% are those that have been retained. These positions include autism specialists, behavioral specialists, case managers, early interventionists, instructional assistants, literacy specialists, occupational therapists, psychologists, reading specialists, nurses, special education teachers, speech and language pathologists, and transition specialists. Place of performance (city, state, zip code) Less Than 50% Completed EDUCATION, PENNSYLVANIA DEPT OF GRANTS TO STATES FOR THE EDUCATION OF CHILDREN WITH DISABILI Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Reflects sub-recipient submitted information on educators and other support staff providing services detailed in the Project Description for the current reporting quarter for this award. Place of performance (city, state, zip code) Less Than 50% Completed DEPARTMENT OF EDUCATION, SOUTH CAROLINA Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to students with disabilities in accordance with Part B of the IDEA. Place of performance (city, state, zip code) Columbia, South Carolina 292013730 Less Than 50% Completed Grants to States for the Education of Children with Disabilities. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Teacher and paraprofessional positions were created to assist school districts in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Place of performance (city, state, zip code) Pierre, South Dakota 575012291 Less Than 50% Completed EDUCATION, TENNESSEE DEPARTMENT OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. 
Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA The hiring and retaining of special education teachers, paraprofessionals, support and related service personnel to provide free appropriate public education to children with disabilities. Place of performance (city, state, zip code) Less Than 50% Completed Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. The positions created or retained during this period included professional jobs as well as positions for support staff. The major job categories include counselors, teachers, educational aides, administrators, and speech pathologists. Place of performance (city, state, zip code) Less Than 50% Completed Grants to States for the Education of Children with Diabilities Ensure that all children with disabilities have available to them a free appropriate public education that emphasizes special education and related services designed to meet their unique needs and prepare them for further education, employment, and indepe Ensure that all children with disabilities have available to them a free appropriate public education that emphasizes special education and related services designed to meet their unique needs and prepare them for further education, employment, and independent living. Place of performance (city, state, zip code) Less Than 50% Completed EDUCATION, VIRGINIA DEPARTMENT OF Special Education Grants to States, Recovery Act Special Education Grants to States, Recovery Act To provide grants to States to assist them in providing a free appropriate public education to all children with disabilities. Funds are used by State and local educational agencies, in accordance with the IDEA, to help provide the special education and related services needed to make a free appropriate public education available to all eligible children and, in some cases, early intervening services. Jobs accounted for during the quarter ended 3/31/2010 represent employment types such as: special education teachers, councelors, psychologists, special education services coordinators, and early intervention specialists. This total is made up of 600.5 saved positions and 310.6 created positions. Place of performance (city, state, zip code) Less Than 50% Completed PUBLIC INSTRUCTION, WASHINGTON STATE SUPERINTENDENT OF Grants to States for the Education of Children with Disabilities H391A090074A Assist States in Providing Special Education and Related Services to Children with Disabilities in Accordance with Part B of the IDEA. Assist States in Providing Special Education and Related Services to Children with Disabilities in Accordance with Part B of the IDEA. Place of performance (city, state, zip code) Less Than 50% Completed PUBLIC INSTRUCTION, WISCONSIN DEPT OF Grants to States for the Education of Children with Disabilities Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. Assist States in providing special education and related services to children with disabilities in accordance with Part B of the IDEA. 
The types of jobs created and/or retained as a result of the American Recovery and Reinvestment Act at the local district level include: special education teachers, special education paraprofessionals, substitute special education teachers, special education administrative assistants, transition coordinators, speech and language therapists, occupational therapists and assistants, school psychologists, social workers, directors of special education, special education program support teachers and coordinators, assistive technology personnel, diagnosticians, behavioral analysts, audiologists, orientation and mobility specialists, special education transportation providers, and personnel supporting infrastructure investments (i.e. electricians, construction workers, etc.). An ARRA Coordinator position has also been created at the SEA level. Place of performance (city, state, zip code) The Recovery Act provides $10 billion to help local educational agencies (LEAs) educate disadvantaged youth by making additional funds available beyond those regularly allocated through Title I, Part A of the Elementary and Secondary Education Act of 1965 as amended (ESEA). These additional funds are to be distributed through states to LEAs using existing federal funding formulas, which target funds based on such factors as high concentrations of students from families living in poverty. In using the funds, LEAs are required to comply with current statutory and regulatory requirements and must obligate 85 percent of the funds by September 30, 2010. Education is advising LEAs to use the funds in ways that will build the agencies’ long-term capacity to serve disadvantaged youth, such as through providing professional development to teachers. All states and the District of Columbia received Recovery Act grant awards for the three education programs included in our review. However, award-related information for the following prime recipients was not available on Recovery.gov during the period of our review, and therefore these states were excluded from our analysis: Rhode Island was not included in the number of awards for ESEA Title I because it was granted a reporting waiver by Education. Utah was not included in the number of Recovery Act Title I awards because Education reported that it failed to submit its 1512 reports by the deadline, primarily because of various technical issues. The following award descriptions contained sufficient information on general purpose, scope and nature of activities, location, and expected outcomes to meet our transparency criteria. The award description information is taken directly from Recovery.gov. We did not edit it in any way, such as to correct typographical or grammatical errors. EDUCATION, HAWAII DEPT OF Title I, Part A -- Improving Basic Programs Operated by Local Educational Agencies. Initial project provided Extended Learning Opportunities ('ELO') during summer 2009 for economically disadvantaged students. Improve teaching and learning for students most at risk of failing to meet state academic achievement standards. Third Quarter activities provided more Extended Learning Opportunities ('ELO') during school year 2009-10 for after-school and other non-school hour time periods such as 'intersessions,' for economically disadvantaged students, struggling to demonstrate grade level proficiency in English Language Arts ('ELA') and Mathematics, as measured by the Hawaii State Assessment ('HSA').
In addition, this quarter's activities included payments to vendors for the ELO Summer 2009 program, The initial Title I Recovery Act project provided Extended Learning Opportunities ('ELO') during summer 2009 for the same types of students. Students' growth is measured by teacher-developed assessments; school quarterly assessments; and the HSA. During the first quarter ELO in 2009, 8,018 students participated in the program, with an average of 76% showing improvement over the course of the program. First Quarter included 202.49 FTE for an initial Title I Recovery Act Extended Learning Opportunities ('ELO') project. Second Quarter included 13.40 additional FTEs, to provide more ELO services during the 2009-10 school year at 35 schools so far, for after-school and other non-school hour time periods such as 'intersessions,' with part-time teachers, tutors, and other support staff. Third Quarter included 43.78 FTEs, providing ELO services during the 2009-10 school year, for after-school and other non-school hour time periods, with part-time teachers, tutors, and other staff. ELO has provided a stimulus to the local economy by providing additional employment opportunities during the summer of 2009, and during school year 2009-10. The summer program was held at 90 campuses statewide, with 8,018 students who participated. These students were supported by 1,146 staff members during the summer, equating to 202.49 FTE for the First Quarter, based on 105,295.50 hours worked, divided by 520 standard hours for the quarter, as noted in the 'Number of Jobs' total in the preceding reporting data field, in accordance with U.S. Department of Education specific guidance. These employees hired included part-time and substitute teachers; program directors; para-professionals; and other support positions. Place of performance (city, state, zip code) Honolulu, Hawaii 968132403 Less Than 50% Completed Title 1, Part A - Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. BIG HORN 1, 2, 4 CARBON 1, 2 CONVERSE 2 CROOK 1 FREMONT 1, 21, 24, 38 , 6 HOT SPRINGS 1 JOHNSON 1 LARAMIE 1, 2 NATRONA 1 PARK 1 PLATTE 1, 2 SHERIDAN 2 SUBLETTE 1, 9 SWEETWATER 2 TETON 1 UINTA 1, 6 WESTON 1,7 WASHAKIE 2 Just getting started. PARK 16 Purchased a literacy intervention program as part of our current balanced literacy program called RIGBY Reading. Professional development workshops have been attended and many of the leveled books have been ordered. BIG HORN 3 PO for comp equip. Bought Ascend Math intervention licensing. 75% of computers are installed at the elementary school and in use. UINTA 4 Prof services for staff dev implemented. Reg and org dues paid for IRA Annual Convention, I Teach K conf and WYO NCA Spring Improvement Conf. SHERIDAN 1 Math Tutor works with students on a weekly basis. Tutors work with students in Homework Club/Friday School on a weekly basis. CONVERSE 1 Job ad and interviews for T1 . Retained teacher planning the 2nd sem. Purchased supports required for parent involvement activities. Conf travel expenditures. LINCOLN 2 Built tracking devices for ARRA. Ordered books. Set up tracking system for Prof Dev activities. PARK 6 Hired 6 new positions. Four T1 Teachers, RTI Coordinator and ARRA Secretary SWEETWATER 1 Expanded before and after school programs at 2 Title I schools. Parent liaison is beginning to provide parenti nvolvement activities and support to T1 parents. 
Title I sec working additional hours manage requirements for the ARRA funds. FREMONT 25 Continued to evaluate our program and process purchase orders for future expenditures. CAMPBELL 1 - Hired 4 positions. 1 more to be filled. Started 2 FAST cycles at schools. Ordering technology and starting staff development. CAMPBELL 1 Purchased computers and supplies for students. AMANDA SCHAFER Doing help desk to assist districts and proceeding according to the contract. PARK 1 SWEETWATER 1 ALBANY 1 SHERIDAN 2 LINCOLN 1 N 13.06 Some LEAs have begun to add funded jobs this month, as follows: ALBANY 1, BIG HORN 1, BIG HORN 2, BIG HORN 3, BIG HORN 4, CAMPBELL 1, CARBON 1, CARBON 2, CONVERSE 2, CROOK 1, FREMONT 1, FREMONT 14, FREMONT 21, FREMONT 38, FREMONT 6, GOSHEN 1, HOT SPRINGS 1, JOHNSON 1, LARAMIE 1, LARAMIE 2, LINCOLN 1, NATRONA 1, NIOBRARA 1, PARK 1, PARK 16, SHERIDAN 2, SUBLETTE 1, SUBLETTE 9, SWEETWATER 1, UINTA 1, UINTA 4, UINTA 6, WESTON 1, WESTON 7 No jobs impact this quarter. WASHAKIE 2 Full Time Title I Paraeducator funded by ARRA for 86% of the work day worked 60 days at 7 hours a day for Quarter 1. FREMONT 24 A teacher worked a total of 3 additional hours for an extended school day. This was funded 100% with ARRA T1 funds. PLATTE 2 Retained 1 Teacher (Ayers) and 1 Para (Wambach). Both began work on February 1, 2010 and worked a 35 hour weekly schedule through the end of the quarter (8 weeks x 35 hours). SHERIDAN 1 A math tutor was hired to help students in Title I which will be reimbursed 100% with ARRA funds. 6 Tutors were hired to help students in Title I at Homework Club/Friday School (after hours) and worked a total of 36 hours, funded at 45% from TETON 1 Admin hours 1/2 half funded by Title IA for program development, oversight, and compliance. SWEETWATER 2 Created Title I at Granger School. Additional hours reported were substitute teachers for training of teachers, and pay for classroom aides to attend training. CONVERSE 1 Title I teacher is retained. PLATTE 1 Sub teachers for 7 days at 7 hours per day, all funded by ARRA funds LINCOLN 2 Administrative services for maintaining ARRA funding and spending requests. Professional Development/Inspiring Education for teachers--this will be a six session course of 2 hours per session with currently 119 teachers enrolled. Estimated hours of training 1,428. This project is near 50% complete with a total of 611 training hours complete. PARK 6 Four new Title 1 Teacher positions were created that were fully funded by ARRA. All 4 started on 2/18/2010. Also created RtI Coordinator and ARRA Secretary positions to oversee all ARRA activities and expenditures, funded 51%. SWEETWATER 1 Teachers are providing an extra 1/2 to one hour of instruction per day for T I students during before and after school programs. Parent liaison has been hired to provide parent involvement activities and support in Title I schools.The Title I secretary is working additional hours to help with ARRA fiscal and program needs. FREMONT 25 Two classified aide positions were filled during the month of March 2010 CAMPBELL 1 Expanded our Ready 4 Learning program by 2 class room adding 2 full time teachers. Added a Title I Resource Center Clerk to help in the center while ARRA funds are being distributed. This is a full time position that was added at the beginning of March. Added a part time Title I ESL Assistant to one school which was added in March. Position is 40% out of ARRA. LARAMIE 1 Clerical work has begun. 
PLATTE 2 1 full time teaching position was retained and funded from February 1, 2010 to the end of the quarter. With short Fridays, this averages to be 35 hours per week.1 full time para educator was hired beginning February 16, 2010, and worked until the end of the quarter. AMANDA SCHAFER Amanda is a hired consultant that assists with page design and grant design, along with help desk efforts. An estimated 70% of her working hours are funded by this ARRA project for January, and 100% for February and the future. She worked 40 hours a week for the entire quarter so far. The following award descriptions contained most but not all details on one or more of the following pieces of information necessary to facilitate general understanding of the award, based on our criteria: general purpose, scope and nature of activities, location, or expected outcomes. The award description information is taken directly from Recovery.gov. We did not edit it in any way, such as to correct typographical or grammatical errors. EDUCATION, CALIFORNIA DEPARTMENT OF Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies Title I - Grants to LEAs, Recovery Act funds to improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Title I - Grants to LEAs, Recovery Act funds provided to assist LEAs and schools that have high concentrations of students from families that live in poverty in order to help improve teaching and learning of students most at risk of failing to meet State Academic Achievement Standards. The uses of funds under Title I – Grants to LEAs, Recovery Act are to be consistent with the Title I, Part A and D statutory and regulatory requirements, including the requirements to provide equitable services to eligible private school students. Uses should be aligned with the core goals of the ARRA to save and create jobs and to advance reforms consistent with the requirements of Title I. Possible uses of funds may include: (1) establishing a system for identifying and training highly effective teachers to serve as instructional leaders in Title I schoolwide programs; (2) strengthening and expanding early childhood education by providing resources to align a district-wide Title I pre-K program with state early learning standards and state content standards for grades K–3; (3) providing new opportunities for Title I schoolwide programs for secondary school students to use high- quality, online courseware as supplemental learning materials for meeting mathematics and science requirements; and (4) using reading or mathematics coaches to provide professional development to teachers in Title I targeted assistance programs. Jobs created or retained include 1030.83 classified jobs, 3517.20 certificated jobs, 223.15 vendor jobs, and 0.00 IHE jobs. Classified jobs include non-teaching positions such as bilingual teacher assistants, office staff, district coordinators, and instructional aides. Certificated jobs include teaching positions. Vendor jobs represent a variety of different types of jobs. EDUCATION, GEORGIA DEPARTMENT OF Title I-A, Grants - ARRA Title I, Part A, is a formula grant program that provides financial assistance to LEAs and schools with high numbers or high percentages of poor children to help ensure that all children meet challenging state academic standards. 
Title I funds are used to provide additional academic support and learning opportunities to help low-achieving children master challenging curricula and meet state standards in core academic subjects. For example, funds support extra instruction in reading/English language arts, science, social studies, and mathematics, as well as, after-school, and summer programs to extend and reinforce the regular school curriculum. Funded programs must use instructional strategies based on scientifically based research and implement parental involvement activities. Teachers (947.85); Aides & Paraprofessionals (219.83); Clerical Staff (2.25); Technology Specialist (4.00); Teacher Support Specialist (3.75); Elementary Counselor (1.50); Secondary Counselor (23.50); Family Services/Parent Coordinator (68.57); Bus Drivers (20.80); Other Management (49.59); Other Administration (195.09); Other Salaries & Compensation (46.71); Other (53.18); Administrative Specialist - GaDOE (2.00) Place of performance (city, state, zip code) Less Than 50% Completed Title I, Part A--Improving Basic Program operated by Local Educational Agencies Improve teaching and learning for student most at risk of failing to meet State academic achievement standard. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards 1024.26 Local educational agencies primarily used the funds to retain positions such as Title I teachers, instructional coaches, instructional assistants, paraprofessionals, preschool teachers, literacy specialists, curriculum specialists and teacher mentors. The positions were retained to improve the teaching and learning of targeted low performing students and schools. Job embedded professional development for elementary teachers and administrators were also provided. Place of performance (city, state, zip code) Less Than 50% Completed ADMINISTRATION, LOUISIANA DIVISION OF Title I Part A Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards. Title I ARRA Statement For Jobs Saved - Retained Districts have targeted 4 major areas regarding pending Title I Part A ARRA funding. The areas are as follows:(1) College and career-ready standards and high quality valid and reliable assessments for all students including ELL's and students with disabilities.(2)PreK to Higher Education data systems that meet the principles in the America COMPETES Act.(3)Teacher effectiveness and equitable distributions of effective teachers and(4)Intensive support and effective interventions for lowest performing schools.All jobs that have been retained or saved are related to the 4 major areas of focus. They include Instructional coaches (Reading/ Math coaches) Graduation Coaches Reading/Math Interventionist Reading Content Leaders Professional Development Coordinators Pre-school teachers and Pre-school paraprofessionals Class size reduction teachers Technology Facilitators/Coaches Academic Behavior Counselors Turn Around specialist and Drop-out Interventionist and Curriculum Specialist.
Place of performance (city, state, zip code) Place of performance (city, state, zip code) Less Than 50% Completed 26 EDUCATION, MARYLAND DEPARTMENT OF Title I, PartA--Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. The type of jobs created and retained includes teachers, paraprofessionals, coordinators, and other instructional and administrative support staff. These jobs enable local school systems and schools to maintain and in some cases upgrade the level of supplemental services to students failing or at-risk of failing who are enrolled in high poverty schools. The jobs created and retained data was obtained from reports submitted from each sub- recipient. Each sub-recipient report is maintained at the Maryland State Department of Education EDUCATION, NEBRASKA DEPARTMENT OF TITLE I, Part A Improving Basic Programs Operated by Local Educational Agencies Improving teaching and learning for students most at risk of failing to meet State academic achievement standards Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards Title I funds are used to provide services to meet the educational needs of low-achieving students and to work toward closing the achievement gap between high- and low- performing students. Place of performance (city, state, zip code) Less Than 50% Completed 248 EDUCATION, NEW HAMPSHIRE DEPARTMENT OF Title I, Part A - Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. The addition of Title I ARRA funds has resulted in an increase in the number of students, duration of services, resources utilized and the variety of intervention programs used to support school district's most academically at risk students. The personalized, supplemental services provided are expected to increase student achievement and decrease achievement gaps. Projects range in design and implementation, based on specific student and school needs and resources, but include supplemental instructional support in and outside the classroom as well as extended day learning opportunities and professional development opportunities to stafff. Title I ARRA funds have been used to secure previously funded Title I positions that would have been eliminated due to decreases in regular Title I funding to particular school districts. Title I ARRA funds have also been used to add positions in school districts including: teachers, tutors, paraprofessionals, content specialists, professional development coordinators and providers, project managers and various other positions. Through the creation and maintenance of these jobs, school districts have been able to strengthen existing programs as well as expand the number of students served (including increasing the number of Title I schools in districts) and provide additional professional development opportunities for staff. 
Place of performance (city, state, zip code) Concord, New Hampshire 033013852 Less Than 50% Completed NEW YORK STATE EDUCATION DEPARTMENT Title I, Part A -- Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards 4366.72 The Title I portion of the ARRA was an increase to the allocation under ESEA Sections 1125 and 1125A for Title I Part A. Sub-recipients of ARRA Title I included 650 public school districts and 150 charter school local educational agencies. Recipients used the funds primarily to cover compensatory education expenses not previously funded by Title I. The ability to pay for a higher proportion of allowable Title I positions freed up funds for other purposes including instructional positions and professional development opportunities for teachers such as through literacy coaching. ARRA Title I funds were used to save existing positions (especially in academic intervention services) and to create new ones (especially for professional development). Place of performance (city, state, zip code) The following award descriptions did not contain sufficient details on one or more of the following pieces of information necessary to facilitate general understanding of the award, based on our criteria: general purpose, scope and nature of activities, location, or expected outcomes. The award description information is taken directly from Recovery.gov. We did not edit it in any way, such as to correct typographical or grammatical errors. EDUCATION, ALABAMA DEPT OF Title I Grants to LEAs, Recovery Act Help local education agencies and schools improve the teaching and learning of children failing, or most at-risk of failing, to meet challenging State academic achievement standards. Help local education agencies and schools improve the teaching and learning of children failing, or most at-risk of failing, to meet challenging State academic achievement standards. Place of performance (city, state, zip code) Place of performance (city, state, zip code) Less Than 50% Completed Title 1 Grants to Local Educational Agencies, Recovery Act To help local educational agencies (LEAs) and schools improve the teaching and learning of children failing, or most at-risk of failing, to meet challenging State academic achievement standards. Improving the opportunity for disadvantage children and ensuring disadvantage children have a fair, equal, and significant opportunity to obtain a high-quality education and reach, at a minimum, proficiency on challenging State academic achievement standards and state academic assessments. Title I teachers, paraprofessionals, professional development positions and education coaches. Place of performance (city, state, zip code) Less Than 50% Completed 156 EDUCATION, ARKANSAS DEPARTMENT OF Title I, Part A - Improving Basic Programs Operated by Local Education Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards.
14.7 Unclassified Jobs Created * 28.18 Contracted Staff Jobs Created * 73.11 Licensed Staff Jobs Created * 29.97 Non- Licensed Staff Jobs Created * 32.35 Unclassified Jobs Retained * 7.19 Contracted Staff Jobs Retained * 37.365 Licensed Staff Jobs Retained * 10 Non- Licensed Staff Jobs Retained * Little Rock, Arkansas 722010000 Less Than 50% Completed EDUCATION, COLORADO BOARD OF Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies. Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards. Integration Specialists, Full and Part Time Teachers, English Language Teachers, Paraprofessionals, Literacy & Math Coaches, Classroom Assistants, Interventionists, Family Center Coordinators, Secretaries, Intervention School Director, Title I Coordinators, Consultants, Computer Technicians, Bookkeepers, Family and Community Outreach Liaisons, Onsite Technical Staffing, Mentors, Nurses, Administrative Staff, Counselors, Psychologists, Social Workers, Consultants. Less Than 50% Completed 114 Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies. Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards. For Central Administration staff, 4.70 jobs created and 6.66 jobs retained. For Teachers/Instructors/Department Heads staff, 125.62 jobs created and 196.14 jobs retained. For Paraprofessionals staff, 31.31 jobs created and 49.53 jobs retained. For Clerical Support staff, 2.36 jobs created and 2.45 jobs retained. For Guidance Counselors staff, 0.70 jobs created and 0.00 jobs retained. For School Nurse/Health Services staff, 0.86 jobs created and 0.00 jobs retained. For Maintenance Personnel staff, 0.00 jobs created and 0.00 jobs retained. For Technical/Computer Specialists staff, 2.00 jobs created and 1.00 jobs retained. For Library/Media staff, 0.00 jobs created and 0.00 jobs retained. For Food Services staff, 0.00 jobs created and 0.00 jobs retained. For Athletics/Coaches staff, 0.00 jobs created and 0.00 jobs retained. For Class Advisors staff, 0.00 jobs created and 0.00 jobs retained. For All Outside Consultants and Vendors except for RESCs and SERC staff, 9.60 jobs created and 7.53 jobs retained. Less Than 50% Completed EDUCATION, DELAWARE DEPARTMENT OF Title I Grants to LEA, Awards granted in order for LEAs to maintain Title I services and retain instructional staff to provide those Title I services.. Funding used to increase the number of services availabale to Title I Students including retaining Title I teachers to continue Title I services and provide additional services. Funding used to increase the number of services availabale to Title I Students including retaining Title I teachers to continue Title I services and provide additional services. Less Than 50% Completed Title I, Part A: Grants to Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards The Title I, Part A program provides financial assistance to LEAs and schools with high numbers or high percentages of poor children to help ensure that all children meet challenging state academic standards. 
Recovery Act funds create new opportunities for educators to implement innovative strategies in Title I schools that improve education for at- risk students and close achievement gaps while also stimulating the economy. Jobs created or retained include instructional and support services staff. Place of performance (city, state, zip code) Washington, District of Columbia 200020000 Less Than 50% Completed EDUCATION, FLORIDA DEPARTMENT OF Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards. Types of jobs included but were not limited to classroom teachers, instructional aides, school-based administrators, clerical support, librarians/media specialists, supervisors, guidance counselors, social workers, psychologists, and instructional district-based administrators. Place of performance (city, state, zip code) Tallahassee, Florida 323990400 Less Than 50% Completed IDAHO STATE BOARD OF EDUCATION Title I, Part A -- Improving Basic Programs Operated by Local Educational Agencies. Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards. 82.83% Teachers/Teacher Aides, 4.94% School/District Administration/Office Support, 3.71% Tutors/Substitutes, 1.61% State Administration/Office Support, 1.55% Instructional Improvement Coaches, 1.12% Educational Media Workers, 1.07% Behavior Specialists, 0.85% Reading Coaches, 0.78% After School Program, 0.50% Professional Development, 0.38% Technology Specialist, 0.15% Computer Lab Technicians, 0.13% Math Intervention Specialist, 0.12% Testing Facilitator, 0.11% Social Worker, 0.07% Program Review Contractors, 0.05% Counselors, 0.02% Library Technicians, 0.01% Programmer for Data Collection. Place of performance (city, state, zip code) Boise, Idaho 837200027 Less Than 50% Completed EDUCATION, ILLINOIS STATE BOARD OF Title I, Part A -- Improving Basic Programs Operated by Local Education Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Education, Training and Library Occupations, Management Occupations, Computer and Mathmatical Occupations, Community and Social Service Occupations, Health Practitioners and Technical Occupations, Office and Administrative Support Occupations. Personal Care and Service Occupations. Place of performance (city, state, zip code) Springfield, Illinois 627770002 Less Than 50% Completed EDUCATION, INDIANA DEPARTMENT OF Title I Part A-Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards. At risk intervention teachers and aides. Instructional coaches for professional development. 
Place of performance (city, state, zip code) Less Than 50% Completed EDUCATION, IOWA DEPARTMENT OF Title I - Basic LEA Grants Funding to school districts to support struggling readers (consistent with regular Title I programming) Expansion of Title I basic grants intended to support students struggling with reading and math. Title I, Part A --Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Place of performance (city, state, zip code) Place of performance (city, state, zip code) Augusta, Maine 043330023 More than 50% Completed DEPARTMENT OF ELEMENTARY AND SECONDARY EDUCATION Title I, Part A - Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Provide educational services to students most at risk of failing to meet academic standards. Remaining funds will be expended by school districts as needed to supplement existing Title I funds. Title I teachers, paraprofessionals, and support staff members were hired or retained. Place of performance (city, state, zip code) MALDEN, Massachusetts 021484906 Less Than 50% Completed Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. The following jobs were created and retained with ARRA Title I, Part A funds: Academic Counselors, Aides/Paraprofessionals, Classroom/Instructional Interventionists, Early Childhood Intervention Specialists, Instructional Coaches, Instructional Specialists Program Coordinators, Reading Recovery Teachers, Social Workers, Substitute Teachers Summer School Teachers, Teachers, and Tutors. EDUCATION, MINNESOTA, DEPARTMENT OF Title1-PartA-Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Types of jobs created or retained with this grant include administration/supervision, counselor, cultural liaison, licensed instructional support, mental health professional, non- licensed classroom personnel, non-licensed instructional support, other, paraprofessional, physical/occupational therapist, substitute teacher salaries, and teachers. Place of performance (city, state, zip code) Roseville, Minnesota 551134266 Less Than 50% Completed MISSISSIPPI STATE DEPARTMENT OF EDUCATION Title I, Part A - Improving Basic Programs Operated by Local Education Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards. The jobs created/retained with Title I, Part A ARRA funds include instructional and non- instructional positions which all directly impact increasing the academic achievement of at- risk populations. 
Instructional positions include teachers, paraprofessionals, speech therapists, interventionists, in-school and after school tutors. Non-instructional positions include guidance counselors, social workers, security officers, and library/media specialists. ELEMENTARY AND SECONDARY EDUCATION, MISSOURI DEPARTMENT OF Title I, PartA--Improving Basic Programs Operated by Local Educational Agencies. Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards. Place of performance (city, state, zip code) Jefferson City, Missouri 651012901 More than 50% Completed PUBLIC INSTRUCTION, MONTANA OFFICE OF Title I, Part A -- Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Public Elementary and Secondary school subgrantees continued their school year projects. Funding is being distributed based on subrecipients' monthly cash requests and reporting. Jobs related to the provision of educational services in public elementary and secondary schools under Title I, Part A Improving Basic Programs Operated by Local Educational Agencies, Recovery Act. EDUCATION, NEVADA DEPARTMENT OF Title I, Part A-Improving Basic Programs Operated by Local Education Agencies. Improve teaching and learning for students most at risk of failing to meet State academic acchievement standards Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards 230.49 Teaching jobs and 93.2 Teachers Aid Jobs were paid with ARRA funds. Place of performance (city, state, zip code) Carson City, Nevada 897015096 Less Than 50% Completed EDUCATION, NEW JERSEY DEPARTMENT OF Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies. Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards. A total of 794.2 jobs were created or retained. Of those, 577.5 were instructional positions, 62.4 were direct student support services positions, 40.0 were administrative positions and 114.3 did not indicate a job classification. We provide Title 1 funds on a reimbursement basis, and therefore it is not unusual for LEAs to report jobs created or retained prior to actually receiving the funds. Place of performance (city, state, zip code) Trenton, New Jersey 086250500 Less Than 50% Completed 291 NEW MEXICO EDUCATION, DEPARTMENT OF Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies Improve teaching and learning for students most at risk of failing to meet State academic achievement standards. Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards. New Mexico’s public school districts and charter schools reported 145.08 positions for the Title I Grant. The positions created/retained are teachers, educational assistants, curriculum coaches, subject matter specialists, data specialists, counselors, and school nurses. 
Place of performance (city, state, zip code): SANTA FE, New Mexico 87501-2744
Completion status: Less Than 50% Completed

PUBLIC INSTRUCTION, NORTH CAROLINA DEPARTMENT OF
Program: Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies
Award description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Project description: Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards.
Jobs created/retained: 2789.27, in the following categories. Director and/or Supervisor (113): Person assigned to direct or supervise staff members, a function, a program, or a support service. Assistant Principal (116): Person, licensed as an assistant principal, who has been designated by a local board of education to perform the duties of a non-teaching assistant principal. Teacher (121): Person certified to teach the standard course of studies and assigned to instruct pupils not classified elsewhere. New Teacher Orientation (125): Person attending assigned new teacher orientation, outside of the teacher's contract calendar, not to exceed 3 days. Re-employed Retired Teacher - Exempt from the Earnings Cap (128): Retired teachers hired back into the classroom. Instructional Support I (131): Person assigned duties that require a high degree of knowledge and skills, in support of the instructional program. Duties include health services, attendance counseling, guidance services, media services, and nurses. Instructional Support II (132): Person assigned duties that require a high degree of knowledge and skills which place them on the advanced pay scale. Includes speech and audiologists. Psychologist (133): Person assigned to perform duties involving psychology. Teacher Mentor (134): Individuals who are employed to serve as full-time mentors to teachers only. Lead Teacher (135): Includes curriculum specialists, instructional facilitators, as well as lead teachers in the summer school program. Teacher Assistant (141): Person assigned to assist with students in roles without the extra education required for NCLB. Examples include personal care assistants and physical therapy assistants. Teacher Assistant - NCLB (142): Person assigned to perform the day-to-day activities of assisting the regular classroom teacher, in roles requiring the extra education of NCLB. Tutor (Within the instructional day) (143): Person assigned to perform tutorial duties. Interpreter, Braillist, Translator, Education Interpreter (144): Person assigned to perform the activities of an interpreter, braillist, translator, or education interpreter, and their assistants. Therapist (145): Person assigned to perform the activities of physical or occupational therapy. Includes the positions of physical therapist and occupational therapist. Specialist (School-Based) (146): Person assigned to perform technical activities in a support capacity such as data collection, compiling research data, preparing statistical reports, technology, and other technical duties. Includes positions such as certified nurses, computer lab assistants, technology assistants, CTE tech assistants, behavioral modification techs, parent liaisons, and home school coordinators. Monitor (147): Person assigned to perform the activities of a monitor - bus monitors, lunchroom monitors, and playground monitors. Office Support (151): Person assigned to perform activities concerned with preparing, transferring, transcribing, systemizing, or filing written communications and records. Includes secretary, accounting personnel, admin assistant, photocopy clerk, file clerk, NCWise specialist, clerical specialist in a central office role, cost clerk, and school-based office personnel. Administrative Specialist (Central Support) (153): Person assigned to perform activities concerned with the administrative specialties of a school system. Includes internal auditor, budget specialist, administrative support, HR specialist, public relation personnel, energy and safety monitor, central office specialist, nutritional specialist, and specialists who manage a program area. Driver (171): Person whose assignment consists primarily of driving a vehicle, such as a bus, truck, or automobile. Custodian (173): Person assigned to perform plant housekeeping and operating heating, ventilating, and air conditioning systems. Manager (176): Person assigned to direct the day-to-day operations of a group of skilled, semi-skilled, or unskilled workers. Examples would include child nutrition manager and maintenance foreman.
Place of performance (city, state, zip code): Bismarck, North Dakota 58505-0440
Completion status: Less Than 50% Completed

Program: Title I - Grants to LEAs, Recovery Act
Award description: Title I Grants to Local Educational Agencies, Recovery Act. Title I, Part A funds are distributed to school districts based on four distinct funding formulas as affected by census poverty data. Districts determine which eligible buildings are to participate based on federal requirements. Targeted Assistance buildings must direct services to specific students. Schoolwide buildings may use the funds for more schoolwide activities intended to improve outcomes across the building.
Project description: Purpose: To provide supplemental funding to economically disadvantaged districts and some of their eligible schools for improving educational outcomes for students. Building projects are either Targeted Assistance, whereby students to be served are selected based on academic needs, or schoolwide, whereby an improvement plan can be focused on any or all students.
Place of performance (city, state, zip code): Oklahoma City, Oklahoma 73105-4503
Completion status: More than 50% Completed

Program: Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies
Award description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Project description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Jobs created/retained: The jobs created or retained with these Recovery Act funds include K-12 teachers, instructional assistants, and mentor positions for new teachers. Teaching positions focus on reading and math. Of the total number of jobs reported, over 50% are newly created positions.
Place of performance (city, state, zip code):
Completion status: Less Than 50% Completed

EDUCATION, PENNSYLVANIA DEPT OF
Program: TITLE I, PART A--IMPROVING BASIC PROGRAMS OPERATED BY LOCAL EDUCATIONAL AGENCIES
Award description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Project description: Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards.
Jobs created/retained: Reflects sub-recipient submitted information on educators and other support staff providing services detailed in the Project Description for the current reporting quarter for this award.
Completion status: More than 50% Completed

DEPARTMENT OF EDUCATION, SOUTH CAROLINA
Program: Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies
Award description: Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards.
Project description: Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards.
Jobs created/retained: Pre-K Teacher, Kindergarten Teachers, Special Education (self-contained), Special Ed (resource), Classroom Teacher, Retired Teacher, Media Specialist, Guidance, Other Professional Instructional Oriented, Extended Day Teacher, Title I Director, School Nurse, Social Worker, Clerical Support, Teacher Leader, Coordinators, Administrator, Title I Instructional Paraprofessionals, Child Development Aide, Instructional Assistants, Instructional Aides, Instructional Coach, Other Aides, Principal, Assistant Principal, Computer Technician, Supervisor, Support Personnel, Kindergarten Aide, School Food Service Worker, School Logistical Support Staff, Curriculum/Academic Specialist, Interventionist, ESOL Part-Time Teacher, Short-Term Substitutes, Consultant, School Parent Facilitators.
Place of performance (city, state, zip code): Columbia, South Carolina 29201-3730
Completion status: Less Than 50% Completed

Program: Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies
Award description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Project description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Jobs created/retained: Teacher and paraprofessional positions were created to improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Place of performance (city, state, zip code): Pierre, South Dakota 57501-2291
Completion status: Less Than 50% Completed

EDUCATION, TENNESSEE DEPARTMENT OF
Program: Title I Grants to Local Educational Agencies, Recovery Act
Award description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Project description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Jobs created/retained: Teachers, paraprofessionals, instructional facilitators, parent involvement coordinators, guidance counselors, resource specialists, tech coaches, clerical, and other educational specialists.
Place of performance (city, state, zip code): Nashville, Tennessee 37243-1219
Completion status: Less Than 50% Completed

Program: Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies
Award description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Project description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Jobs created/retained: The positions created or retained during this period included professional jobs as well as positions for support staff. The major job categories include counselors, teachers, educational aides, and administrators.
Place of performance (city, state, zip code): AUSTIN, Texas 78701-1402
Completion status: Less Than 50% Completed

Program: Title I, Part A - Improving Basic Programs Operated by Local Educational Agencies
Award description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Project description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Place of performance (city, state, zip code): Montpelier, Vermont 05620-2501
Completion status: Less Than 50% Completed

EDUCATION, VIRGINIA DEPARTMENT OF
Program: Title I Grants to Local Educational Agencies, Recovery Act
Award description: Title I Grants to Local Educational Agencies, Recovery Act. To help local education agencies (LEAs) and schools improve the teaching and learning of children failing, or most at-risk of failing, to meet challenging State academic standards.
Project description: Improve teaching and learning for students most at risk of failing to meet State Academic Achievement Standards.
Jobs created/retained: Jobs accounted for during the quarter ended 3/31/2010 represent employment types such as teachers, paraprofessionals, literacy coaches, reading specialists, math specialists, intervention specialists, aides, and resource professionals. This total is made up of 333.6 saved positions and 245.5 created positions.
Place of performance (city, state, zip code): Richmond, Virginia 23219-3673
Completion status: Less Than 50% Completed

PUBLIC INSTRUCTION, WASHINGTON STATE SUPERINTENDENT OF
Program: Title I, Part A - Improving Basic Programs Operated by Local Educational Agencies
Award description: Improve teaching and learning for students most at risk of failing to meet state academic achievement standards.
Project description: Improve teaching and learning for students most at risk of failing to meet state academic achievement standards.
Place of performance (city, state, zip code): Olympia, Washington 98504-7200
Completion status: Less Than 50% Completed

WEST VIRGINIA DEPARTMENT OF EDUCATION
Improve teaching and learning for students most at risk of failing to meet the State academic achievement standards.
Place of performance (city, state, zip code): Charleston, West Virginia 25305-0330
Completion status: Less Than 50% Completed

PUBLIC INSTRUCTION, WISCONSIN DEPT OF
Program: Title I, Part A--Improving Basic Programs Operated by Local Educational Agencies
Award description: Improve teaching and learning for students most at risk of failing to meet State academic achievement standards.
Jobs created/retained: 353.70 jobs were reported in this quarter. These positions include: math and reading literacy coaches; math support teacher; substitute teachers; literacy specialist/coach; reading teachers and specialists; day program teachers; counselors; stimulus projects coordinator; literacy support teachers; teacher; administrative assistant; instructor; Director of Learning & Reform; Title I coordinator; English teacher; parent assistant; paraprofessionals; clerical staff; intermediate literacy support coach; direct instruction specialist; site specific school improvement; teacher mentor; RTI coordinator; Title I inclusion teacher; Title I teachers; speech pathologist; aide; social worker; behavioral specialist; academic intervention specialist; secretarial support; academic support; remediation specialist; teacher assistant; Title I paraprofessional; home visitors; speaker/presenter/trainer; reading/literacy consultant; curriculum consultant; data consultant; math consultant; IT staff; principal; new leader advanced placement; early childhood workers; at-risk teacher; curriculum development coordinator; Title I teacher 316-License; remediation skills coordinator; reading coordinator; reading recovery teacher; kindergarten assistance; math resource teacher; parent involvement coordinator; Title I family coordinator; literacy coordinator; family outreach coordinator; resource teacher (preschool, elementary); secondary reading support teachers; homeless community liaison; research analyst; goal aide; solutions coordinator; dean of students; music teacher; student success coordinator; art therapist; AutoSkill coordinator; homework club staff; hearing interpreter; electronic sub; afterschool program tutor and administrator; extended day coordinator, secretary, clerk, and extended day staff; school family liaison; ELL teacher; data analysis coach; program managers; parent presenters; mentors; tutors; accounting staff; education consultant; SIFI/AYP coordinator; dual language immersion teacher; culturally relevant teacher; private/parochial professional
development teacher; after school professional development coordinator; after school program coordinator; parent involvement teachers; reading consultant; science teacher; interventionists; family coordinator; staff development specialist; food service support; bilingual resource specialist - Saturday program; learning facilitator; librarian; AmeriCorps workers; and ARRA administration coordinator.

The State Fiscal Stabilization Fund (SFSF) included approximately $48.6 billion to award to states by formula and up to $5 billion to award to states as competitive grants. The Recovery Act created the SFSF in part to help state and local governments stabilize their budgets by minimizing budgetary cuts in education and other essential government services, such as public safety. Stabilization funds for education distributed under the Recovery Act must first be used to alleviate shortfalls in state support for education to local educational agencies (LEA) and public institutions of higher education (IHE). States must use 81.8 percent of their SFSF formula grant funds to support education (these funds are referred to as education stabilization funds) and must use the remaining 18.2 percent for public safety and other government services, which may include education (these funds are referred to as government services funds). After maintaining state support for education at fiscal year 2006 levels, states must use education stabilization funds to restore state funding to the greater of fiscal year 2008 or 2009 levels for state support to LEAs and public IHEs. (An illustrative calculation of this split and the restoration rule follows the SFSF district summaries below.) When distributing these funds to LEAs, states must use their primary education funding formula, but they can determine how to allocate funds to public IHEs. In general, LEAs maintain broad discretion in how they can use education stabilization funds, but states have some ability to direct IHEs in how to use these funds.

Given that few descriptions fully met our transparency criteria, we administered a web-based survey to school district superintendents in the 50 states and the District of Columbia to determine how they are using Recovery Act funds. We conducted our survey between March and April 2010, with a 78 percent final weighted response rate. We selected a stratified random sample of 575 LEAs from the population of 16,065 LEAs included in our sample frame of data obtained from the Common Core of Data in 2007-2008. Of this sample, we randomly selected 150 LEAs (50 for each program) to gather illustrative information on how they used their Recovery Act funds. See appendix VII for more information on how we designed our survey. What follows are summaries of how these LEAs described their use of Recovery Act SFSF funds, based on their survey responses as well as information we collected through follow-up communications.

Acton-Boxborough Regional School District Acton, MA 01720 Award amount: $1,366,907 Acton-Boxborough Regional School District reported that it used its Recovery Act SFSF award to address special needs of its students and teachers. These funds covered the two schools in the district—the Raymond Grey Junior High and the High School. Specifically, the funds were used for paraprofessional staff retention, for teachers' health insurance, and for special education out-of-district tuition. As a result of these funds, officials reported that the district was able to retain approximately eight paraprofessionals and recover two special education assistants.
Therefore, officials reported that the district could maintain its student-teacher ratios for special education and other classes and allow the district to remain in compliance with Massachusetts regulations that require special education assistant teachers for every nine special education students. They also said these funds resulted in the district being able to pay for staff members’ health insurance and tuition for four out-of- district students (which totaled $220,670). Officials indicated that their Recovery Act SFSF award activities were less than 50 percent completed. Anchorage School District Anchorage, AK 99504 Award amount: $23,231,318 Anchorage School District reported that it used its Recovery Act SFSF award to enhance existing effective programs; implement innovative new programs; and ensure a safe learning environment with modern, efficient, and functional technology. Not including substitute teachers hired because of the impact of ARRA-funded professional development, or parents and families affected by ARRA-funded programs, the SFSF award covered 4,969 teachers, teacher aides, administrators, and staff, and 47,089 students in a range of schools and programs within the district. Specifically, the funds were used to retain and hire staff, provide professional development for instructional staff, purchase instructional materials, support preschool and summer school programs, enhance parent involvement activities, purchase or upgrade computer technology (hardware, software, servers, and systems), conduct student assessments and internal program evaluations, replace failing equipment, and implement building system renewals. Anchorage School District selected projects that would continue its ongoing work to improve scores on standardized tests, increase graduation rates, decrease student dropout rates, and prepare students for college and careers following graduation, all in a safe learning environment. District officials indicated that their Recovery Act SFSF award activities were less than 50 percent completed. Arcadia Unified School District Arcadia, CA 91007 Award amount: $3,294,536 Arcadia Unified School District reported that it used its Recovery Act SFSF award to backfill staff reductions caused by state budget cuts, thereby ensuring student progress by maintaining the district’s standard of an approximate 30-to-1 student-teacher ratio and providing the necessary programs to meet student needs. These funds covered all 10 schools in the district that serve approximately 10,000 students. Specifically, the funds were used to retain staff and support the district’s Response-to- Intervention program, its Walk-to-Read program, and before-school intervention in math and language arts. As a result of the SFSF award, officials reported that the district was able to retain approximately 20 of its 450 instructional positions and continue programs that meet the needs of its students, including those with special needs. They also said that these funds had the indirect result of allowing the district’s student-teacher ratio in grades K-3 to remain at approximately 20-to-1. Officials indicated that their Recovery Act SFSF award activities were less than 50 percent completed. Atlanta Public Schools Atlanta, GA 30303 Award amount: $14,536,203 Atlanta Public Schools reported that it used its Recovery Act SFSF award to save and retain instructional and noninstructional jobs. These funds benefited the district’s 107 schools and student population of approximately 47,000. 
Specifically, the district reported that the funds are being used to retain jobs that would have been lost because of a decrease in funding. As a result of the award, officials reported that Atlanta Public Schools saved over 440 jobs, which allowed class sizes to remain the same and support personnel to continue providing high levels of instruction with little or no distraction. Furthermore, the district reported that it anticipates that the additional funds will significantly assist the district with maintaining and expanding instructional reform efforts that focus on building capacity. Officials also anticipate that student achievement will be affected in a positive manner, as will standardized test scores, high school graduation rates, and teacher and principal effectiveness. District officials indicated that their Recovery Act SFSF award activities were less than 50 percent completed. Broward County Public Schools Fort Lauderdale, FL 33301 Award amount: $91,104,960 Broward County Public Schools reported that it used its Recovery Act SFSF award to save as many jobs as possible, which primarily included teaching positions and school support positions. These funds covered 234 schools. As a result of these funds, officials reported that the district was able to save over 1,400 jobs. Officials indicated that their Recovery Act SFSF award activities were fully completed. Burton Elementary Porterville, CA 93257 Award amount: $1,224,856 Burton Elementary reported that it used its Recovery Act SFSF award to save staff positions, including classified positions for those who work as classroom aides, librarians, and office clerks; maintain professional development and student programs; and purchase instructional materials. These funds covered approximately 3,800 students in seven schools. Specifically, the funds were used to pay staff salaries, and thus maintain low class sizes and programs such as art, music, libraries, and physical education. In addition, Burton Elementary used the funds to maintain staff development in order to help its teachers become better leaders and give them the necessary resources to help their students be successful. As a result of these funds, officials reported that the district was able to save 15 instructional positions and 5.25 classified positions and maintain its current student-teacher ratio of 20 to 1 in its lower grades. They also said that the funds will result in the district achieving higher levels of student success, improving scores on standardized tests, exiting program improvement status, and maintaining the district’s purpose and goals. Officials indicated that their Recovery Act SFSF award activities were more than 50 percent completed. Central Union High School District El Centro, CA 92243 Award amount: $1,895,213 Central Union High School District reported that it used its Recovery Act SFSF award to continue academic counseling and provide development for certified staff. The funds covered 10.6 full-time-equivalent counselors who advised 4,000 students in two comprehensive and one continuation high school. In addition, the district used the funds to provide 3 full days and 12 half days of staff development to 225 teachers. Specifically, the funds were used to continue previous allocation of staff development time, collaboration, and standards-based assessment. 
In addition, the funds were used to focus on a formative instructional methodology called Assessment for Learning, which uses information from a variety of sources to inform pedagogical decisions. As a result of these funds, officials reported that the district was able to maintain existing levels of academic advisement and its previous commitment to Assessment for Learning and standards-based initiatives. They also said that these funds resulted in retaining existing counseling staff and the services they were providing. Officials indicated that their Recovery Act SFSF award activities were 50 percent or more completed. Cheney Unified School District #268 Cheney, KS 67025 Award amount: $493,548 Cheney Unified School District #268 reported that it used its Recovery Act SFSF award to maintain its educational system at the current level to ensure student progress. These funds covered the three schools in this district that serves 775 students. Specifically, the funds were used to save certified and classified staff positions and maintain a desired student-teacher ratio. As a result of these funds, officials reported that the district was able to maintain a 20-to-1 student-teacher ratio at its elementary school and ratios between 24-to-1 and 27-to-1 at the middle and high schools. They also said that these funds resulted in the district saving between three and five positions, including a part-time math position, which kept math classes from having more than 30 students. Officials reported that their Recovery Act SFSF award activities were more than 50 percent completed. Chester School District Deep River, CT 06417 Award amount: $61,222 Chester School District reported that it used its Recovery Act SFSF award to retain teachers and to provide them with professional development in teaching strategies and data analysis. These funds covered teachers in all three elementary schools in the district. Specifically, the funds were used to provide consultation services through Performance Pathways, which is a technical tool that uses student data such as regular, standardized, and benchmark testing to inform decisions about changes in students' academic programs. As a result of the SFSF award, officials reported that the district expects to see improved scores on standardized tests as well as improved strategies using data-driven decisions in the classroom. They indicated that their Recovery Act SFSF award activities were less than 50 percent completed. Clayton County Public Schools Jonesboro, GA 30236 Award amount: $23,144,036 Clayton County Public Schools reported that it used its Recovery Act SFSF award to retain personnel. These funds supported approximately 312 personnel at 62 schools. Specifically, the funds were used for salaries of teachers across all grade levels and subject areas (except for vocational and special education). As a result of these funds, officials reported that the district was able to retain approximately 312 personnel. They indicated that their Recovery Act SFSF award activities were less than 50 percent completed. Coeur d'Alene District Coeur d'Alene, ID 83814 Award amount: $4,182,019 Coeur d'Alene District reported that it used its Recovery Act SFSF award to support existing employee salaries and benefits for a month. These funds supported 17 schools and approximately 1,100 employees working in maintenance, transportation, and administrative offices.
Specifically, the funds were used to help offset the cost of existing staff so that their health insurance benefits could be maintained and so that further cuts to existing programs would not be made. As a result of these funds, officials reported that the district was able to save extracurricular programs at the schools and health benefits for employees, but did not necessarily save any positions. They indicated that their Recovery Act SFSF award activities were less than 50 percent completed. Creighton Elementary District Phoenix, AZ 85016 Award amount: $2,275,658 Creighton Elementary District reported that it used its Recovery Act SFSF award to maintain class sizes at levels prior to those of 2010 and retain teaching staff. These funds provided support to approximately 7,400 students in nine schools and approximately 500 teachers and other staff members. Specifically, the funds mitigated losses in state funding by paying for staff salaries. As a result of the SFSF award, officials reported that the district was able to maintain student-teacher ratios at pre-2009-2010 levels, which were a maximum of 27 to 1 for grades K-3 and 32 to 1 for grades 4-8. Officials reported that maintaining these student-teacher ratios will ensure that students receive meaningful instructional opportunities. They indicated that their Recovery Act SFSF award activities were 50 percent or more completed. Crossroads Charter High School Charlotte, NC 28213 Award amount: $61,050 Crossroads Charter High School reported that it used its Recovery Act SFSF award to hire and retain teachers, paraprofessionals, and contractors; to enhance technology for computer-based instruction and school safety; and to purchase educational supplies and materials. These funds covered Crossroads Charter High School's one site that serves 271 students, has 16 teachers, and employs a host of paraprofessionals and contractors. Specifically, the funds were used for providing staff development for teachers and administrators, purchasing computers and safety equipment, and allowing for college and career readiness tours. As a result of these funds, officials reported that the school was able to save five positions, create three positions, and thus maintain its student-teacher ratio of 20 to 1. School officials said they also hope that the funds will facilitate increased graduation rates and improved high-stakes test scores from the concentrated staff and career exploration activities. They indicated that their Recovery Act SFSF award activities were less than 50 percent completed. E-Cademie, A Charter School Phoenix, AZ 85006 Award amount: $123,381 E-Cademie, A Charter School reported that it used its Recovery Act SFSF award to pay for monthly maintenance and operations expenses so the school could keep its doors open after the state of Arizona cut its regular payments for October, November, May, and June. These funds covered approximately 170 students, 10 teaching staff, and 6 support staff. As a result of the SFSF award, officials reported that the school was able to pay its staff and rent, thus preventing it from going into massive debt or closing down. They indicated that their Recovery Act SFSF award activities were fully completed. Fairfax County Public Schools Falls Church, VA 22042 Award amount: $37,426,150 Fairfax County Public Schools reported that it used its Recovery Act SFSF award to avoid further increases in general education class size by retaining an average of 1.5 teachers per school in approximately 189 schools.
As a result of these funds, officials reported that the district was able to retain approximately 276 classroom teachers. They indicated that their Recovery Act SFSF award activities were 50 percent or more completed. Flathead High School Kalispell, MT 59901 Award amount: $893,761 Flathead High School reported that it used its Recovery Act SFSF award to retain and hire staff, pay for professional development, and purchase instructional materials. These funds covered six schools, which included elementary, middle, and high schools. Specifically, the funds were used to maintain its current level of staffing for at-risk students and to maintain tutors for those students without using other funds. As a result of these funds, officials reported that the district was able to maintain its 20-to-1 student-teacher ratio for its special education classes. They indicated that their Recovery Act SFSF award activities were more than 50 percent completed. Forsyth County Schools Winston Salem, NC 27103 Award amount: $13,621,983 Forsyth County Schools reported that it used its Recovery Act SFSF award for offsetting pay for noninstructional personnel to replace the loss of state funds, as dictated by the North Carolina General Assembly. These funds covered all noninstructional personnel in the district—specifically, clerical and custodial positions. As a result of these funds, officials reported that the district was able to save 389 positions, as the average salary and total benefits of each position is approximately $35,000. They indicated that their Recovery Act SFSF award activities were more than 50 percent completed. Fort Sam Houston Independent School District San Antonio, TX 78234 Award amount: $843,721 Fort Sam Houston Independent School District reported that it used its Recovery Act SFSF award to purchase technology infrastructure, hardware, software, and training for staff. These funds covered approximately 1,500 students in three schools—an elementary, middle, and high school. Specifically, they were used to purchase network servers, mounting racks, and catalyst SmartNet power supplies. As a result of these funds, officials reported that the district was able to upgrade its technological infrastructure, hardware, and software for its staff and students. As a result of enhancements to the district’s infrastructure, teachers and students have more access to the latest technology for general classroom instruction. Officials indicated that their Recovery Act SFSF award activities were more than 50 percent completed. Hooksett School District Hooksett, NH 03106 Award amount: $426,184 Hooksett School District reported that it used its Recovery Act SFSF award to move more students with educational disabilities from special education classrooms into the general education classrooms by hiring new staff, providing professional development, and purchasing technology and instructional materials that target the special education population. These funds supported five schools, with a combined population of about 1,275 students. As a result of these funds, officials reported that the district was able to improve instructional practices resulting in increased student achievement. They indicated that their Recovery Act SFSF award activities were less than 50 percent completed. Houston Heights Learning Academy, Inc. 
Houston, TX 77007 Award amount: $20,267 Houston Heights Learning Academy, Inc., reported that it used its Recovery Act SFSF award to maintain two full-day prekindergarten programs for school readiness at one school. These funds targeted seven teachers and 129 students at the one school. Specifically, the funds were used for students who are in a lower economic bracket, have limited English proficiency, and need a full-day program to prepare them for school readiness. As a result of these funds, officials reported that they expect the school’s students will receive a strong foundation for academic achievement, which will eventually close gaps on standardized tests and improve graduation rates. In addition, the school will be able to retain two full-day prekindergarten teachers. Officials indicated that their Recovery Act SFSF award activities were 50 percent or more completed. Huron School District 02-2 Huron, SD 57350 Award amount: $920,254 Huron School District 02-2 reported that it used its Recovery Act SFSF award for general day-to-day operations of the district. These funds covered all 2,000 students in the district and supplanted South Dakota state aid. As a result of these funds, officials reported that the district was able to save 8 to 10 staff positions and maintain the same level of services offered in the prior year. They indicated that their Recovery Act SFSF award activities were fully completed. Integrity Education Corporation Scottsdale, AZ 85271 Award amount: $41,640 Integrity Education Corporation reported that it used its Recovery Act SFSF award to maintain its education program in the face of declining state funding. These funds covered one school consisting of 70 students. Specifically, the funds were used to retain staff and purchase instructional materials and kitchen equipment. As a result of this SFSF award, officials reported that the school was able to save one instructional position and improve scores on standardized tests. They indicated that their Recovery Act SFSF award activities were fully completed. Joshua Academy Evansville, IN 47713 Award amount: $126,496 Joshua Academy, a charter school, reported that it used its Recovery Act SFSF award to maintain normal operations after the state of Indiana substituted its regular funding with the award money. Thus, the academy received the same funding as usual, just from a different source. These funds covered the 22 teachers and 240 students at Joshua Academy. As a result of these funds, officials reported that the academy was able to continue operations as normal without undertaking additional budget cuts even though the state of Indiana is undergoing budget cuts. They indicated that their Recovery Act SFSF award activities were fully completed. Lee County School District Bishopville, SC 29010 Award amount: $796,651 Lee County School District reported that it used its Recovery Act SFSF award to save instructional positions. These funds primarily targeted 12 instructional positions and affected all schools in the district—including its elementary, middle, and high schools. Specifically, the funds were used to pay for the district’s utility bills and property, casualty, and worker’s compensation insurance premiums, which freed up state and local funds to pay for instructional staffs’ salaries and fringe benefits. 
As a result of these funds, officials reported that the district was able to save approximately 12 instructional positions by using state and local funds for the salaries, which have helped maintain class sizes in its elementary schools and helped the district continue offering assistance to its ESOL (English for Speakers of Other Languages) students at all locations. The funds also resulted in the district keeping programs such as AP (Advanced Placement) English, the teacher cadet program, art and music in its middle and high schools, and a vocational program directed at special needs students. Officials reported that their Recovery Act SFSF award activities were 50 percent or more completed. Liberty School District Roland, OK 74954 Award amount: $85,795 Liberty School District reported that it used its Recovery Act SFSF award to save and retain instructional positions. These funds covered all 325 students and 24 certified teachers in this K-8 district. Specifically, the funds were used to save and retain third, fifth, and sixth grade instructional positions. As a result of these funds, officials reported that the district was able to save a total of three positions. They indicated that their Recovery Act SFSF award activities were more than 50 percent completed. Liberty-Eylau Independent School District Texarkana, TX 75501 Award amount: $971,887 Liberty-Eylau Independent School District reported that it used its Recovery Act SFSF award to provide the best education possible for its students by providing services and implementing programs. These funds covered all six campuses in the district that serves a total of 2,900 students. Specifically, the funds were used to hire and retain supplemental classroom teachers and instructional aides, as well as a career and technology specialist for its vocational program. In addition, officials reported that the district used the funds to provide core subject professional development for teachers at a local service center, pay for substitute teachers so that new teachers could participate in a mentoring program, and purchase test preparation materials and several new computers and projectors. As a result of these funds, officials said the district was able to improve technology availability in the classroom and save or retain 10 to 12 positions, which the district hopes will improve scores on standardized tests. They indicated that their Recovery Act SFSF award activities were more than 50 percent completed. Life Skills Center-Middletown Middletown, OH 45042 Award amount: $164,378 Life Skills Center-Middletown reported that it used its Recovery Act SFSF award to hire and retain teachers despite budget cuts. These funds covered this dropout recovery high school and affected teachers in the school's learning lab that serves 30 to 50 students per day. Specifically, the funds were used to serve all new students with the transition lab, which will prepare them for the classroom labs. As a result of these funds, officials reported that the school was able to retain three full-time equivalents and that they hope to increase retention, attendance, and student acclimation, thereby leading to increased graduation rates. In addition, they said they hope that the increased individualized attention will increase scores on students' standardized tests. Officials indicated that their Recovery Act SFSF award activities were more than 50 percent completed.
Lombard School District 44 Lombard, IL 60148 Award amount: $460,145 Lombard School District 44 reported that it used its Recovery Act SFSF award to construct a four-classroom addition at its Butterfield School. These funds will affect approximately 100 students and six schools, two directly and four indirectly. Specifically, the four-classroom addition will house the district’s early childhood and kindergarten readiness programs. As a result of these funds, officials reported that the district will be able to serve all of its early childhood and kindergarten readiness programs at one building with state-of-the-art facilities, which will alleviate overcrowding at their current location. Officials indicated that their Recovery Act SFSF award activities were less than 50 percent completed. Marietta City Schools Marietta, GA 30060 Award amount: $3,484,874 Marietta City Schools reported that it used its Recovery Act SFSF award for instructional personnel salaries and benefits to offset state funding reductions in accordance with directions from the state. Officials said it was not possible to say how many schools or students were affected. They reported that the funds were reclassified to compensate for funds the state could not provide because of a decline in state revenues. As a result of these SFSF funds, officials said that the district was able to save about 97 staff positions according to the state budget calculation. Officials indicated that their Recovery Act SFSF award activities were fully completed. Medical Center Charter School Houston, TX 77030 Award amount: $37,889 Medical Center Charter School reported that it used its Recovery Act SFSF award to increase special education services and increase teacher quality. These funds covered one campus with about 250 students in grades prekindergarten through sixth. Specifically, the funds were used for the early detection of learning disabilities and the expansion of all-day prekindergarten. In addition, the school used the funds for the implementation of new software, staff retention, professional development, and incentives. As a result of these funds, officials reported that the school was able to support eight positions and increase staff job satisfaction. They indicated that their Recovery Act SFSF award activities were 50 percent or more completed. Mobile County Public Schools Mobile, AL 36618 Award amount: $14,817,861 Mobile County Public Schools reported that it used its Recovery Act SFSF award to pay for teachers, which allowed the district to avoid reducing its number of teachers. In addition to retaining teacher positions, the funds were used for professional development (specific to their grade level and subject area) that allowed teachers to meet school system requirements. The award funds affected about 60,000 students and about 6,000 teachers in the district’s 89 schools. As a result of this SFSF award, officials reported that the district was able to maintain its 20-to-1 student-teacher ratio for grades K through 3, 24–to-1 ratio for grades 4 through 6, and 28–to-1 ratio for grades 7 through 12. They indicated that their Recovery Act SFSF award activities were 50 percent or more completed. Mount Vernon School District 17-3 Mount Vernon, SD 57363 Award amount: $133,960 Mount Vernon School District 17-3 reported that it used its Recovery Act SFSF award to supplant money from the state and that the funds were used for salaries. 
These funds covered all 240 students in the district, but officials indicated that it was not possible to say which positions would have been affected. As a result of these funds, officials reported that the district was able to save two positions. They indicated that their Recovery Act SFSF award activities were fully completed. Muscogee County School District Columbus, GA 31906 Award amount: $16,907,769 Muscogee County School District reported that it used its Recovery Act SFSF award for staff retention throughout the district. Specifically, these funds were used to retain elementary teachers, media specialists, paraprofessionals, clerks, and assistant principals. As a result of these funds, officials reported that the district was able to save 223.4 jobs. They indicated that their Recovery Act SFSF award activities were less than 50 percent completed. Newhall Elementary Valencia, CA 91355 Award amount: $2,206,649 Newhall Elementary reported that it used its Recovery Act SFSF award to retain teachers and maintain programs. These funds targeted 10 schools and affected approximately 758 students. Specifically, the funds enabled the district to enrich the learning experience in the primary grades by keeping class sizes low as part of the state’s class size reduction program in grades K-3. As a result of the SFSF award, officials reported that the district was able to retain 31 teachers, and thus maintain an average student/teacher ratio of 22 to 1 in grades K-3. They indicated that their Recovery Act SFSF award activities were 50 percent or more completed. North Merrick Union Free School District Merrick, NY 11566 Award amount: $675,135 North Merrick Union Free School District reported that it used its Recovery Act SFSF award to maintain a comprehensive educational program for both general and special education students, equitably supporting programs in each of its schools. These funds targeted each of the district’s three elementary schools, which serve approximately 1,320 students. Specifically, the district used the award predominantly to retain staff and provide ongoing professional development in support of important federal/state initiatives (e.g., Response to Intervention) and used a small portion of it to purchase educational technology in support of district initiatives. As a result of these funds, officials reported that the district expects to maintain important district educational programs and staff in the arts, music, library, and literacy; continue to demonstrate excellent student results on all educational assessments; and continue to meet the goals of the district technology plan, especially in terms of technology integration with instruction. Officials indicated that their Recovery Act SFSF award activities were 50 percent or more completed. Northwestern School Corporation Kokomo, IN 46901 Award amount: $943,353 Northwestern School Corporation reported that it used its Recovery Act SFSF award to retain current staffing as a substitute for its state tuition support. These funds covered about 115 teachers at four schools that have a total of about 1,650 students. As a result of these funds, officials reported that the district was able to maintain its level of teachers, its current academic program, and high test scores. They indicated that their Recovery Act SFSF award activities were fully completed. 
Pacific Elementary Davenport, CA 95017 Award amount: $39,724 Pacific Elementary reported that it used its Recovery Act SFSF award to maintain an intervention program for the lowest-performing students by retaining the program's staff. These funds affected one position and covered 14 of the 101 students at Pacific Elementary, which is a single-school district. Specifically, the funds were allocated for a reading specialist, funding 80 percent of the position in the 2009-2010 school year and allowing the district to retain the position for the 2009-2010 and 2010-2011 school years, even though doing this will require spending down reserves. As a result of the SFSF award, officials reported that the specialist can continue providing significant interventions for students performing below grade level. According to officials, these learners are making academic progress based on a variety of assessments such as the Bader Reading and Language Assessment, the Lindamood Auditory Conceptualization Test, and the Comprehensive Test of Phonological Awareness. Officials indicated that their Recovery Act SFSF award activities were more than 50 percent completed. Pelham City School District Pelham City, GA 31779 Award amount: $915,617 Pelham City School District reported that it used its Recovery Act SFSF award to hire and retain instructional staff. The funds targeted three schools—one elementary, middle, and high school—that serve a total of approximately 1,425 students. Specifically, the funds were used to hire and retain paraprofessionals, full-time certified staff, and an instructional specialist. As a result of these funds, officials reported that the district was able to save six paraprofessionals, two full-time certified staff, and 25 percent of an instructional specialist's position. They indicated that their Recovery Act SFSF award activities were 50 percent or more completed. Prince George's County Public Schools Upper Marlboro, MD 20772 Award amount: $46,542,234 Prince George's County Public Schools reported that it used its Recovery Act SFSF award to restore financial support to maintain its buildings in a manner that provides for a safe, healthy, and comfortable learning environment. These funds affected all of the district's 127,000 students and 202 schools. Specifically, the funds were used to support districtwide fixed utility costs as an indirect way of continuing to build academic progress, maintain successful instructional programs, and fund the necessary resources to prepare students for state assessments. As a result of these funds, officials reported that the district was able to prevent districtwide employee furloughs, saving the district from a potential loss of 37 days across various employee classifications. Next, officials reported that the funds prevented the potential downgrade of activities and programs, such as the Advancement Via Individual Determination program--an instructional program designed to improve extended learning opportunities in the core subject areas. In addition, the SFSF award resulted in the district not increasing its student-teacher ratios of 22 to 1 in grades K through 2, 25 to 1 in grades 3 through 6, 30 to 1 in grades 7 through 8, and 20 to 1 in grades 9 through 12. Last, the funds allowed them to restore bus driver and bus attendant positions. District officials indicated that their Recovery Act SFSF award activities were 50 percent or more completed.
Recovery School of Southern Minnesota Owatonna, MN 55060 Award amount: $16,823 Recovery School of Southern Minnesota reported that it used its Recovery Act SFSF award to provide instruction to students by retaining instructional staff. These funds covered one site that serves approximately 30 students. In particular, the funds were used, along with other funds, to retain a full-time general education/special education teacher. According to school officials, these funds assisted the school with retaining one instructional position. They indicated that their Recovery Act SFSF award activities were more than 50 percent completed. San Bernardino City Unified School District San Bernardino, CA 92410 Award amount: $22,316,420 San Bernardino City Unified School District reported that it used its Recovery Act SFSF award to reduce layoffs due to budget cuts. These funds covered 44 elementary schools, which have a combined enrollment of approximately 25,175 students. Specifically, the funds were used to keep class sizes lower in grades K-3. As a result of the SFSF award, officials reported that the district was able to save 65 positions and maintain a class size of 21 to 1, rather than increasing to 22 to 1, for grades K-3. District officials reported that their Recovery Act SFSF award activities were less than 50 percent completed. Santa Clara County Office of Education San Jose, CA 95131 Award amount: $3,414,075 Santa Clara County Office of Education (SCCOE) reported that it used its Recovery Act SFSF award to maintain and augment its support to school districts, charter schools, regional occupation programs, and alternative education programs through creating and retaining staff positions. The SCCOE reported that the award was used to augment its support of 12,749 teachers for 261,945 students at 36 school districts (21 elementary, 6 unified, 5 high school, 4 community colleges) and 387 public school sites inclusive of 34 charter schools (239 elementary, 55 middle, 51 high school, 18 continuation, 10 alternative, 9 community day, 2 K-12, 1 special education, 1 juvenile hall, and 1 county community). Specifically, they said the funds were used to maintain and augment support for curriculum, instruction, assessment, accountability, career technology education, preschool services, school health services, and categorical programs. As a result of their SFSF funds, officials reported that the district was able to create 17.1 positions, which was the equivalent of 14.45 FTEs. They indicated that their Recovery Act SFSF award activities were projected to be more than 50 percent completed by June 30, 2010. Sbe–The School of Arts and Enterprise Sacramento, CA 95814 Award amount: $187,919 Sbe–The School of Arts and Enterprise reported that it used its Recovery Act SFSF award to maintain standards by preventing layoffs. These layoffs would have occurred because the state reduced per-student funding by $500, which would have meant a $200,000 reduction for the school. These funds covered all 400 students at the school by retaining staff and replacing employees from turnover. As a result of these funds, officials reported that the school was able to save three to four teacher positions, which allowed it to have a 20-to-1 student-teacher ratio rather than a 25-to-1 ratio that it would have had without the award funds. They indicated that their Recovery Act SFSF award activities were more than 50 percent completed.
Terrebonne Parish School District Houma, LA 70360 Award amount: $2,659,177 Terrebonne Parish School District reported that it used its Recovery Act SFSF award to retain master teachers, to fund performance pay for employees at schools that meet the state-established growth target on the LEAP test, and to fund its summer school and remediation programs. The award helped fund performance pay for employees at the 12 district schools that met state growth targets, targeted 10 schools where master teachers serve, and provided partial funding for approximately 3,000 students in its summer program. Specifically, the funds covered summer school stipends, materials and transportation, and teachers' performance pay stipend and benefits costs. As a result of their SFSF funds, officials reported that the school was able to retain 10 master teachers who assisted with curriculum and instruction. They also said that these funds resulted in the school continuing its summer school and remediation programs so that students who did not pass the LEAP test could have additional instructional time before they retake it. Officials indicated that their Recovery Act SFSF award activities were more than 50 percent completed. Tulelake Basin Joint Unified School District Tulelake, CA 96134 Award amount: $295,390 Tulelake Basin Joint Unified School District reported that it used its Recovery Act SFSF award to support the continued operation of its music program and staff retention. Specifically, the funds were used to rehire the music teacher and retain one teacher at the elementary school and another at the middle school. These funds affected a total of 536 students who took classes from the music teacher, 125 students at a K through 2 elementary school, 170 students at a 3 through 6 elementary school, and 241 students at the middle and high schools. As a result of these funds, officials reported that the district was able to maintain class sizes of approximately 20 students and save three instructional positions. These funds also allowed the district to keep its arts program in the schools. District officials indicated that their Recovery Act SFSF award activities were 50 percent or more completed. Valley View Elementary Polson, MT 59860 Award amount: $14,664 Valley View Elementary reported that it used its Recovery Act SFSF award to retain highly qualified teachers and an instructional aide at its 23-student school. In particular, the funds were used to retain the staff and pay for the cost of additional benefits, especially its health insurance costs. As a result of the SFSF award, the school anticipates it will be able to retain three staff, and it hired one instructional aide. School officials reported that their Recovery Act SFSF award activities than 50 percent completed. Vineland Public School District Vineland, NJ 08360 Award amount: $14,788,960 Vineland Public School District reported that it used its Recovery Act SFSF award to pay for employee health benefits that are a key part of the budget. District officials said that they decided to use the funds for health benefits because doing so allows them to charge as few items as possible to the SFSF award, thus enabling the greatest amount of transparency and taxpayer review. These funds covered health benefits for all of the district's 431 administrative staff members and 352 high school staff members at the two campuses that do not receive Title I funds.
Specifically, the funds were used for the benefits of bus drivers, assistant elementary school principals, basic skills teachers, and other instructional and noninstructional positions. As a result of these funds, officials reported that the district was able to retain approximately 219.5 positions—specifically 61 bus drivers, 7 assistant elementary school principals, 41 basic skills teachers, and 110.5 other instructional and noninstructional positions throughout the district. They indicated that their Recovery Act award activities were more than 50 percent completed.

Wake County Schools
Raleigh, NC 27609
Award amount: $35,150,824
Wake County Schools reported that it used its Recovery Act SFSF award to offset a reduction in state funds for noninstructional support, which the North Carolina Department of Public Instruction cut for the 2009-2010 and 2010-2011 school years. These funds covered all schools in the Wake County Public School System. Specifically, the funds were used to support custodial and clerical positions. As a result of these funds, officials reported that the district was able to continue to provide school-based clerical and custodial support and save an estimated 493 custodial and 423 clerical jobs. Specifically, they said it saved a total of 10,552.25 months of school-based employment and 48 months of employment in central services for a total of 10,600.25 months of employment per year. Officials indicated that their Recovery Act SFSF award activities were less than 50 percent completed.

West Holmes Local School District
Millersburg, OH 44654
Award amount: $908,249
West Holmes Local School District reported that it used its Recovery Act SFSF award to maintain purchased services that were previously state-funded. Because West Holmes Local School District is over 50 percent state-funded, it used the SFSF funds to offset reductions in the state funding it had previously received. These funds included general funding for an alternative school, virtual classroom, and community school; Internet services for the district; gifted education services; computer technician services; and district audit services. However, it is not possible to say exactly how many students or schools were affected. As a result of these funds, officials reported that the district was able to maintain its property, fleet, and liability insurance coverage and pay for audit-related and technology fees. These funds also allowed the district to save several jobs and maintain its current student-teacher ratio to help it achieve its goal of improved scores on standardized tests. They indicated that their Recovery Act SFSF award activities were less than 50 percent completed.

Woodson Independent School District
Woodson, TX 76491
Award amount: $46,884
Woodson Independent School District reported that it used its Recovery Act SFSF award to purchase hardware and software to improve, supplement, and expand instructional programs, including response-to-intervention and progress monitoring. These funds supported all students—120 total—in this K-12 district. As a result of these funds, officials reported that the district was able to retain staff. They also said they expect that student achievement will increase. Officials indicated that their Recovery Act SFSF award activities were 50 percent or more completed.

The Recovery Act provided supplemental funding for programs authorized by the Individuals with Disabilities Education Act, as amended, the major federal statute that supports the provision of early intervention, special education, and related services for children and youth with disabilities. Part B ($11.7 billion) provides funds to ensure that preschool and school-age children with disabilities have access to a free and appropriate public education and is divided into two separate grant programs: Part B grants to states (for school-age children) and Part B preschool grants. Our review focused only on Part B grants to states for school-age children.

Given that few descriptions fully met our transparency criteria, we administered a web-based survey to school district superintendents in the 50 states and the District of Columbia to determine how they are using Recovery Act funds. We conducted our survey between March and April 2010, with a 78 percent final weighted response rate. We selected a stratified random sample of 575 LEAs from the population of 16,065 LEAs included in our sample frame of data obtained from the Common Core of Data in 2007-2008. Of this sample, we randomly selected 150 LEAs (50 for each program) to gather illustrative information on how they used their Recovery Act funds. See appendix VII for more information on how we designed our survey. What follows are summaries of how these LEAs described their use of Recovery Act IDEA Part B funds, based on their survey responses as well as information we collected through follow-up communications.

American Charter Schools Foundation d.b.a. Sun Valley High School
Phoenix, AZ 85020
Award amount: $27,382
American Charter Schools Foundation d.b.a. Sun Valley High School reported that it used its Recovery Act IDEA award to improve scores on standardized tests, increase special education students' access and understanding of the general education curriculum, and enhance supports and instructional modifications for special education students in the inclusive setting. These funds served over 70 special education students. Specifically, the funds were used to hire a part-time special education coordinator to enhance supports and instructional modifications, purchase instructional materials, and provide related services for special education students such as speech, physical therapy, psychological, hearing, and vision services. As a result of these IDEA funds, officials reported that the school was able to improve standardized test scores, improve dropout and graduation rates, and increase understanding of and accessibility to the general education curriculum. Officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Arp Independent School District
Arp, TX 75750
Award amount: $382,876
Arp Independent School District reported that it used its Recovery Act IDEA award to hire a special education teacher and instructional aide to work with students with emotional disabilities, purchase special education manager software, and purchase a bus for hearing-impaired students. These funds supported one campus and approximately 90 students. The funds were also used to create two new staff positions. As a result of these IDEA funds, officials reported that the district was able to transport students with hearing impairments more efficiently, individualize instruction to the needs of students with emotional disabilities, and cut down on referrals by identifying students with special needs.
They indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Biloxi Public School District
Biloxi, MS 39530
Award amount: $1,165,859
Biloxi Public School District reported that it used its Recovery Act IDEA award to retain jobs and replace and upgrade technology for students with special needs. These funds supported 593 students with special needs across all 11 schools. Specifically, the funds were used to retain two examiners and two psychologists and purchase new computers and printers for student use. As a result of these IDEA funds, officials reported that the district was able to ensure that students with disabilities receive assessment services and provide them more individualized assistance. The technology will allow the students to access the newer intervention software. District officials reported that their Recovery Act IDEA award activities were 50 percent or more completed.

Blackstone Valley Regional Vocational Technical High School
Upton, MA 01568
Award amount: $215,190
Blackstone Valley Regional Vocational Technical High School reported that it used its Recovery Act IDEA award to fund administrative stipends for two special education personnel. These funds supported the single regional school in the district, affecting the entire special education population of 140 students. Specifically, the funds were used to support a special education chair, whose purpose is to carry out many aspects of the administration of special education, and a special education team leader, whose purpose is to improve coordination within the department and among the different disciplines. As a result of these funds, officials reported that the school was able to integrate academic and vocational studies, revise its curriculum with recommendations from state and federal agencies, and assist with an inclusion program for special education students. Officials also said that these funds resulted in coordination of individualized education program (IEP) services and reevaluations, provision of liaisons with parents, and improvement of services to special education students. They indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Bonham Independent School District
Bonham, TX 75418
Award amount: $387,509
Bonham Independent School District reported that it used its Recovery Act IDEA award to purchase technology and instructional materials, provide professional development, and create one part-time position. Because students with special needs are included in the general education classroom, these funds affected all students in the district (approximately 2,000). Specifically, the funds were used to purchase technology and software for students with special needs, a special needs school bus, instructional materials, and additional technology for the classrooms. The award was also used for professional development for teachers working with students with special needs and to create one part-time social worker position. As a result of these IDEA funds, officials reported that the district was able to help teachers improve their instructional techniques and increase student achievement. They also said that these funds resulted in better transportation of students with special needs so they can participate in school activities. Officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Christina School District
Wilmington, DE 19801
Award amount: $4,954,517
Christina School District reported that it used its Recovery Act IDEA award to support Coordinated Early Intervention Services for students with disabilities who have academic or behavioral issues, to supplement funds to secondary schools for extended day and extended year programs for students with disabilities, to provide professional development to staff working with students with disabilities, and to expand birth-to-five activities for parents and students. These funds supported about 22 schools and 16,000 students. Specifically, the funds were used to expand birth to 3-year-old parent and child programs in high-need areas; provide afternoon preschool programs for 30 children; construct an academic support center at one high school to assist and enrich students at a variety of achievement levels; hire academic and behavior interventionists to support student needs; conduct training in research-based instructional practices; conduct formal third-party reviews of all schools to gather baseline information on each school's performance; and create professional development plans for staff and school leaders. As a result of these IDEA funds, district officials reported seeing a significant improvement in behavioral referrals this school year and expect student enrollment and retention rates, as well as academic achievement, to improve over time. Officials indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Colton-Pierrepont Central School District
Colton, NY 13625
Award amount: $41,595
Colton-Pierrepont Central School District reported that it used its Recovery Act IDEA award to keep in place a Response to Intervention reading program, purchase materials for this program, and retain one position. These funds supported one school with approximately 330 students and were specifically used for both special education students and regular education students to help prevent their classification into special education. Specifically, the funds were used to keep the district's co-teacher model working by employing a special education teacher, purchase teaching materials to update literacy programs, and provide staff with high-quality, research-based professional development. As a result of these funds, officials reported that the district was able to retain its small class sizes by not having to reduce teaching staff. They also said that these funds kept the reading groups small, with the assistance of reading specialists, to provide students the literacy instruction they need. Officials reported that their Recovery Act IDEA award activities were 50 percent or more completed.

DeKalb County School System
Decatur, GA 30032
Award amount: $19,669,324
DeKalb County School System reported that it used its Recovery Act IDEA award to increase the achievement of students with disabilities. These funds affected roughly 20 high schools and 20 middle schools. Specifically, the funds were used to retain staff, hire additional board-certified behavior analysts to support schools as needed, fund special education paraprofessionals, and hire lead teachers for special education to provide support to elementary schools. The funds were also used to provide professional development, provide personnel to supply ongoing coaching and support to school staff, and purchase equipment.
As a result of these funds, officials reported that the district was able to improve the achievement of students with disabilities and provide elementary schools with more time with their existing lead teachers for special education. In addition, they said that the district was able to fund special education paraprofessionals who were previously paid through local dollars. Officials indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Eastern York School District
Wrightsville, PA 17368
Award amount: $310,132
Eastern York School District reported that it used its Recovery Act IDEA award to provide services and mental health/behavioral counseling to students with disabilities as well as professional development to staff. These funds supported 35 schools across York County and 111 students. Specifically, the funds were used to provide transportation; occupational/physical therapy; speech, vision, and transition services to students; and Response to Instruction and Intervention and schoolwide positive behavior support training for instructional staff and paraeducators. As a result of these IDEA funds, officials reported that they were able to reduce dropout rates from 14 percent in the 2007-2008 school year to 2 percent in the 2009-2010 school year and continue to provide a low student-teacher ratio. District officials reported that their Recovery Act IDEA award activities were fully completed for the 2009-2010 school year, and they plan to continue these activities even after the Recovery Act funds expire.

Elko County School District
Elko, NV 89803
Award amount: $1,402,931
Elko County School District reported that it used its Recovery Act IDEA award to assist in maintaining innovative programs that were in jeopardy of being eliminated. Funds were also used to incorporate new strategies and retain jobs. These funds supported approximately 1,500 students throughout the 22 schools in the district. Specifically, the funds were used for 25 percent of each of four RISE (a student retention and teacher mentor program) instructional coaches' salaries; one RTI (Response to Intervention) coordinator; 25 percent of the salary of one special teacher who works with the administration of the Positive Behavior Support model across the district; and one teacher who provides support to teachers working with students with autism. In addition, a significant amount of professional development was offered, and SmartBoards, SmartResponse systems, audio enhancement technology, and other assistive technology were infused into the classrooms. As a result of these funds, officials reported that the district was able to provide additional instructional materials and resources for teachers, maximizing the impact on children directly as well as saving jobs. They also said that these funds resulted in efforts to positively affect student achievement. Officials indicated that their Recovery Act IDEA award activities were 50 percent or more completed.

Florence City Schools
Florence, AL 35630
Award amount: $1,010,802
Florence City Schools reported that it used its Recovery Act IDEA award to provide instruction and support to at least 724 special needs children in eight schools. Specifically, the funds were used to retain or hire staff and purchase instructional software for Title I schools. As a result of these funds, officials reported that the district was able to save at least six instructional and clerical positions.
They indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Galesville-Ettrick-Trempealeau School District
Galesville, WI 54630
Award amount: $284,286
Galesville-Ettrick-Trempealeau School District reported that it used its Recovery Act IDEA award to improve literacy scores in grades K-8 for all students, including special education students. These funds supported four schools with a total student population of 950 students, 120 of whom are special education students. Specifically, the funds were used to add a middle school literacy program called Read 180, which includes books, software, and computers. Additionally, the funds were used to hire a literacy coach for elementary schools. As a result of these IDEA funds, officials reported that the district was able to increase reading levels and help teachers identify students who struggle in reading and develop strategies to improve reading. They indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Glasgow K-12 Schools
Glasgow, MT 59230
Award amount: $219,619
Glasgow K-12 Schools reported that it used its Recovery Act IDEA award to establish new learning centers to help at-risk students before and after school. These funds targeted 25 students with special needs or who are at risk in three schools. Specifically, the funds were used to hire three paraprofessionals to assist in these learning centers. As a result of these funds, officials reported that the district was able to increase the level of achievement, especially in the area of communication arts. They indicated that their Recovery Act IDEA activities were less than 50 percent completed.

Greenville County Schools
Greenville, SC 29602
Award amount: $8,466,248
Greenville County Schools reported that it used its Recovery Act IDEA award to maintain the same level of Special Education Service delivery and support for Special Education students within the School District of Greenville County. These funds supported all preschool, elementary, middle, and high schools, as well as a number of special centers in the district (98 locations total) and served 10,251 students as of December 1, 2009. Specifically, the funds were used to retain personnel, as well as provide instructional and contract services, and purchase instructional materials and equipment. They also provided in-county travel mileage for staff members. As a result of these IDEA funds, officials reported that the district was able to save approximately 100 jobs, many of which were classroom positions. They also said that these funds resulted in maintaining classroom sizes to prevent compromising Special Education Services. Officials indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Harmony Science Academy (Waco)
Waco, TX 77099
Award amount: $77,766
Harmony Science Academy (Waco) reported that it used its Recovery Act IDEA award to contract for services to provide professional development and educational materials for the special education teacher. These funds supported approximately 10 special education students being served at this school. Specifically, the funds were used to retain one special education teacher, provide new instructional materials, and provide professional development to the teachers. As a result of these funds, officials reported that the school was able to improve instruction for students.
School officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Henry Johnson Charter School
Albany, NY 12206
Award amount: $54,628
Henry Johnson Charter School reported that it used its Recovery Act IDEA award to add staff for Academic Intervention Services (AIS) math intervention. These funds targeted 20 to 25 students served daily by an AIS teacher. Specifically, the funds were used to hire an AIS math teacher to provide math intervention for students with special needs as well as those students who are struggling with math learning. As a result of these IDEA funds, officials reported that the school was able to improve math achievement and scores on standardized tests. They indicated that their Recovery Act IDEA award activities were fully completed.

Houston Independent School District
Houston, TX 77092
Award amount: $42,407,819
Houston Independent School District reported that it used its Recovery Act IDEA award to retain and hire staff, provide professional development, purchase instructional materials, and provide social and emotional services. These funds served 297 schools and 200,345 students, including 16,503 IDEA students in grades K through 12 and 1,342 IDEA students in preschool. Specifically, the funds were used to restructure the school day and class size, support new professional development programs, provide resources to establish and support differentiated instructional programs and online learning, provide social and emotional support activities, and provide academic reinforcement. As a result of these funds, officials reported that the district was able to improve scores on standardized tests and increase graduation rates. They indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Hunterdon Central Regional High School
Flemington, NJ 08822
Award amount: $625,920
Hunterdon Central Regional High School reported that it used its Recovery Act IDEA award to improve the district's self-contained programs in life skills education and the behavioral disabilities program and to improve core content instruction in special education academic settings. These funds supported approximately 500 students with special needs in the district's single school. Specifically, the funds were used to hire consultants to train staff about behavioral interventions in the classroom and on using new computer-assisted materials that remediate writing, reading, and mathematics weaknesses. New materials that focus on the remediation of writing, reading, and math skills were purchased to improve the depth of the curriculum offered in special education classrooms. In addition, personal computing devices will be purchased for special education students to assist with coursework completion. As a result of these funds, officials reported that the district was able to maintain programs for students with multiple disabilities and behaviorally disabled students. They also said that they anticipate improved test results on standardized state testing. They indicated that their Recovery Act IDEA award activities were 50 percent or more completed.

Lafayette School Corporation
Lafayette, IN 47904
Award amount: $5,099,284
Lafayette School Corporation reported that it used its Recovery Act IDEA award to provide additional educational services for students with special needs and students with academic deficiencies.
These funds have allowed increased educational services to 1,550 IDEA students within the 11 schools in the school corporation. Specifically, the funds were used to hire additional staff to work with special needs students and students with academic needs. As a result of these IDEA funds, officials reported that Lafayette School Corporation was able to retain or hire staff for over 130 instructional positions to work with IDEA students. They also said that these funds resulted in the preservation of programs and maintenance of current student-teacher ratios. Officials indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Metropolitan School District of Decatur Township
Indianapolis, IN 46221
Award amount: $764,847
Metropolitan School District of Decatur Township reported that it used its Recovery Act IDEA award to continue and expand IDEA reform efforts in the district by providing professional development for special education teachers. These funds supported the retention of nine teachers who function as instructional coaches, benefiting all students and teachers in the district. These instructional coaches concentrate half their time supporting professional development for staff who work with IDEA students, and half their time providing interventions for IDEA students. As a result of these funds, officials reported that in response to the increased focus on instructional strategies and smaller learning communities, they expect that IDEA students in all grades will have strong gains in standardized testing in areas where improvement was stagnant last year. In addition, officials report that they expect their graduation rate to continue to improve to at least 80 percent in the near future. Officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Detroit Midtown Academy
Detroit, MI 48201
Award amount: $79,647
Detroit Midtown Academy reported that it used its Recovery Act IDEA award to retain and improve the capacity of special education programming. These funds supported one school with approximately 52 students with special needs. Specifically, the funds were used to hire an additional full-time teacher, retain a part-time aide, purchase computer equipment for one special education lab, purchase additional instructional supplies, and purchase adaptive technology. As a result of these funds, officials reported that the school was able to maintain the current student-teacher ratio and improve scores on standardized tests because of greater use of instructional technology and new instructional materials. They indicated that their Recovery Act IDEA award activities were 50 percent or more completed.

Mattoon Community Unit School District #2
Mattoon, IL 61938
Award amount: $805,786
Mattoon Community Unit School District #2 reported that it used its Recovery Act IDEA award to implement a vocational program for IDEA students in high school and retain staff who work with IDEA students. These funds benefited all students in the district, which serves about 3,300 students, including approximately 700 IDEA students. Specifically, the funds were used to retain and hire staff who work with IDEA students, as well as for professional development of IDEA staff, and the purchase of some new equipment for IDEA students. As a result of these funds, officials reported that the district increased graduation rates among IDEA students.
They indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Menifee Union Elementary
Menifee, CA 92584
Award amount: $3,040,489
Menifee Union Elementary reported that it used its Recovery Act IDEA award to integrate more special education pupils into the regular curriculum. These funds supported 11 schools and 811 pupils. Specifically, the funds were used to retain staff and provide professional development for classroom management and instructional delivery to pupils. As a result of the IDEA funds, officials reported that the district was able to save 50 positions and improve learning opportunities for students. They indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Mesa Arts Academy
Mesa, AZ 85210
Award amount: $36,983
Mesa Arts Academy reported that it used its Recovery Act IDEA award for staff salaries and to purchase supplies and computer equipment to maintain or improve the quality of special education services. These funds supported one school serving approximately 230 children, including 15 special education students. Specifically, the funds were used to increase the instructional hours of the speech and special education teachers, as well as purchase supplies, assistive technology, and computer equipment. As a result of these IDEA funds, officials reported that the school increased special education students' access to resources and instruction. They indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Omaha Public Schools
Omaha, NE 68131
Award amount: $14,300,464
Omaha Public Schools reported that it used its Recovery Act IDEA award to expand early-childhood services, expand the district's data systems, increase teacher effectiveness through professional development, and undertake dropout prevention efforts. IDEA funds were also used for assistive technology and summer school expansion programs for students with special needs. These funds covered 79 schools and seven alternative programs, which serve 49,079 students. Specifically, the funds were used to implement an online assessment system to support classroom instruction, provide professional development for instructional staff, increase student support to prevent students from dropping out of school, implement an online system for Individualized Education Programs, and expand early childhood programming. As a result of these funds, officials reported that the district was able to improve scores on state reading and mathematics tests, decrease the dropout rate, increase the graduation rate, increase the number of high-need children in prekindergarten programs, and create or retain 298 jobs. They also said that these funds resulted in more learning opportunities for the students by expanding the school day and offering summer school and tutoring. Officials indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Oxnard Elementary
Oxnard, CA 93030
Award amount: $2,773,322
Oxnard Elementary reported that it used its Recovery Act IDEA award to start up a cochlear implant classroom in the district, which required special acoustics and furniture and included the hiring and training of a teacher. These funds served three students in the district and approximately six more from neighboring districts. In particular, the funds were used to create a classroom, train staff, and buy supplies.
As a result of these IDEA funds, officials reported that the district was able to provide services locally at a much reduced cost rather than sending students to an institute in Los Angeles. They also said that the district can now serve students in its own district as well as students in surrounding districts. The cochlear implant classroom will open in the 2010-2011 school year. Officials indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Pasadena Independent School District
Pasadena, TX 77502
Award amount: $10,757,671
Pasadena Independent School District reported that it used its Recovery Act IDEA award to improve and enhance programming for students with disabilities. These funds affected all schools and all special needs students (approximately 3,800) in the district. Specifically, the funds were used to retain special education staff, add support staff such as diagnosticians and transition teachers, implement data management systems for special education programs, and provide professional development for staff who work with special needs students in the areas of autism, inclusion/co-teach, and other specialized programming. As a result of these IDEA funds, officials reported that the district was able to maintain 26 support positions to improve instructional practices, resulting in improved student outcomes, and improve data integrity to meet compliance requirements. They also said that these funds resulted in improved functioning capability and skills of campus and district staff in order to build capacity to sustain improvement. Officials indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Pima Accommodation District
Tucson, AZ 85701
Award amount: $16,917
Pima Accommodation District reported that it used its Recovery Act IDEA award to provide related special education services to the new 18- to 21-year-old special education inmates at the Pima County Adult Detention Facility so that they can acquire a General Equivalency Diploma or work toward high school completion. These funds targeted 32 students in one school. Specifically, the funds were used to purchase direct and support services, buy instructional material, provide special education staff development, and provide inmates with transitional support. As a result of these funds, officials reported that the district was able to provide 4 hours of daily instruction in the adult special education classroom to inmates at the jail facility. They indicated that their Recovery Act IDEA award activities were fully completed.

Pinellas County Schools
Largo, FL 33770
Award amount: $25,539,310
Pinellas County Schools reported that it used its Recovery Act IDEA award to enhance services to students with disabilities by providing Response to Intervention/Early Intervening Services (EIS) and by providing services to private school students with disabilities. These funds supported all 122 Pinellas schools. Specifically, the funds were used to hire instructional and content coaches for RTI/EIS and social workers for counseling services for students with disabilities as well as to hire Exceptional Student Education teachers to serve private school students with disabilities. Funds were also used to provide teacher training and to provide instructional materials and technology for students with disabilities, students requiring RTI/EIS, and private school students with disabilities.
As a result of these IDEA funds, officials reported that the district was able to improve achievement for students with disabilities and students requiring RTI/EIS. Officials indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Puritas Community School
Cleveland, OH 44135
Award amount: $41,797
Puritas Community School reported that it used its Recovery Act IDEA award to provide ongoing high-quality special education services to students who need assistance in their educational processes and experiences. These funds covered 13 special needs students out of a total student population of 196. Specifically, the funds were used to retain staff. As a result of these IDEA funds, officials reported that the school was able to retain 0.25 full-time-equivalent staff to maintain its special education program for all students. They indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Sacramento City Unified
Sacramento, CA 95824
Award amount: $10,069,615
Sacramento City Unified reported that it used its Recovery Act IDEA award to retain instructional staff, provide professional development for special education staff, and upgrade facilities to include an occupational therapy clinic at a school that serves a large number of special education students. These funds served approximately 2,000 students with special needs in the district. Specifically, the funds were used to retain special education staff, provide professional development for instructional staff, and make school facility upgrades so that students could receive occupational therapy services while at school rather than being bussed to another location. As a result of these IDEA funds, officials reported that the district was able to increase academic proficiency in California Standards Tests (CST) and retain approximately 10 instructional positions. In addition, the facility upgrade allowed students with special needs to receive services during their school day, thereby reducing disruptions to their education. They indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Salt Lake District
Salt Lake City, UT 84111
Award amount: $5,757,525
Salt Lake City School District officials reported that the district used its Recovery Act IDEA award for five main purposes. First, they used the funds to develop and expand their capacity to collect and use data for student achievement and progress monitoring in 28 elementary schools and 5 middle schools as a way to improve teaching and learning. To that end, they retained a special education supervisor to oversee implementation of ARRA-funded activities; hired a part-time data specialist to support data collection, analysis, and reporting requirements; contracted a parent liaison to help parents understand the use of data for decision making; and purchased laptops and personal digital assistants for approximately 56 itinerant support staff (e.g., occupational therapists, school psychologists) who are responsible for monitoring student progress. Second, officials reported that they used the funds to obtain and upgrade assistive technology devices for approximately 375 students in special classes at 22 elementary and three middle schools.
Specifically, they purchased computers, monitors, and technology assistance for the academic and behavior support classrooms in the elementary schools and purchased computers, applications, and site licenses for reading, math, and science instruction in the middle schools. In addition, officials provided training for approximately 140 special and regular education teachers in using the technology to improve instruction and monitor student progress. Third, Salt Lake City School District officials used the funds to hire high school transition and compliance coaches at the district's four high schools to work with employers in the community, postsecondary schools, and 44 high school special education teachers to develop appropriate transitions for approximately 750 high school and post-high special education students. For all high school special education teachers, the district used the funds for professional development on transition issues. The district also has plans to hire a certified teacher to support students in acquiring adult living skills and participating in adult basic education classes; hire eight job coaches to support students in integrated job settings; and contract with the University of Utah special education department for job coach training and monitoring of student job training outcomes. Fourth, officials reported that they used the funds to provide intensive districtwide professional development for 75 special education and regular education teachers at 28 elementary schools and 5 middle schools, focusing on scaling up evidence-based, schoolwide strategies to improve behavioral outcomes, interventions, and supports for students with disabilities. Furthermore, the district hired 2.5 licensed clinical social workers for the middle schools and 3 behavior staff to support schools with intervention plans for students, implement least restrictive behavioral interventions, and train staff in behavior de-escalation. Finally, Salt Lake City School District used its Recovery Act IDEA award to improve language arts, math, and science instruction and student outcomes by providing intensive districtwide professional development for 130 special education and regular education teachers in evidence-based, schoolwide strategies to improve outcomes for students with disabilities. To assist teachers, district officials hired 6 special education interns to support selected elementary schools in early intervening services in reading and math; 2.5 elementary special education academic coaches to improve student achievement in elementary academic support and behavior support classes; 2 elementary and middle school special education academic coaches to improve student achievement in elementary and middle school functional academic classes; 3.5 speech-language pathologists to support elementary schools in literacy acquisition programming; and a 0.5 autism specialist and 2 autism coaches to support students with high-functioning autism. Each school also received supplemental and intensive intervention curricula to support students with disabilities. In addition, the district plans to purchase research-based curriculum for language arts, math, and science for middle school and high school special education classrooms and professional development on effective instruction for special and general education teachers.
Overall, Salt Lake City School District officials reported that through the use of the Recovery Act IDEA funds, they have created or retained a total of 38 jobs and obtained technology and software for special education staff, classrooms, and students to use for student record keeping, teaching, and learning. They expect to involve other stakeholders (e.g., parents, universities) in identifying appropriate outcomes for students with disabilities; increase the graduation rate and reduce the dropout rate of students with disabilities; prepare students with disabilities for adult-oriented outcomes; increase the capacity of special education and general education teachers to teach and accommodate (both academically and behaviorally) students with disabilities; design more efficient systems and processes to improve compliance and to meet the state performance plan indicators; and increase grade-level achievement of students with disabilities in language arts and math. Officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

San Antonio Independent School District
San Antonio, TX 78210
Award amount: $2,144,674
San Antonio Independent School District reported that it used its Recovery Act IDEA award to enhance each special education program to support activities that will improve results for students. These funds served over 5,900 students at over 90 campuses across the district. Specifically, the funds were used on a full range of activities, including professional development, computer software packages for instructional programs and student data management, upgrade of technology equipment in classrooms serving special education programs, purchase of assistive technology, and parent involvement activities. As a result of these funds, officials reported that the district was able to improve student achievement and performance, resulting in reduced dropouts, higher graduation rates, and improved postsecondary student outcomes, as well as retain teaching and other instructional support staff positions. They indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

San Dieguito Union High
Encinitas, CA 92024
Award amount: $1,431,581
San Dieguito Union High School District officials reported that the district used its Recovery Act IDEA award for four main activities. First, officials told us that they used the funds to train 25 staff in writing transition plans for students who have individualized education programs (IEPs) and in working with autistic students. As a result, officials could ensure staff members' compliance with writing transition plans and decrease the use of nonpublic agencies for students with autism. Second, officials reported that they used the funds for special education students at eight schools by assisting them with making up course credits and implementing a literacy program called Read 180. Officials told us that they were able to decrease the number of special education students who are credit deficient in the twelfth grade and improve their reading success. Third, the officials told us that they used $577,456 of the funds to reduce contributions from the district's general fund. They could therefore pay for nonpublic schools and agencies that provide services for students with special needs. Last, officials reported that they replaced seven older buses that serve 63 students in need of transportation per their IEPs.
Specifically, the buses that were replaced were 1988-1995 models that had between 250,000 and 399,000 miles. The replacement buses went into service in May 2010 and have allowed San Dieguito Union High to increase the reliability of its transportation. Officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

San Juan Unified
Carmichael, CA 95609
Award amount: $9,330,839
San Juan Unified reported that it used its Recovery Act IDEA award to focus on instruction and best practices for all administrators and teachers, from pre-K to grade 12. These funds affected 1,000 IDEA students in 50 school sites, and were used to hire two reading coaches and two behavior specialists. Additionally, 100 teachers, 30 psychologists, and 20 administrators participated in intensive behavior training. Specifically, the funds were used to implement an intensive reading intervention for IDEA students, train staff to build positive behavior interventions, replace and upgrade older computers for 11 psychologists and five other special education managers, and establish a preschool special education class equipped with preschool furniture and playground equipment for students with disabilities. As a result of these IDEA funds, officials reported that the district was able to develop reading skills for IDEA students, implement positive behavior interventions in schools that dramatically reduced school suspensions in some schools, and improve preschool programs. They indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Scholarts Preparatory School
Columbus, OH 43236
Award amount: $72,409
Scholarts Preparatory School reported that it used its Recovery Act IDEA funds for professional development as well as technology purchases. The funds supported professional development in data-driven instruction and assessment planning for two school administrators and 15 instructional staff, including special education teachers. Overall, 180 students in the school were affected by the funds, including 110 special education students. Specifically, the school used its Recovery Act IDEA funds to pay salaries for special education support services such as tutoring, psychologists, social workers, and transportation. The funds were also used to purchase SmartBoards and associated technology for schoolwide use that the school could not afford in the past. As a result of these funds, officials reported that the school was able to pay approximately five to six teacher salaries, increase its professional development program, and enhance classroom instruction through the use of technology. School officials said they also hoped to increase standardized test scores. They indicated that their Recovery Act IDEA award activities were completed.

Sea Girt Borough Public Schools
Sea Girt, NJ 08750
Award amount: $43,835
Sea Girt Borough Public Schools reported that it used its Recovery Act IDEA award to provide all special education teachers with a full range of multisensory approaches to improve the teaching of reading. It also offset unbudgeted costs due to specific individualized education program (IEP) demands. These funds supported approximately 10 to 15 percent of the single-school district's 180 students. Specifically, the funds were used to provide professional development (i.e., Wilson Training and instructional materials) and make capital improvements to the classroom through installation of infrared sound field systems.
As a result of these IDEA funds, officials reported that the district was able to increase classified students' ability to perform on all academic assessments (i.e., greater reading proficiency) and continue with regular established programs to the benefit of all students. Officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

South Pointe Public Charter Middle School
Phoenix, AZ 85020
Award amount: $33,948
South Pointe Public Charter Middle School reported that it used its Recovery Act IDEA award to improve scores on standardized tests, increase special education students' access and understanding of the general education curriculum, and enhance supports and instructional modifications for special education students in the inclusive setting. These funds served over 35 special education students. Specifically, the funds were used to hire a part-time special education coordinator to enhance supports and instructional modifications, purchase instructional materials, and provide related services for special education students such as speech, physical therapy, psychological, hearing, and vision services. As a result of these funds, officials reported that the school was able to improve standardized test scores, improve dropout and graduation rates, and increase understanding of and accessibility to the general education curriculum. They indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Southwest Schools
Houston, TX 77057
Award amount: $422,874
Southwest Schools reported that it used its Recovery Act IDEA award to increase instructional staff and provide additional related services. These funds served 285 IDEA students across five campuses. Specifically, the funds were used to increase instructional staff by hiring one educational diagnostician and one licensed specialist in school psychology; provide additional professional development for instructional staff; purchase supplemental instructional material; provide additional related services such as speech therapy, occupational therapy, and physical therapy for students with disabilities; and provide one-on-one aides for autistic students. As a result of these IDEA funds, officials reported that the district was able to improve IDEA students' performance in the classroom and on standardized tests and increase graduation rates for IDEA students. They indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Special School District
Baton Rouge, LA 70802
Award amount: $125,077
Special School District reported that it plans to use its Recovery Act IDEA award to focus on improving student performance. These funds will serve 550 special education students in 13 programs. Specifically, the funds will be used to purchase research-based, technology-rich instructional programs focused on literacy and numeracy, and provide teachers with professional development on instructional materials and strategies. As a result of these funds, officials reported that they expect improved academic achievement, especially in literacy and numeracy areas, enhanced student engagement, and teacher growth. Officials indicated that their Recovery Act IDEA award activities have not started.

Telfair County School District
McRae, GA 31055
Award amount: $334,766
Telfair County School District reported that it used its Recovery Act IDEA award to maintain a low student-teacher ratio; increase inclusion as a model for special education students; use implementation specialists in reading/English language arts, math, and technology to support best practices with teaching staff; train staff in direct instruction; and increase the use of technology in the classrooms. These funds covered all 1,800 students in the district and over 200 teachers, including 22 special education teachers and 5 Pre-K teachers. Specifically, the funds were used to hire implementation specialists in reading/English language arts, math, and technology for job-embedded training and staff development in grades K-8, as well as to initiate a specialized program to meet the needs of special learners and decrease achievement gaps. As a result of these IDEA funds, officials reported that the district was able to encourage implementation of standards-based instruction using best practices in all schools. They also said that these funds resulted in maintenance of a low student-teacher ratio, enabling the district to better support student learning, which they expect will increase the academic performance of struggling students on standardized tests. Officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Tennessee School for the Deaf
Knoxville, TN 37920
Award amount: $37,051
Tennessee School for the Deaf, a residential facility for deaf and hard-of-hearing students, reported that it used its Recovery Act IDEA award to purchase classroom supplies and two-way radios to be used by principals and other administrators in case of an emergency. These funds supported each of its three schools (an elementary, middle, and high school) and approximately 180 students from across the state of Tennessee. Specifically, the funds were used to purchase instructional materials and emergency radio equipment. As a result of these funds, officials reported that the school was able to enhance students' learning through instructional materials. Additionally, they said that the two-way radios will be used in emergency situations to relay information quickly to the school departments. Officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

The Max Charter School
Houma, LA 70364
Award amount: $22,423
The Max Charter School reported that it used its Recovery Act IDEA award to improve academic progress and standardized test scores for students with disabilities and at-risk students. These funds covered students districtwide, including approximately 65 students of the 106 enrolled at Max Charter School (61 percent of the LEA's membership). Specifically, the funds were used to hire three part-time paraprofessionals to provide small-group instruction/remediation to at-risk and learning-disabled students and to hire two instructional staff to provide after-school remediation and tutoring to at-risk and learning-disabled students. As a result of these funds, officials reported that the school was able to create five part-time instructional positions. Additionally, they said that the activities were expected to increase academic progress and proficiency and scores on standardized tests in English language arts and math for at-risk students and students with disabilities.
School officials indicated that their Recovery Act IDEA award activities were less than 50 percent completed.

Tippecanoe School Corporation
Lafayette, IN 47909
Award amount: $2,663,788
Tippecanoe School Corporation reported that it used its Recovery Act IDEA award to hire and pay staff and provide additional educational services for students with special needs and academic deficiencies. These funds were used to retain or hire staff for over 130 instructional positions working with IDEA students, allowing increased educational services to 1,563 IDEA students in the school corporation. Specifically, as part of the Greater Lafayette Area Special Services (GLASS) cooperative, Tippecanoe School Corporation worked with Lafayette School Corporation and West Lafayette School Corporation to hire additional staff to work with special needs students and students with academic needs. As a result of these funds, officials reported that the school corporation and the special education cooperative were able to preserve programs and maintain current student-teacher ratios. Officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.

Tucson Unified District
Tucson, AZ 85719
Award amount: $4,938,806
Tucson Unified District reported that it used its Recovery Act IDEA award to hire and retain staff to provide services to students with disabilities, order assistive technology, and purchase updated software for students with academic difficulties. These funds supported over 8,000 students with special needs who receive services in more than 100 schools in the district. Specifically, the funds were used to hire 15 new teachers and over 30 new paraprofessionals to work with students on a one-to-one basis. Other staff were contracted to provide therapy services and translate documents into Spanish. Additionally, devices for better movement, sight, and hearing were used to meet the adaptive needs of students. Updated software was ordered for students with academic difficulties and for better case management of these students. As a result of these IDEA funds, officials reported that they expect the district to improve academic performance and help students gain access to the general curriculum. They indicated that their Recovery Act IDEA award activities were more than half completed.

Twinsburg City Schools
Twinsburg, OH 44087
Award amount: $798,028
Twinsburg City Schools reported that it used its Recovery Act IDEA award to create staff positions and provide staff development. These funds were used in all five schools in the district. These schools together serve approximately 400 special education students. The district also used the funds to add approximately five staff positions. Specifically, the funds were used for staff development, to add two instructional staff and three instructional assistants, and to purchase technology for special education classrooms, such as SmartBoards and projectors. Additional items were purchased for the district to provide an after-school game club for students with special needs to promote peer interaction. As a result of these IDEA funds, officials reported that the district improved student achievement by reducing class size and the caseload in the special education program and provided students with special needs the same extracurricular opportunities as their peers. District officials reported that their Recovery Act IDEA award activities were less than 50 percent completed.

Waconia Public School District Waconia, MN 55387 Award amount: $696,390
Waconia Public School District reported that it used its Recovery Act IDEA award to maintain staff and its student-teacher ratio, especially in the elementary grade levels. These funds targeted two schools and approximately 178 students with special needs. Specifically, the funds were used to retain staff. As a result of these funds, officials reported that the district was able to save four instructional positions and maintain its current student-teacher ratio. They indicated that their Recovery Act IDEA award activities were less than 50 percent completed.
Wareham Public Schools Wareham, MA 02571 Award amount: $443,782
Wareham Public Schools reported that it used its Recovery Act IDEA award to improve prekindergarten and kindergarten services, decrease class size, retain staff, provide professional development, purchase instructional materials and software, and implement a new program. These funds supported approximately 600 students with special needs, in addition to approximately 345 regular education students in inclusion classrooms throughout the district’s eight schools. Specifically, the funds were used to hire an elementary school special education teacher, retain special education teachers at the middle and high school levels, provide professional development for staff who work with students with special needs, provide seed money for a new alternative placement program for behaviorally challenged special education students, and purchase instructional materials for students with special needs and an IEP software program (E-SPED). The funds were also used to decrease class size at the elementary level in inclusion programs. As a result of these funds, officials reported that the district was able to reduce district costs by implementing the new program and improve the network for data retrieval and collection. They indicated that their Recovery Act IDEA award activities were less than 50 percent completed.
Wayzata Public School District Wayzata, MN 55391 Award amount: $2,301,098
Wayzata Public School District reported that it used its Recovery Act IDEA award to provide services to students with disabilities and to prevent the need for future services by concentrating on early identification and intervention. These funds targeted 11 sites, with a total of 1,100 students benefiting directly from the funds. Specifically, the funds were used to continue to dedicate 2 percent of the district’s $100 million budget to staff development activities on a districtwide basis. As a result of these funds, officials reported that the district was able to continue staff development through education and integration, which allowed the teachers to properly identify strategies to assist students with special needs. They also said that these funds resulted in enhanced student learning. Officials indicated that their Recovery Act IDEA award activities were 50 percent or more completed.
West Salem School District West Salem, WI 54669 Award amount: $367,098
West Salem School District reported that it used its Recovery Act IDEA award to hire a staff person to help teachers improve lesson plans for special education students, purchase a software program for special education students, purchase textbooks for special education students, and purchase equipment for students with physical disabilities. The district has additional plans to remodel a classroom to improve accessibility for students with physical disabilities.
These funds served 186 special education students across the district. Specifically, the funds were used to hire an experienced special education teacher to teach general education high school teachers how to modify tests and assignments for special education students and how to address modifications and accommodations for students with IEPs; purchase a software program for special education students that enables them to follow a modified version of the general education curriculum; purchase textbooks for students with learning disabilities who read at lower reading levels; and purchase two Hoyer Lifts for students with physical disabilities in middle school. The district also has plans to remodel a classroom to increase accessibility for students with physical disabilities. As a result of these IDEA funds, officials reported that they expect reading scores to increase, the quality of instruction for students with disabilities to improve (especially in inclusion classes), and the school to have a more accessible classroom for students with physical disabilities. Officials indicated that their Recovery Act IDEA award activities were more than 50 percent completed.
The Recovery Act provides $10 billion to help local educational agencies educate disadvantaged youth by making additional funds available beyond those regularly allocated through Title I, Part A of the Elementary and Secondary Education Act of 1965, as amended (ESEA). These additional funds are to be distributed through states to LEAs using existing federal funding formulas, which target funds based on such factors as high concentrations of students from families living in poverty. In using the funds, LEAs are required to comply with current statutory and regulatory requirements and must obligate 85 percent of the funds by September 30, 2010. Education is advising LEAs to use the funds in ways that will build the agencies’ long-term capacity to serve disadvantaged youth, such as through providing professional development to teachers. Given that few descriptions fully met our transparency criteria, we administered a web-based survey to school district superintendents in the 50 states and the District of Columbia to determine how they are using Recovery Act funds. We conducted our survey between March and April 2010, with a 78 percent final weighted response rate. We selected a stratified random sample of 575 LEAs from the population of 16,065 LEAs included in our sample frame of data obtained from the Common Core of Data in 2007-2008. Of this sample, we randomly selected 150 LEAs (50 for each program) to gather illustrative information on how they used their Recovery Act funds. See appendix VII for more information on how we designed our survey. What follows are summaries of how these LEAs described their use of Recovery Act Title I, Part A funds, based on their survey responses as well as information we collected through follow-up communications.
Alamo Heights Independent School District San Antonio, TX 78209 Award amount: $181,506
Alamo Heights Independent School District reported that it used its Recovery Act Title I award to retain a teaching position and increase the effectiveness of its teachers. These funds supported one teacher who serves at-risk children in reading and math, as well as professional development for several teachers in two other schools, affecting about 100 students altogether.
The professional development was in Data Director, a software program that allows data disaggregation to better inform curricular and instructional decisions. As a result of these funds, officials reported that the district was able to save an instructional position and improve test scores of at-risk students. They indicated that their Recovery Act Title I award activities were less than 50 percent completed.
Arizona Call-a-Teen Youth Resources, Inc. Phoenix, AZ 85003 Award amount: $37,375
Arizona Call-a-Teen Youth Resources, Inc., reported that it used its Recovery Act Title I award to increase student achievement in math and reading. These funds targeted 125 students at one school. Specifically, the funds were used to retain staff, purchase instructional materials, and provide professional development. As a result of these Title I funds, officials reported that they were able to save two instructor positions. They also said these funds resulted in improved scores on standardized tests. Officials reported that their Recovery Act Title I activities were 50 percent or more completed.
Arlington Independent School District Arlington, TX 76013 Award amount: $11,345,205
Arlington Independent School District reported that it used its Recovery Act Title I award to provide Title I resources to students who attend campuses that were eligible for, but not previously served under, Title I; improve instructional practices; provide supplemental resources for students; and enhance the family involvement program. These funds supported approximately 42 Title I campuses serving about 32,000 students and families who live in Title I attendance zones. Specifically, the funds were used to hire additional curriculum specialists to work directly with teachers, a social worker to provide support for families, a Spanish language translator to meet the oral and written needs of families, and a fathers’ outreach liaison to work specifically to get more fathers involved with their children's education. The funds were also used to provide professional development for instructional staff and coaches and to purchase instructional materials and technology for classrooms. As a result of Title I funds, officials reported that the district was able to improve the achievement of students, improve classroom teaching using 21st century technology and materials, and better meet the needs of families so that the children may improve their academic achievement and attendance. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Austin Independent School District Austin, TX 78703 Award amount: $22,974,560
Austin Independent School District reported that it used its Recovery Act Title I award to support English language learners’ academic achievement and math and science achievement in elementary and middle schools. The district also used its funds to support credit recovery and dropout prevention, increase graduation rates, and provide interventions and supports for persistently low-performing schools. These funds supported over 50,000 students in 52 elementary schools, 10 middle schools, and 5 high schools. Specifically, the funds were used to hire staff and purchase supplies and materials, including computer equipment, software, and site licenses. Instructional materials and curriculum were purchased to support math and science education at the elementary and middle school levels.
Science, Technology, Engineering, and Mathematics (STEM) materials and equipment were purchased for use at the high school level. The funds were also used to provide professional development, additional intervention to struggling students, and support for parent involvement and college and career readiness. As a result of these Title I funds, officials reported that the district was able to continue full-day prekindergarten, improve scores on standardized tests, improve rigor and uniformity of math and science instruction, turn around struggling schools, and improve academic performance for English language learners. They also said that these funds resulted in the creation of 16 positions and the retention of approximately 26 positions. Officials indicated that their Recovery Act Title I award activities were less than 50 percent completed.
Beal City Public Schools Mt. Pleasant, MI 48858 Award amount: $28,009
Beal City Public Schools reported that it used its Recovery Act Title I award to purchase technology that would assist a specific group of boys who were struggling in reading. These funds targeted 30 to 40 fifth-grade boys in the district’s elementary school who, according to test scores, were lagging behind in reading. Specifically, the funds were used to purchase technology that included computers, a SmartBoard, projectors, and Kindles for intensive direct instruction in reading. As a result of these Title I funds, officials reported that the district was able to increase scores on standardized tests for the male students. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Bourbonnais School District 53 Bourbonnais, IL 60914 Award amount: $138,439
Bourbonnais School District 53 reported that it used its Recovery Act Title I award to expand the use of technology in early interventions with at-risk students. These funds supported 300 students at four schools. Specifically, the funds were used to hire extra staff for after-school and summer school programs and provide cutting-edge technology to support these programs. As a result of these Title I funds, officials reported that the district was able to decrease the number of elementary students who are currently academically at risk. They indicated that their Recovery Act Title I activities were 50 percent or more completed.
Callaway Public Schools Callaway, NE 68825 Award amount: $19,230
Callaway Public Schools reported that it used its Recovery Act Title I award to acquire technology to aid in computer-assisted instruction for Title I students. These funds supported approximately 40 students in the elementary school. Specifically, the funds were used to purchase a SmartBoard and a projector. As a result of these Title I funds, officials reported that the district was able to increase student test scores on classroom assessments, as well as standardized test scores. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Camden City Public Schools Camden, NJ 08102 Award amount: $6,397,060
Camden City Public Schools reported that it used its Recovery Act Title I award to provide professional development for instructional staff and purchase materials to implement a reading program. These funds covered all schools in the district, which includes 5 high schools, 5 middle schools, and 22 elementary schools that serve 12,068 students.
Specifically, the funds were used to provide an intensive districtwide reading program, including professional development and an intense data component that allows teachers and administrators to track the students’ progress. The funds were also used to provide additional tutoring services for those students who are failing or most at risk of failing to meet the state’s academic achievement standards. As a result of these Title I funds, officials reported that they expected improved scores on standardized state tests. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Cedar Ridge School District Newark, AR 72562 Award amount: $160,979
Cedar Ridge School District reported that it used its Recovery Act Title I award to enhance the curriculum, ensure that high-quality instruction is being delivered in every classroom, ensure that curriculum frameworks are being taught at every grade level, identify strengths and weaknesses in the curriculum, and help ensure that the curriculum is taught at the appropriate level. These funds supported approximately 852 students at two elementary schools and one junior/senior high school. Specifically, the funds were used to hire an assistant superintendent of curriculum and instruction and a resource officer. Professional development time was given to teachers to develop pacing guides, and Compass Learning software was purchased to provide assistance to teachers. As a result of these Title I funds, officials reported that they expect standardized test scores to increase. They also said that these funds helped teachers become more effective and the curriculum become more enriched. Officials indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Colon Community School District Colon, MI 49040 Award amount: $140,013
Colon Community School District reported that it used its Recovery Act Title I award to implement a Title I preschool and offer summer school for students not achieving at grade level. These funds supported 16 students in preschool and 40 students in summer school. Specifically, the funds were used to retain one staff member, hire one staff member, and purchase some instructional materials. As a result of Title I funds, officials reported that the district met the challenge of closing its socioeconomic gaps by providing preschool opportunities and by offering summer school. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Commerce City Schools Commerce, GA 30529 Award amount: $140,824
Commerce City Schools reported that it used its Recovery Act Title I award to retain personnel and provide professional development. These funds supported an elementary school and a primary school. Specifically, the funds were used to retain two teachers and one paraprofessional and to use one academic coach for professional development related to the science curriculum at the elementary school. As a result of these Title I funds, officials reported that the district was able to retain three staff positions and maintain programs. They also said that these funds allowed the district to pay for professional development. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Cotter School District Cotter, AR 72626 Award amount: $173,280
Cotter School District reported that it used its Recovery Act Title I award to lower the student-teacher ratio and enhance instructional effectiveness.
These funds supported a student population of approximately 350 at Amanda Gist Elementary, a K-6 school. Specifically, the funds were used to provide professional development to teachers, retain a licensed teacher and a paraprofessional, and purchase instructional materials for literacy and mathematics, as well as technology hardware and software for instructional use. As a result of these funds, officials reported that the district was able to save two instructional positions and provide additional instructional materials and current technology. They also said they anticipate these funds will result in increased student achievement, higher graduation rates, and greater college or technical school completion rates. Officials indicated that their Recovery Act Title I award activities were less than 50 percent completed.
Des Moines Independent Community School District Des Moines, IA 50309 Award amount: $6,550,371
Des Moines Independent Community School District reported that it used its Recovery Act Title I award to increase the number of schools receiving Title I services, to initiate a School Improvement Leader model, and to provide instructional materials in reading and math. These funds increased Title I support in 7 schools (2 high schools, 1 middle school, 4 elementary schools) serving a total of 4,000 students, supported 6 School Improvement Leaders (3 at each of 2 middle schools) serving a total of 1,000 students, and targeted more than 60 schools districtwide serving over 30,000 students. Specifically, the funds were used to retain staff at schools previously designated as Title I, hire additional staff to increase the number of schools receiving Title I services, and purchase reading and math instructional materials. As a result of these Title I funds, officials reported that the district was able to retain 11 to 14 positions, maintain its current level of Title I services, and improve achievement in reading and math. They indicated that their Recovery Act Title I award activities were less than 50 percent completed.
El Paso Independent School District El Paso, TX 79998 Award amount: $28,173,486
El Paso Independent School District reported that it used its Recovery Act Title I award to attain high student achievement, provide a challenging learning environment, and graduate mentally, emotionally, and physically healthy students who are lifetime learners. These funds supported each of the 76 Title I schoolwide campuses serving 54,195 students. Specifically, the funds were used to retain and hire staff, provide professional development activities for instructional staff, integrate instructional technology in the classroom, and purchase other instructional materials. As a result of these Title I funds, officials said that the district expects to improve scores on standardized tests, decrease the number of schools in school improvement, and increase the number of students who graduate on time ready for college or the world of work. These funds also resulted in the retention of 50 instructional positions and maintained the current student-teacher ratio. District officials reported that Recovery Act Title I award activities were less than 50 percent completed.
Escondido Union High School District Escondido, CA 92027 Award amount: $637,836
Escondido Union High School District reported that it used its Recovery Act Title I award to purchase instructional equipment. These funds targeted three comprehensive school sites that serve over 7,700 students.
Specifically, the funds were used to purchase equipment for an LCD projector installation project. Technology components added to the classrooms included computers to run software for at-risk math and reading students. In addition, the state-adopted materials had a technology component that required additional equipment for the teachers to use in classroom instruction. As a result of these Title I funds, officials reported that the district was able to upgrade its instructional technology. They indicated that their Recovery Act Title I award activities were less than 50 percent completed.
Fairland Local School District Proctorville, OH 45669 Award amount: $380,588
Fairland Local School District reported that it used its Recovery Act Title I award to create instructional positions; the district also plans to use award funds to retain instructional positions, purchase computer equipment for two elementary schools, and provide a substitute teacher for intervention services. As a result of these Title I funds, officials reported that the district has been able to create two instructional positions and reduce class size. District officials reported that their Recovery Act Title I award activities were less than 50 percent completed.
Goddard Public Schools USD 265 Goddard, KS 67052 Award amount: $203,973
Goddard Public Schools USD 265 reported that it used its Recovery Act Title I award to promote programs that help students acquire skills needed to succeed in life, provide services to students deficient in reading and math skills, and provide foundational academic skills to all students. These funds supported four elementary schools serving approximately 1,950 students. Specifically, the funds were used to retain staff. As a result of Title I funds, district officials said they were able to maintain the district’s student-teacher ratio of approximately 22 to 1 and save two teaching positions. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Green Woods Charter School Philadelphia, PA 19128 Award amount: $131,622
Green Woods Charter School reported that it used its Recovery Act Title I award to provide additional supports for at-risk students in reading and math. These funds supported a single charter school that serves approximately 220 students. Specifically, the funds were used to hire new staff, purchase instructional materials, and provide new professional development opportunities for instructional staff. Additionally, classroom libraries, communications systems for parents, and computers for classrooms were purchased. Substitutes were also provided so teachers could attend professional development, and a part-time reading specialist was hired. As a result of these Title I funds, officials reported that the school was able to improve test scores. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Greene County Tech School District Paragould, AR 72450 Award amount: $345,010
Greene County Tech School District reported that it used its Recovery Act Title I award to improve student achievement by updating technology and providing supplies. These funds supported the district’s five schools that serve approximately 3,600 students. Specifically, the funds were used to update technology and purchase supplies, which assisted with remediation efforts for state achievement tests.
As a result of these Title I funds, officials reported that the district was able to improve student achievement on the state tests and improve graduation rates. They indicated that their Recovery Act award activities were less than 50 percent completed.
Gurdon School District Gurdon, AR 71743 Award amount: $157,722
Gurdon School District reported that it used its Recovery Act Title I award to improve technology in classrooms and provide instruction for teachers. These funds targeted three schools and affected approximately 750 students. Specifically, the funds were used to retain one teacher, hire a classified instructional staff member, and purchase 56 multimedia classroom sets. As a result of these Title I funds, officials reported that the district was able to maintain its student-teacher ratio and expects student scores to increase 15 percent. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Hillsborough County Public Schools Tampa, FL 33602 Award amount: $36,169,731
Hillsborough County Public Schools reported that it used its Recovery Act Title I award to provide professional development, early intervention activities, curriculum support for academic achievement, instructional technology, and career and college readiness support. These funds supported 125 Title I schools serving approximately 80,000 students. Specifically, the funds were used to provide additional guidance services to high-need elementary schools, hire additional tutors to work with students at Levels 1 and 2 on the state assessment, establish a robotics pilot at 24 schools to support math and science at STEM feeder schools, and hire additional reading coaches at high-poverty elementary schools. Funds were also provided to participating private schools to support additional services to Title I-eligible students. Title I funds were also used to provide additional performance pay at the district's highest-poverty schools in order to better recruit and retain instructional staff, provide professional development in content and pedagogical areas for teachers, upgrade instructional technology and hardware, establish a Parent Involvement Pilot in the district's urban core to better engage parents in the education of their children, and purchase instructional materials to support early childhood programs. As a result of these Title I funds, officials reported that the district was able to improve academic achievement for students on state-administered assessments. They indicated that their Recovery Act Title I activities were 50 percent or more completed.
Imagine Charter Elementary At Camelback, Inc. Phoenix, AZ 85053 Award amount: $57,864
Imagine Charter Elementary At Camelback, Inc., reported that it used its Recovery Act Title I award to install technology to aid in tutoring activities for at-risk students. These funds targeted 60-70 students per week at one location. Specifically, the funds were used to install six interactive whiteboards in Title I pull-out rooms and Title I tutoring rooms. The funds were also used to provide professional development for teachers providing the tutoring services. As a result of these Title I funds, officials reported that they hope to see an increase in their standardized test scores. They indicated that their Recovery Act Title I award activities were fully completed.
Integrated Design Electronics Academy Washington, DC 20019 Award amount: $228,868
Integrated Design Electronics Academy reported that it used its Recovery Act Title I award to retain staff. These funds targeted one school with 450 students. Specifically, the funds were used to retain teachers. As a result of these Title I funds, officials reported that the school was able to save 12 instructional positions and maintain the current student-teacher ratio. They also said that the funds resulted in improved scores on standardized tests and increased graduation rates. Officials indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Irvington Community School Indianapolis, IN 46219 Award amount: $251,501
Irvington Community School reported that it used its Recovery Act Title I award to improve student achievement in computation skills, comprehension of nonfiction texts, and writing skills, and to provide intensive, targeted interventions to students in all grade levels in order to improve achievement in all academic areas. These funds supported four staff positions at the K-8 building, which serves approximately 420 students, and one staff position at the high school, which serves approximately 280 students. Specifically, the funds were used to retain two current staff members, ICS’s Lead Teacher and Math Coach, and to hire three new staff members: a math aide, a literacy aide, and an aide at the high school. These Title I funds were also used to support programs and services including parental involvement, professional conferences for teachers, assessment materials, and curriculum materials for use by the Title I team. Two “family nights” were hosted in order to get parents involved in literacy and math activities, and the Fountas and Pinnell diagnostic system was purchased to assess students’ reading abilities and plan instruction. As a result of these funds, officials reported that the school was able to implement a Response to Intervention program to address the individual needs of each student. The school also expects to increase student achievement on standardized and norm-referenced tests, improve student performance in classrooms, reduce retention rates, and achieve and maintain an acceptable graduation rate for its students. Officials also said that use of the funds is intended to improve teacher performance, provide teachers with a variety of instructional strategies for differentiated instruction, provide parents with additional resources for supporting their children’s education at home, and bridge the home-school connection. Officials indicated that their Recovery Act Title I award activities were less than 50 percent completed.
Jefferson County Public Schools Louisville, KY 40232 Award amount: $33,736,253
Jefferson County Public Schools reported that it used its Recovery Act Title I award to move students to proficiency in reading and math. These funds supported 97 schools serving approximately 49,000 students. Specifically, the funds were used to retain full-time staff and/or hire part-time staff; purchase technology items, such as SmartBoards; and purchase books or other reading and math items. As a result of these funds, officials reported that the district was able to save over 31 teacher positions and 11 instructional assistant positions and hire over 44 retired teachers to work with students in small groups. They also expect scores on standardized tests to improve.
Officials indicated that their Recovery Act Title I award activities were less than 50 percent completed.
Katy Independent School District Katy, TX 77492 Award amount: $2,914,931
Katy Independent School District reported that it used its Recovery Act Title I award to improve student performance, build capacity of instructional staff, enhance digital learning, and maintain high-quality English as a Second Language/bilingual staff and a safe, comfortable learning environment. These funds supported 20 campuses that serve approximately 13,429 students. Specifically, the funds were used to provide professional development and purchase technology and instructional materials. The funds were also used for supplemental tutorials, parent involvement activities, and staff retention. As a result of these Title I funds, officials said that the district was able to increase student achievement and parent involvement. They also said that these funds resulted in highly effective teachers, more teachers trained to work with English language learners, increased use of digital tools to enhance instruction, and an improvement in program effectiveness and the quality of services. Officials indicated that their Recovery Act Title I award activities were less than 50 percent completed.
KIPP Austin Public Schools, Inc. Austin, TX 78724 Award amount: $154,743
KIPP Austin Public Schools, Inc., reported that it used its Recovery Act Title I award to add staff, provide professional development, and purchase technology. These funds supported three schools and approximately 630 students. Specifically, the funds were used to create two instructional coaching positions in math and science and purchase a new software platform for collecting and analyzing student data. As a result of these Title I funds, officials reported that the school was better able to access data. They also said these funds resulted in improved results on state and national assessments. Officials indicated that their Recovery Act Title I award activities were less than 50 percent completed.
Lakeview Community Schools Columbus, NE 68601 Award amount: $65,274
Lakeview Community Schools reported that it used its Recovery Act Title I award to retain the reading coach position in the district. These funds supported the reading coach, who serves both elementary schools and approximately 300 students. Specifically, the funds were used to retain the reading coach at the elementary level. As a result of these funds, officials reported that the district was able to maintain and improve reading skills for all students, especially those who are English language learners. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Legacy Education Group Mesa, AZ 85207 Award amount: $56,622
Legacy Education Group reported that it used its Recovery Act Title I award to increase technology and classroom teaching materials. These funds supported one K-8 charter school. Specifically, the funds were used to create a position that is responsible for data-driven decision-making processes. As a result of these funds, officials reported that the school was able to improve student performance. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Medford School District 549C Medford, OR 97501 Award amount: $2,185,314
Medford School District 549C reported that it used its Recovery Act Title I award to retain teaching staff and other resources to continue supporting children's educational programs and needs. These funds supported seven elementary schools in the district serving approximately 3,500 students. Specifically, the funds were used to retain teaching personnel. As a result of these funds, officials reported that the district was able to retain approximately 77 full-time-equivalent positions, roughly maintain student-teacher ratios, and limit the number of budget-cut days for the 2010 school year. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Memphis City School District Memphis, TN 38112 Award amount: $57,244,262
Memphis City School District reported that it used its Recovery Act Title I award to improve academic achievement. These funds supported approximately 100,000 students in over 175 schools. Specifically, the funds were used for intervention initiatives, to retain and hire staff, provide professional development for instructional staff, and purchase student instructional materials for project-based learning. As a result of these funds, officials reported that the district was able to retain more than 180 pre-K positions and more than 35 district-level instructional support positions. They also said that the funds resulted in improved student scores on standardized tests by adding more than 120 staff positions for academic intervention. District officials reported that their Recovery Act Title I activities were less than 50 percent completed.
Milford Exempted Village School District Milford, OH 45150 Award amount: $346,795
Milford Exempted Village School District reported that it used its Recovery Act Title I award to improve student achievement, particularly in reading and math. These funds supported approximately 1,000 at-risk students, but could affect all 6,400 students in the district. Specifically, the funds were used to add instructional staff and purchase materials and software for the at-risk students. The funds were also used to provide professional development in reading and math strategies. As a result of these funds, officials reported that the district was able to create one part-time reading coach position and four part-time teacher positions. They also said they expect improvement in achievement scores for all student subgroups in reading and math. They indicated that their Recovery Act Title I award activities were less than 50 percent completed.
Milford School District Milford, CT 06460 Award amount: $377,262
Milford School District reported that it used its Recovery Act Title I award to enhance student achievement at the middle school level in math and English. These funds supported approximately 10 percent of the student enrollment. Specifically, the funds were used to hire morning and afternoon staff to provide math and English instruction through morning and afternoon programs to students who were not proficient. As a result of Title I funds, officials reported that they expect results on the Connecticut Mastery Test to improve, with more students achieving proficiency. District officials indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Muncie Community Schools Muncie, IN 47304 Award amount: $2,496,075
Muncie Community Schools reported that it used its Recovery Act Title I award to save and create staff positions, purchase technology, and fund professional development. These funds supported seven elementary schools serving approximately 2,940 students in grades K through 5. Specifically, the funds were used for professional development and supplies to implement the school improvement initiative. Funds were also used to purchase technology, such as computers and SmartBoards. As a result of these funds, officials reported that the district was able to save and create a total of four positions that include an interventionist and three data coaches. They also said that the funds resulted in improved student achievement. Officials indicated that their Recovery Act Title I award activities were less than 50 percent completed.
Neenah School Neenah, WI 54956 Award amount: $376,149
Neenah School reported that it used its Recovery Act Title I award to provide professional development for teaching staff, create an instructional coach position, provide time for staff to analyze data, and hire a facilitator to assist in the analysis. These funds targeted six campuses, but because of the nature of the fund use, all or nearly all 6,500 students in the district were directly or indirectly affected. Specifically, the funds were used to create one Response to Intervention (RTI) instructional coach position, provide RTI professional development for instructional staff, and contract with a personal services facilitator to help with the data analysis and interpretation. As a result of these funds, officials reported that the district was able to improve its approaches to and techniques for teaching, which should have a positive impact on student achievement. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
New Foundations Charter School Philadelphia, PA 19136 Award amount: $401,559
New Foundations Charter School reported that it used its Recovery Act Title I award to improve science, technology, and special education. These funds supported approximately 575 students at one school. Specifically, the funds were used to purchase technology, such as SmartBoards for classrooms, graphing calculators, and a software program called Read 180. Additionally, the funds were used to hire technology support personnel, provide professional development, and purchase instructional materials, such as FOSS (Full Option Science System) science materials. As a result of these funds, officials reported that the school was able to improve student outcomes on standardized tests and hire one technology support staff member. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Northeastern Clinton Central School District Champlain, NY 12919 Award amount: $119,554
Northeastern Clinton Central School District reported that it used its Recovery Act Title I award to improve literacy instruction and enhance state test results. These funds supported three schools in a 1,400-student district. Specifically, the funds were used to retain staff. As a result of these funds, officials reported that the district was able to create a new position when a current employee became the literacy coach and hire a replacement. They indicated that their Recovery Act Title I award activities were less than 50 percent completed.
Orange County Public Schools Orlando, FL 32801 Award amount: $29,879,628
Orange County Public Schools reported that it used its Recovery Act Title I award to provide supplemental services to students to ensure they make continuous academic improvement. These funds supported all 70 Title I schools and approximately 42,000 students. Specifically, the funds were used to retain reading coaches at all 70 schools, 50 math and science coaches, 19 social workers, 11 Student Assistance Family Empowerment coordinators at middle schools, and a guidance counselor at the Juvenile Assessment Center. As a result of these funds, officials reported that the district expects increased student performance on standardized reading, math, and science exams and anticipates improved graduation rates. They indicated that their Recovery Act Title I award activities were fully completed.
Paintsville Independent Schools Paintsville, KY 41240 Award amount: $240,013
Paintsville Independent Schools reported that it used its Recovery Act Title I award to improve student services and instruction. These funds supported one school with 400 students. Specifically, the funds were used to retain staff and programs, add technology to classrooms, and provide professional development to staff. As a result of these Title I funds, officials said the district was able to save four Title I teacher positions, improve instruction, and improve student academic results. District officials indicated that their Recovery Act Title I award activities were 50 percent or more completed.
San Antonio Can! High School Dallas, TX 75208 Award amount: $2,099,018
San Antonio Can! High School reported that it used its Recovery Act Title I award to retain instructional positions schoolwide that would have been lost without these funds. Specifically, the funds were used to retain one instructional staff position. As a result of these Title I funds, officials reported that the school was able to maintain its low student-teacher ratio and increase the number of graduates by 12. They also said that they are expecting additional graduates after the July Texas Assessment of Knowledge and Skills (TAKS) administration. They indicated that their Recovery Act Title I award activities were less than 50 percent completed.
San Leandro Unified School District San Leandro, CA 94579 Award amount: $607,453
San Leandro Unified School District reported that it used its Recovery Act Title I award to hire staff, provide intervention opportunities, purchase materials and equipment to accelerate support for student learning, and fund programs to increase Adequate Yearly Progress in all significant subgroups. These funds supported approximately 1,300 Title I students at five elementary sites. Specifically, the funds were used to retain and hire staff, provide professional development, and purchase instructional materials. As a result of these funds, officials said the district was able to increase standardized test scores. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Santa Ana Unified School District Santa Ana, CA 92701 Award amount: $11,429,961
Santa Ana Unified School District reported that it used its Recovery Act Title I award to maintain positions and staff development programs. These funds supported 62 schools with approximately 54,000 students.
Specifically, the funds were used to retain staff, both certificated and classified, and to pay the salaries of teachers on special assignments who support teachers through staff development, demonstration lessons, and the coordination of instructional materials. As a result of these Title I funds, officials reported that the district was able to maintain 62 special assignment teachers. They indicated that their Recovery Act Title I award activities were fully completed.
Wiggins School District RE-50J Wiggins, CO 80654 Award amount: $57,733
Wiggins School District RE-50J reported that it used its Recovery Act Title I award to hire a math coach. These funds supported 55 students in the middle school in addition to the teachers the math coach worked with. As a result of these funds, officials reported that they expect to see student math test scores improve. They indicated that their Recovery Act Title I activities were more than 50 percent completed.
Scottsdale Unified District Phoenix, AZ 85018 Award amount: $2,352,308
Scottsdale Unified District reported that it used its Recovery Act Title I award to strategically fund positions, activities, and items that will help all students to improve academically. These funds supported over 5,500 Title I students in seven schoolwide and two targeted Title I programs. Specifically, the funds were used to fund academic intervention specialists at several Title I schools, provide funding for items needed to expand Title I pre-K programs, and purchase instructional materials, supplies, and software to support interventions at Title I schools. Funds were also used to support ongoing professional development by funding instructional coaches at several Title I schools and by funding the registration and travel fees associated with Title I teachers and administrators attending professional development workshops. As a result of these funds, officials reported they expect to see improvement in student academic achievement and increases in student state test scores. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Somerset Independent Schools Somerset, KY 42502 Award amount: $427,661
Somerset Independent Schools reported that it used its Recovery Act Title I award to serve students in middle grades, where services were not provided previously. These funds targeted one middle school that serves approximately 350 students. Specifically, the funds were used to retain a math resource teacher, a reading resource teacher, and an instructional assistant and to provide Response to Intervention (RTI) services. As a result of these funds, officials reported that the district was able to save three positions that would have been lost. They also said they expect an increase in scores on state tests. Officials indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Summit Academy of Alt Learners Akron, OH 44305 Award amount: $59,898
Summit Academy of Alt Learners reported that it used its Recovery Act Title I award to upgrade its instructional materials. These funds covered a complete replacement and replenishment of instructional materials for all learners. Specifically, the funds were used to purchase new literacy, math, science, and social studies materials, which are fully integrated with computer support.
As a result of these Title I funds, officials reported that they expect to see improved scores on state tests, improved attainment of individualized education program (IEP) goals, and more engagement on the part of their reluctant learners. School officials reported that their Recovery Act Title I award activities were 50 percent or more completed.
Susquehanna Township School District Harrisburg, PA 17109 Award amount: $225,856
Susquehanna Township School District reported that it used its Recovery Act Title I award to purchase new materials, add additional courses, and provide staff development. These funds supported two schools with a total of 112 teachers and 1,257 students. Specifically, the funds were used to provide additional staff development to the teachers in positive discipline and instructional strategies. As a result of these funds, officials reported that the district was able to improve scores on standardized tests, decrease problems with student discipline, and increase student attendance. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Texas School for the Deaf Austin, TX 78704 Award amount: $72,743
Texas School for the Deaf reported that it used its Recovery Act Title I award to enhance instructional and student support services for its deaf and hard-of-hearing students, increase the number of parents and teachers who have access to school data, and increase the number of teachers receiving professional development. These funds covered 165 professional development workshops for teachers and targeted the parents of the 552 students and 427 teachers and staff. Specifically, the funds were used to provide additional assistive technology equipment, such as adaptive keyboards and touch-screen monitors, as well as graphing calculators for high school students, and instructional hardware and software. The school has also purchased internal student information system software for individualized education program management. As a result of these funds, officials reported that the school was able to improve parent and staff access to instructional information and increase the number of highly qualified staff. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Volusia County Schools Deland, FL 32720 Award amount: $15,267,330
Volusia County Schools reported that it used its Recovery Act Title I award to increase the capacity for schools to close the achievement gap between subgroups. These funds supported all students at 65 schools consisting of 37 elementary schools, 9 middle schools, 1 middle/high school, 3 high schools, 11 alternative schools, and 4 charter schools. Specifically, the funds were used for academic coaches for staff development on best practices identified by the state, the expansion of the AVID (Advancement Via Individual Determination) Program in secondary schools, dedicated teaching professionals for intervention and intensive instruction for low-performing students, expanded classroom libraries, and supplemental educational materials to enhance the core instruction. As a result of these funds, officials reported that the district was able to retain or create 427 jobs to direct every AVID student toward the appropriate path to graduation. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
Yuma 1 School District Yuma, CO 80759 Award amount: $97,899
Yuma 1 School District reported that it used its Recovery Act Title I award for teacher retention and the purchase of intervention materials and computers. These funds were spread across three schools serving approximately 350 students who had poor math or reading scores. Specifically, the funds were used to retain an intervention teacher and purchase two computers. As a result of these Title I funds, officials reported that the district was able to retain one intervention specialist position and prevent larger intervention class sizes at the middle school. They indicated that their Recovery Act Title I award activities were 50 percent or more completed.
To understand how the Office of Management and Budget (OMB) and Education facilitated implementation of Recovery Act requirements for recipients to describe the use of funds, we reviewed the Act for reporting requirements. We also reviewed reporting guidance established by OMB, the Recovery Board, and any supplemental guidance and technical assistance developed by Education for the three programs covered in our review. We met with OMB, Education, and Recovery Board officials to gain an understanding of the reporting requirements and systems. To assess the extent to which descriptions of awards transparently described how funds were being used, we used a GAO transparency assessment methodology developed for our May 2010 report on Recovery Act transparency. This assessment was based on the requirements of the Recovery Act; OMB’s guidance, including OMB’s Recipient Reporting Data Model; the Federal Funding Accountability and Transparency Act of 2006; and professional judgment. We considered descriptions of awards transparent if they conveyed, in a manner understandable to the general public, a basic understanding of the activities to be carried out and the expected outcomes. This effort was meant to be an assessment of transparency only with regard to the specific reporting fields we reviewed, not to Recovery.gov as a whole or to the Administration’s efforts to make this information available frequently and in a timely manner. In assessing transparency, we reviewed all prime recipient award records on Recovery.gov as of April 30, 2010, for the three education programs covered in this review. Similar to the transparency review we conducted in May 2010, we reviewed the required fields on Recovery.gov that describe the uses of Recovery Act funds, including project name, award description, and quarterly activities/project description. In addition to these fields, we reviewed the description of jobs created field. For this field, prime recipients were advised by Education to briefly describe the types of jobs created or retained. Education officials told us that this field may contain important information that would help the public understand how states are using their Recovery Act funds. In addition, as reported in our December 2009 bimonthly report, we found that creating and retaining jobs was the top use of funds for all three of the programs we reviewed. Finally, because the education programs in our review provide states with formula grants that state education agencies (SEAs) pass through to LEAs, we also reviewed the number, location, and award amount of subawards reported on Recovery.gov by prime recipients.
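The field-level review just described can be illustrated with a short sketch. The following Python fragment is a minimal illustration only, not the actual procedure GAO used: the file names and column names (for example, award_id, quarterly_activities, jobs_created_description, prime_award_id, and subaward_amount) are hypothetical placeholders rather than Recovery.gov’s real field names.

import csv
from collections import defaultdict

# Narrative fields reviewed for each prime recipient award record
# (the column names here are assumptions, not the actual Recovery.gov layout).
NARRATIVE_FIELDS = ["project_name", "award_description",
                    "quarterly_activities", "jobs_created_description"]

def load_award_narratives(path):
    """Return one record per prime award containing only the reviewed narrative fields."""
    with open(path, newline="", encoding="utf-8") as f:
        return [{field: (row.get(field) or "").strip()
                 for field in ["award_id"] + NARRATIVE_FIELDS}
                for row in csv.DictReader(f)]

def summarize_subawards(path):
    """Tally the number and total dollar amount of subawards reported under each prime award."""
    counts = defaultdict(int)
    totals = defaultdict(float)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["prime_award_id"]] += 1
            totals[row["prime_award_id"]] += float(row.get("subaward_amount") or 0)
    return counts, totals

if __name__ == "__main__":
    awards = load_award_narratives("prime_awards.csv")      # hypothetical export
    counts, totals = summarize_subawards("subawards.csv")   # hypothetical export
    for award in awards:
        key = award["award_id"]
        print(key, counts.get(key, 0), round(totals.get(key, 0.0), 2))

A review team could then read the narrative fields for each record while checking the reported subaward counts and totals against the prime award information.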
To apply our transparency criteria, we determined whether the information on Recovery.gov contained the following specific attributes: general purpose of the award (e.g., retaining funding for K-12 schools), nature of activities being conducted (e.g., purchases of educational technology and training of instructional support staff), location (where award activities are being conducted; e.g., school district or city), cost (amount of funds used for award activities), status (percentage complete), outcome (what is expected to be achieved; e.g., increased student achievement reflected by higher test scores), and scope (i.e., number of schools or students covered by project). Using these seven attributes and our professional judgment, we assessed information in the selected data fields for understandability, clarity, and completeness. Two analysts independently reviewed information on each award from the selected fields and then compared results to reach a consensus on whether the description fully met, significantly met, partially met, or did not meet the transparency criteria. If they could not agree, a third analyst reviewed the award information without regard to the original determinations and made a deciding assessment. Descriptions that were understandable, clear, and complete met our transparency criteria. Descriptions that contained information for almost all the attributes cited above (purpose, nature of activities, location, and so on) “significantly met” our transparency criteria, while those that contained some information were considered to “partially meet” our transparency criteria. Descriptions that contained little or no information did not meet our transparency criteria. Finally, for the recipient reports we reviewed, we performed a number of electronic edit checks on the awards for the prime recipients, including any associated subrecipients, to determine whether there were possible anomalies in the award information. We also discussed data reliability issues with OMB and Education to ensure data quality.
In addition to the review described above, we met with federal officials and state and local officials responsible for recipient reporting in 15 states and the District of Columbia included in our bimonthly review to discuss the procedures for compiling and reporting information on Recovery Act funds and how information on awards is made available to the public.
To obtain national-level information for our bimonthly review on how Recovery Act SFSF education stabilization funds; ESEA Title I, Part A funds; and IDEA Part B for school-aged children funds were used at the local level, we designed and administered a Web-based survey of local educational agencies (LEAs) in the 50 states and the District of Columbia. We surveyed school district superintendents across the country to learn how Recovery Act funding was used and what impact these funds had on school districts. Given that few descriptions fully met our transparency criteria, we included on this survey several questions related to how LEAs were using funds from these three programs. We conducted our survey between March and April 2010, with a 78 percent final weighted response rate. We selected a stratified random sample of 575 LEAs from the population of 16,065 LEAs included in our sample frame of LEAs obtained from Education’s Common Core of Data in 2007-2008. We selected a nongeneralizable subsample of 50 LEAs per education program we reviewed (150 LEAs total) to provide illustrative information on how LEAs are using their Recovery Act funds.
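Returning to the attribute-based assessment described above, the following is a minimal sketch of how such a scoring could be tabulated, written in Python for illustration. It is not GAO's actual decision tool: the assessment relied on professional judgment, and the numeric cutoffs used below for "almost all" and "some" attributes are assumptions introduced only to make the example concrete.

# The seven attributes reviewed for each award description.
ATTRIBUTES = {"purpose", "activities", "location", "cost", "status", "outcome", "scope"}

def categorize(attributes_present):
    """Map the attributes conveyed by a description to an assessment category.
    The numeric cutoffs below are illustrative assumptions, not GAO's actual rules."""
    count = len(set(attributes_present) & ATTRIBUTES)
    if count == len(ATTRIBUTES):
        return "fully met"
    if count >= 5:            # "almost all" attributes (assumed cutoff)
        return "significantly met"
    if count >= 2:            # "some" information (assumed cutoff)
        return "partially met"
    return "did not meet"     # little or no information

def reconcile(first_review, second_review, third_review=None):
    """Two analysts review independently; a deciding third review settles any disagreement."""
    if first_review == second_review:
        return first_review
    if third_review is None:
        raise ValueError("Analysts disagree; a deciding third review is required.")
    return third_review

# Example: a description that names only the purpose, activities, and location.
print(categorize({"purpose", "activities", "location"}))                 # partially met
print(reconcile("partially met", "significantly met", "partially met"))  # partially met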
We took steps to minimize nonsampling errors by pretesting the survey instrument with officials in 5 LEAs in January and February 2010. We did not determine whether federal agencies or prime recipients selected the awards discussed in this report to ensure that the awards met the requirements of the Act or whether the recipients met the Act’s eligibility requirements. We conducted this performance audit from February 2010 through July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As part of our work, we assessed the reliability of certain Recovery Act data that were pertinent to our effort. We determined that the data elements were sufficiently reliable for our purposes.

Cornelia Ashby (202) 512-7215 or ashbyc@gao.gov. James Ashley, Edward Bodine, Karen Brown, Jessica Botsford, Amy Buck, Karen Febey, Hedieh Fusfield, Alexander Galuten, Bryon Gordon, Sonya Harmeyer, Sheila McCoy, Jean McSween, Elizabeth Morrison, James Rebbe, Catherine Roark, Crystal Robinson, Beverly Ross, Susan Sachs, Michelle Verbrugge, Charles Willson, and Sarah Wood made significant contributions to this report.

Recovery Act: Increasing the Public’s Understanding of What Funds Are Being Spent on and What Outcomes Are Expected, GAO-10-581 (Washington, D.C.: May 27, 2010).
Electronic Government: Implementation of the Federal Funding Accountability and Transparency Act of 2006, GAO-10-365 (Washington, D.C.: Mar. 12, 2010).
Recovery Act: States’ and Localities’ Uses of Funds and Actions Needed to Address Implementation Challenges and Bolster Accountability, GAO-10-604 (Washington, D.C.: May 26, 2010).
Recovery Act: States’ and Localities’ Uses of Funds and Actions Needed to Address Implementation Challenges and Bolster Accountability (Appendixes), GAO-10-605SP (Washington, D.C.: May 26, 2010).
Recovery Act: One Year Later, States’ and Localities’ Uses of Funds and Opportunities to Strengthen Accountability, GAO-10-437 (Washington, D.C.: Mar. 3, 2010).
Recovery Act: Status of States’ and Localities’ Use of Funds and Efforts to Ensure Accountability, GAO-10-231 (Washington, D.C.: Dec. 10, 2009).
Recovery Act: Status of States’ and Localities’ Use of Funds and Efforts to Ensure Accountability (Appendixes), GAO-10-232SP (Washington, D.C.: Dec. 10, 2009).
Recovery Act: Funds Continue to Provide Fiscal Relief to States and Localities, While Accountability and Reporting Challenges Need to Be Fully Addressed, GAO-09-1016 (Washington, D.C.: Sept. 23, 2009).
Recovery Act: Funds Continue to Provide Fiscal Relief to States and Localities, While Accountability and Reporting Challenges Need to Be Fully Addressed (Appendixes), GAO-09-1017SP (Washington, D.C.: Sept. 23, 2009).
Recovery Act: States’ and Localities’ Current and Planned Uses of Funds While Facing Fiscal Stresses, GAO-09-829 (Washington, D.C.: July 8, 2009).
Recovery Act: States’ and Localities’ Current and Planned Uses of Funds While Facing Fiscal Stresses (Appendixes), GAO-09-830SP (Washington, D.C.: July 8, 2009).
Recovery Act: States’ and Localities’ Current and Planned Uses of Funds While Facing Fiscal Stresses, GAO-09-831T (Washington, D.C.: July 8, 2009).
Recovery Act: As Initial Implementation Unfolds in States and Localities, Continued Attention to Accountability Issues Is Essential, GAO-09-580 (Washington, D.C.: Apr. 23, 2009).
Recovery Act: As Initial Implementation Unfolds in States and Localities, Continued Attention to Accountability Issues Is Essential, GAO-09-631T (Washington, D.C.: Apr. 23, 2009).

The American Recovery and Reinvestment Act of 2009 (Recovery Act) provides $70.3 billion for three education programs: the State Fiscal Stabilization Fund (SFSF), Title I of the Elementary and Secondary Education Act (Title I), and the Individuals with Disabilities Education Act (IDEA). The Act requires recipients to be accountable for how these funds are being used and what is being achieved. To help attain the level of transparency needed for accountability, recipients are to report quarterly on their award activities and expected outcomes. This information is available to the public on Recovery.gov, the government’s official Recovery Act Web site. This report covers three Education programs funded by the Recovery Act. It (1) describes what the Office of Management and Budget (OMB) and the Department of Education (Education) did to facilitate implementation of requirements for recipients to describe the use of funds and (2) assesses the extent to which award descriptions are transparent. It also describes reported fund uses for a sample of subrecipients. GAO reviewed requirements for reporting in the Act as well as guidance provided by OMB and Education. GAO assessed the transparency of descriptions for the three education programs on Recovery.gov. Both OMB and Education provided guidance to recipients on how to meet the Recovery Act requirement that they report quarterly on the amount and use of the funds they have received. OMB’s guidance was generic for all agencies and instructed recipients to report narrative information that captures the overall purpose of the award, describes projects or activities, and states the expected results. Education’s guidance was supplemental and program specific to its formula grants that pass through states as the prime recipient to subrecipients, which are local educational agencies (LEA) and institutions of higher education. However, the Recovery Act reporting system does not provide specific narrative fields for collecting information on how each subrecipient is using the funds. Instead, the states are tasked with reporting on fund use throughout the state, and the reporting system limits the amount of narrative information states may enter. For states with many subrecipients, including detailed information on how each subrecipient is using the funds would be extremely challenging, if not impossible. To ease the reporting burden for prime recipients, Education’s guidance provided recipients with suggested standard language for use in important narrative fields. GAO determined that 9 percent of the descriptions fully met GAO’s transparency criteria; that is, they had sufficiently clear and complete information on the award’s purpose, scope and nature of activities, location, cost, outcomes, and status of work. Most descriptions did not include sufficient information on local fund use. Specifically, while 13 percent had most but not all of this information, the remaining 78 percent contained much less information and only partially met the attributes for transparency. GAO did not find any descriptions that failed to include at least some of the information needed to inform the public.
Descriptions limited to Education’s standard language were less transparent than those with specific information on the programs and activities subrecipients conducted in the state. For example, officials from seven Texas LEAs told GAO they used ESEA Title I Recovery Act funds for technology purchases for at-risk students, although the information in Texas’ project description uses only the standard language. Guidance on reporting requirements for Recovery Act grants that pass through a prime recipient to a subrecipient should balance the need for transparency with the reporting burden and these system limitations. While most states cannot provide information on how each subrecipient is using its funds, providing more information than Education’s standard language, such as an overview analysis of how localities are spending the funds, could help the public gain a better understanding of how the funds are being used. GAO recommends that the Secretary of Education, in consultation with OMB, remove the suggested language for the project description field from its guidance and instruct states to include information, to the extent possible, on how the funds are being used and potential project outcomes or results.
Mr. Chairman and Members of the Committee: I am pleased to have this opportunity to assist you in your continuing review of the Internal Revenue Service’s (IRS) Tax Systems Modernization (TSM). In March 1996, we appeared before this Committee to discuss the managerial and technical weaknesses of TSM. Today, our testimony focuses on IRS’ progress in achieving important programmatic aspects of its business vision for 2001 and how TSM supports that vision. In 1992, IRS developed a new business vision that was designed to address critical longstanding problems with its programs, such as (1) the lack of accurate and readily accessible information on taxpayers, their accounts, and IRS operations, due in part to an antiquated tax return processing system that relies on labor-intensive, error-prone methods to process over 200 million, primarily paper, tax returns annually; (2) taxpayer frustration in dealing with IRS as taxpayers seek to resolve tax law or account questions, frustration that revolves around very low levels of telephone accessibility, confusing and hard-to-understand notices, and the need to repeatedly call or correspond with IRS to resolve tax issues; and (3) a stagnant level of taxpayer compliance and a sizable inventory of accounts receivable. IRS’ business vision calls for addressing these problems through a series of organizational, business process, and technology changes, including TSM. Specifically, IRS’ vision calls for (1) moving from a paper-laden, labor-intensive tax return processing environment to a modern electronic environment; (2) providing better service to taxpayers through wider use of the telephone, better access to data, and new information systems; and (3) improving compliance through access to accurate, up-to-date data, earlier identification of noncompliant taxpayers, and increased efficiencies in its field enforcement functions. Initially, TSM was technology-driven rather than business-driven. As a result, important business requirements were not fully defined when early TSM projects were designed. IRS subsequently recognized that TSM could be an enabler for organizational and business process change and, in 1992, developed a business vision for 2001. Many of the requirements for this vision remain undefined today, jeopardizing the success of both TSM and the accomplishment of IRS’ vision. IRS is far from achieving the operational benefits of its vision. Little progress has been made either in reducing the number of paper returns IRS processes or in delivering the new systems needed to better process paper returns. In May 1993, IRS established a goal to receive 80 million tax returns electronically by 2001. As we told this Committee in March, IRS lacks a comprehensive business strategy for achieving its 80-million goal. Recently, after spending about $270 million of a projected $1.3 billion on a Document Processing System (DPS) for scanning and imaging paper returns, and about $94 million on an interim service center scanning and imaging system known as SCRIPS, IRS began a project to reengineer its tax returns processing system. It is too early to tell whether this new reengineering effort will have a significant impact on IRS’ ability to reduce the volume of paper returns it processes. IRS is beginning to implement aspects of its vision for improving customer service. IRS anticipates improving customer service by consolidating work sites, changing work processes, and putting in place new information systems.
However, IRS must address several important managerial, technical, and human resource challenges to fully achieve that vision. IRS’ goal is to increase compliance from 87 percent to 90 percent by 2001. However, IRS established that goal on a set of assumptions that have changed significantly. For example, IRS no longer plans to reinvest staff savings from TSM into compliance activities. Thus, compliance activities are likely to have fewer staff than IRS envisioned when it established the 90-percent goal. Also, IRS no longer has a specific plan for obtaining more up-to-date information for a new compliance research information system that IRS is developing as a part of TSM. Until IRS completes the reengineering of important processes, such as its tax return processing system, we have questions about IRS’ ability to make sound investment decisions on TSM. The outcome of IRS’ reengineering efforts could generate new business requirements that are not addressed by TSM projects or that make some of those projects obsolete. In 1986, IRS initiated TSM primarily to replace the computers that it was using to process and store the information on tax returns. IRS planned to introduce the new technology without changing its existing organizational and operating structure, which included 10 service centers that processed tax returns, over 70 telephone call sites that provided various types of service to taxpayers, and 63 district offices that were responsible for many of IRS’ compliance activities. In June 1991, we testified before this Committee on important management challenges facing IRS as it moved to resolve longstanding problems with its programs, their effectiveness, and the quality of service provided to taxpayers. In discussing those challenges, we said that “computers are only tools to help achieve management’s vision of a future IRS; they are not a substitute for that vision.” We also said that TSM offered IRS the opportunity to rethink the way it does business and the way it is structured to do that business. In 1992, in response to our and others’ recommendations, IRS began to analyze how it might use new technology to change its business operations. Subsequently, IRS decided on a series of business process and organizational changes that it set forth in a business vision for 2001. These proposals envisioned dramatic changes in the way IRS did business, with the changes supported by a new organizational structure, new business processes, and new technology. The new vision depended on new technology to be the vehicle to resolve many longstanding problems that resulted from IRS managers and employees not having access to the information they needed in a timely fashion. But other equally dramatic changes were envisioned, specifically many fewer processing centers and customer service sites, a shift from correspondence to the telephone in communicating with taxpayers, and a focus on earlier identification and resolution of taxpayer problems and noncompliance. While IRS predicted that it would need many fewer staff to maintain existing work levels, it anticipated investing the staff savings made available from TSM back into its customer service and enforcement programs. Filing by telephone, known as TeleFile, went nationwide this year, and about 2.8 million people participated. Yet the total number of people filing their returns electronically remains below 1994 levels and far below what is needed to accomplish IRS’ electronic filing goals.
Telephone accessibility was up this filing season—IRS assistors answered over a million more calls than last filing season due in part to better data availability—yet IRS was still able to answer only about 20 percent of the calls. IRS’ accounts receivable inventory remains on our high risk list. Congress cut IRS’ fiscal year 1996 TSM budget due in part to concerns about the value of TSM investments and IRS’ progress in delivering new systems. In response, IRS recently identified those aspects of its original business vision that it expects to accomplish by 2000 and those that will have to be delayed. We refer to this effort as IRS’ reassessment of TSM. Although IRS has not released information on which TSM projects will be continued and on what schedule, it appears the reassessment will affect IRS’ ability to resolve by the year 2001 many of the longstanding problems it faces. The remainder of our testimony provides more information on IRS’ business vision and IRS’ progress in achieving that vision. One of the biggest problems facing IRS is its antiquated, inefficient system for processing most tax returns. The system involves thousands of staff moving mountains of paper through several processing stages. It is a time-consuming, inefficient process that requires considerable effort just to correct errors made by IRS employees during the process. Storing and retrieving the paper returns involves further inefficiencies. It can take weeks, for example, for an IRS employee to retrieve a paper return from storage. IRS’ strategy for receiving and capturing data from tax returns was and still is a crucial component of IRS’ business vision. Initially, IRS’ strategy focused on replacing computers in its 10 service centers with more efficient ones. However, in 1992, IRS began examining other processing options. As a part of that analysis, IRS concluded that it had to make various organizational and business changes. Probably the most important business change was IRS’ decision to significantly increase the number of tax returns received electronically by 2001. Although IRS has initiatives under way to help it receive 80 million returns by 2001, those initiatives are targeted at tax returns that are among the least costly paper returns to process. Furthermore, IRS has not yet successfully addressed one of the major impediments to the expansion of electronic filing—its cost to taxpayers. IRS’ current initiatives to increase electronic filing will not, in their entirety, bring IRS close to its 80-million goal. IRS has acknowledged that it lacks a comprehensive business strategy for achieving that goal and needs to rethink its overall approach for receiving and capturing tax return data. To that end, IRS recently began a reengineering project to identify strategies for significantly reducing its paper tax return filings. As noted earlier, IRS’ original TSM plans for receiving and capturing tax return data centered on replacing existing computers at its 10 service centers. Accordingly, in 1988, IRS began designing a Document Processing System (DPS) that would use imaging and optical character recognition technologies to process paper tax returns and capture 100 percent of the data on those returns (IRS now captures only about 40 percent of the data on paper returns). IRS planned to implement this system at all 10 service centers. In April 1992, we said that IRS had not adequately assessed the cost/benefit tradeoffs associated with its strategy for receiving and capturing tax return data using DPS.
We said that two prerequisites for developing good information systems were an analysis of the business functional requirements and an identification of alternatives for meeting those requirements. We recommended that IRS develop a comprehensive analysis to determine the cost and benefits of alternative strategies for receiving and capturing tax return information. We said that IRS, as part of that analysis, should determine the impact of various electronic filing incentives on the requirements for imaging and optical character recognition. IRS proceeded with the development of DPS without this analysis but decided that DPS would be rolled out in 5 service centers instead of 10. IRS records show that it had spent about $270 million on DPS through fiscal year 1995. IRS will be evaluating its need for an imaging and data-capture system. One important aspect of this evaluation will be a determination of how much tax return data IRS needs for compliance purposes and whether data needs vary by type of return. This analysis was not done when DPS was initially planned. In light of the ongoing evaluation of DPS, according to an IRS official, the pilot test of DPS that was scheduled for January 1997 has been delayed. IRS’ analysis of options for changing its processing system for tax returns and other paper tax documents, such as information returns, resulted in several recommendations. Probably the most important recommendation was one to increase the number of returns that IRS would receive electronically in 2001. The other recommendations focused on consolidating paper processing of tax documents at fewer service centers and providing a return-free filing capability for certain taxpayers. When TSM began in 1986, IRS assumed that it would eventually receive about 40 million electronic returns a year. After analyzing options for business change, IRS adopted a goal of 80 million electronic returns by 2001. Compared with IRS’ current procedures for processing paper returns, electronic filing has several benefits for IRS. These benefits include reduced processing, storage, and retrieval costs and faster, more accurate processing of returns and refunds. Since the inception of electronic filing in 1986, IRS’ marketing approach was to encourage tax return preparers to provide electronic filing in the hope that they would market the service to the general public. IRS’ rationale for this approach was based primarily on the large number of professional preparers—about 57 million tax returns for tax year 1993 were prepared by professional preparers. Because we saw the need for IRS to expand the appeal of electronic filing, we recommended in January 1993 that IRS identify additional market segments and specify strategies for attracting those segments to electronic filing. To that end, IRS developed a strategy that encompassed 21 initiatives for increasing the number of electronic returns. One of those initiatives allows certain taxpayers to file their returns using touch-tone phones. This year, about 2.8 million taxpayers used that filing method, known as TeleFile. However, the one initiative that IRS assumed would have the single most significant impact on electronic filing, generating 46 million electronic returns, has since been dropped. That initiative called for legislative mandates requiring that (1) preparers of 100 or more individual returns offer electronic filing and (2) businesses with 10 or more employees file their returns electronically.
IRS dropped that initiative because IRS and Treasury officials believed there was little chance that Congress would pass such legislation. IRS estimates that it will receive 16 million returns electronically for 1996. To date, most of the returns being filed electronically are ones that, if filed on paper, could be filed on forms (like the 1040EZ) that are among the least costly paper returns to process. With that in mind, we recommended, in October 1995, that IRS identify those groups of taxpayers that offer the greatest opportunity to reduce IRS’ paper processing workload and operating costs if they filed electronically and develop strategies that focus on eliminating or alleviating impediments that inhibit those groups from participating in the program. The primary impediment we cited was the cost of electronic filing. To file electronically, taxpayers generally have to go through a tax return preparer or some other third party at a cost that typically ranges from $15 to $40. As we told this Committee in March, IRS has taken several actions that could result in future progress toward increasing the number of electronic returns. However, these initiatives have yet to culminate in a comprehensive strategy that will help IRS achieve its 80-million goal. In addition to processing information returns and tax deposit coupons, SCRIPS was expected to be processing all forms 1040EZ, 1040PC, and 941 (employment tax returns). Instead, SCRIPS is processing about 50 percent of the 1040EZs and none of the 1040PCs and 941s. As part of its vision, IRS planned to provide a return-free filing capability for a limited number of taxpayers by 2001. Under this system, the taxpayer would not have to file a tax return. IRS would calculate the tax liability and send the taxpayer either a bill or a refund. However, this capability depends on accelerated processing of information returns, such as wage and interest and dividend information submitted by third parties, so that IRS can determine the taxpayer’s liability and prepare a return for the taxpayer during the January through early March time frame. Currently, there is a 1-year lag between the time a taxpayer files a tax return and when IRS notifies the taxpayer that it has identified unreported income for that tax year. For example, in March 1996, IRS was sending out underreporter notices for returns filed in 1995. However, as a result of IRS’ reassessment of TSM, IRS does not plan to accelerate the processing of information returns to the extent needed to support return-free filing. Until it does so, return-free filing will not be an option. Our message regarding IRS’ progress in achieving its business vision for processing tax returns is really no different than it was in 1992—IRS’ strategy for returns processing needs to be based on a clear definition of its downstream business requirements for customer service and compliance and an analysis of the cost and benefits of providing those requirements under some of the different scenarios that IRS is currently considering as a part of its reengineering effort. Until such an analysis is completed, IRS has no assurance that its technology investments for submission processing are sound. The alternatives IRS is considering include eliminating certain classes of tax returns, expanding eligibility for filing simple forms, and outsourcing the data capture function.
Because this reengineering project is in its infancy, it is too early to determine whether the results will provide IRS with a clear definition of the functional requirements for its future returns-processing system. The foundation for this analysis needs to be a determination of the type of tax return data that IRS needs for compliance and customer service—something IRS says it is doing as part of its reevaluation of DPS. The second part of IRS’ business vision is to improve service to taxpayers. A key IRS goal is to resolve 95 percent of taxpayer inquiries after one contact. For service to improve, taxpayers must be able to reach IRS by telephone when they have questions or problems and IRS employees must have easy access to the information needed to help taxpayers. Taxpayers have long had problems reaching IRS by telephone. The percentage of taxpayers’ calls that IRS assistors answered decreased from 58 percent for the 1989 filing season to 8 percent for the 1995 filing season. Although the accessibility rate improved during the 1996 filing season, assistors were still only able to answer 20 percent of taxpayers’ telephone calls. And, even when a taxpayer gets through to IRS, the assistor does not always have easy access to the information needed to resolve the taxpayer’s problem. As a result, the assistor may have to either (1) refer the taxpayer to another office, (2) research the problem and call the taxpayer back, or (3) tell the taxpayer to call back later. IRS’ strategy for improving customer service includes consolidating work units, changing work processes, and increasing the use of or implementing new information systems. IRS’ strategy offers promise as it is designed to improve taxpayers’ ability to get assistance from IRS and to provide IRS employees easy access to information. However, IRS faces many challenges in implementing that strategy. IRS’ customer service vision calls for consolidating the work of different functional areas that do not have face-to-face interaction with taxpayers. Currently, taxpayers’ inquiries must often be handled by other offices rather than those they initially contact. As a result, taxpayers may have to make several inquiries before locating an IRS office that can address their concern or question. Non-face-to-face interaction with taxpayers has traditionally been done in at least 70 IRS organizational units in 44 locations. The customer service vision calls for consolidating the work of these 70 organizational units into 23 customer service centers. Customer service centers would absorb the functions of (1) toll-free taxpayer assistance sites, which answer calls about tax law and procedures, taxpayer accounts, and notices that taxpayers receive from IRS; (2) automated collection call sites, which contact taxpayers to secure delinquent tax returns and payments and answer calls from taxpayers who are the subject of collection actions; and (3) forms distribution centers, which handle requests for tax forms and publications. IRS has made some progress toward implementing the organizational changes. IRS has selected the locations for its customer service centers, developed a schedule for start-up operations, and formulated a plan for progressively expanding the workload of the new centers. Two customer service centers (Nashville and Fresno) are experimenting with new ways of providing customer service over the telephone. As of April 1996, IRS had partial customer service operations at 13 of its 23 sites. Of the 28 organizational units that are scheduled to close, 6 are closed.
The remaining offices will be closed on a staggered schedule through 2002. IRS’ customer service vision emphasizes use of the telephone to interact with taxpayers. As such, IRS’ plans include actions directed at converting to telephone much of the work now being done by correspondence and at making it easier for taxpayers to reach IRS and resolve their problems by telephone. The Fresno prototype customer service center has experience in converting paper correspondence to the telephone. According to IRS, after it began including Fresno’s telephone number on some outgoing notices, the center’s correspondence receipts declined by 15 percent. Other customer service centers are testing a new toll-free telephone number that IRS added to certain account notices this year. In past years, those notices instructed taxpayers to write to IRS if they had any questions. IRS’ strategy for improving the accessibility of its telephone service calls for (1) extending its hours of operation, (2) improving its ability to route calls, (3) increasing the use of interactive systems, and (4) reducing demand for assistance. First, office hours would be extended to 20 hours a day during the week and 8 hours each day on the weekend. Also, taxpayers would have access to interactive systems 24 hours a day. Starting in January 1995, by routing calls among some call sites and extending the hours of others, IRS enabled taxpayers nationwide to call IRS from 7:30 a.m. to 5:30 p.m. weekdays—2 more hours of service than in the past. The second part of IRS’ strategy for improving telephone accessibility calls for enhancing IRS’ ability to route taxpayer calls nationwide to those locations that have employees available to answer taxpayers’ questions. Early in 1995, IRS installed automated call distributors that can send calls to other locations where IRS employees are available to answer questions. However, IRS currently routes calls using a “bottom up” approach—i.e., the call site notifies the cognizant regional office when it is overloaded, and the regional office then notifies the National Office. On the basis of daily trend data, the National Office sends the calls to other call sites not thought to be busy. National Office staff manually log the change and enter it into a terminal. After this process, the change can be operational within 15 minutes to 1 hour. However, by the time the National Office responds, the overload situation may have subsided or callers may have simply abandoned their calls. As part of its customer service vision, IRS hopes to have a “top down” approach to call routing using real-time data in 1997. This capability depends on certain technology and establishment of a National Command Center that will have access to real-time call volumes for all customer service centers. Increasing the use of interactive systems is the third part of IRS’ strategy to expand telephone service. Specifically, IRS expects that 45 percent of all taxpayers’ calls will be resolved through interactive systems. These systems are to allow taxpayers to get answers to their questions and complete certain transactions, such as making tax payments or entering into installment agreements, without talking to an IRS employee. Overall, IRS expects to have 30 or more of these systems available to taxpayers by 2000. As of January 1996, IRS had developed and tested three such systems and had rolled out one of them to seven locations. Four more interactive telephone systems are scheduled to be tested in September 1996.
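The difference between the current "bottom up" rerouting process and the planned "top down" approach described above comes down to who sees call volumes and how quickly. The sketch below is purely illustrative: the site names, volumes, and capacities are hypothetical, and the actual National Command Center design is not described in this testimony. It simply shows the kind of decision a command center with real-time volumes could make immediately, rather than through a manual notification chain that can take 15 minutes to an hour to put into effect.

```python
# Illustrative sketch only: routing the next call to the center with the most
# spare capacity, using real-time volumes of the kind a National Command Center
# would see. Site names and numbers are hypothetical.

def route_call(calls_in_progress: dict[str, int], capacity: dict[str, int]) -> str:
    """Pick the customer service center with the most spare capacity right now."""
    spare = {site: capacity[site] - calls_in_progress.get(site, 0) for site in capacity}
    return max(spare, key=spare.get)

# Hypothetical snapshot of real-time data for three centers.
calls_in_progress = {"Nashville": 180, "Fresno": 95, "Atlanta": 140}
capacity = {"Nashville": 200, "Fresno": 150, "Atlanta": 160}

print(route_call(calls_in_progress, capacity))  # "Fresno" has the most spare capacity
```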
We recently reported that the three interactive telephone systems that IRS has tested were difficult for taxpayers to use because IRS’ telephone routing system (1) required taxpayers to remember up to eight menu options when the design guidelines called for no more than four options and (2) did not allow taxpayers to return to the main menu when they made a mistake or wanted to resolve other issues. We recommended that IRS assess the various menu options and take actions to overcome the problems caused by too many options, including using multiple toll-free numbers and providing taxpayers with a written step-by-step description of how to use the interactive systems’ menus. In response to our recommendation, IRS plans to further test telephone menu options and interactive telephone systems to determine taxpayers’ needs and their ability to use the system easily. The clarity of menu options will be even more critical as IRS plans to expand its use of interactive systems. The final part of IRS’ strategy is to reduce the need for taxpayers to call IRS. IRS plans to do this in several ways. In the near term, demand on IRS’ customer service centers will be reduced by eliminating unnecessary notices. In that regard, as part of a recent notice reengineering project, IRS decided to eliminate certain notices. When the recommendations from the reengineering effort are fully effective, in fiscal year 1997, IRS expects to be issuing almost 46 million fewer notices annually to taxpayers. By eliminating those notices, IRS expects to receive 9 to 10 million fewer telephone calls from taxpayers. In the longer term, IRS plans to reduce demand by successfully responding to more taxpayer issues with only one contact. According to IRS, this will require its assistors to have better quality information and tools at their disposal. As discussed in the next section, some progress has been made, but the systems IRS needs to accomplish this goal remain in development. In addition to organizational and work process changes, IRS’ customer service vision depends on increasing the use of existing information systems and implementing new ones. The main computer system IRS employees now use in assisting taxpayers, known as the Integrated Data Retrieval System (IDRS), was designed in the 1960s. Until 1995, account information in IDRS was spread among 10 service centers and employees in each center had access to information on only a small percentage of the IDRS accounts. When an employee did not have access to the account information needed to respond to a taxpayer’s question, the employee typically wrote down the question and mailed it to the location that had access to the information, whose staff would then respond to the taxpayer. Early in 1995, IRS implemented a networking capability among the 10 service centers so that employees could have access to IDRS data nationwide. This networking capability is referred to as Universal IDRS. Although Universal IDRS gives IRS employees access to taxpayer account information nationwide, IDRS does not always contain complete information on a taxpayer’s account. Other information needed to help the taxpayer may reside in different systems that are not linked to IDRS. For example, an IRS employee using IDRS will know that a taxpayer was sent an underreporter notice, but would not have access to the actual notice. That notice is contained in IRS’ Automated Underreporter system.
The notice would provide additional information, such as the amount of unreported income and information return data that may indicate, for example, the amount of dividend or interest reported by financial institutions but not by the taxpayer. To resolve these kinds of problems, IRS eventually intends to provide its employees with access to greater amounts of on-line taxpayer data in shorter time frames than those for the current IDRS data. This capability is to be delivered when IRS implements two TSM projects—the Corporate Accounts Processing System (CAPS) and the Workload Management System (WMS). CAPS is to be the main repository of taxpayer account data, and WMS is to track and manage all open account issues for a taxpayer. These projects are scheduled to be implemented in 1999. Another TSM project, the Integrated Case Processing (ICP) system, is intended to make a number of important databases available to employees when they talk to the taxpayer. This is key to meeting IRS’ customer service goals. IRS plans to deliver ICP in four software increments. The first software increment consists of eliminating the need for IRS employees to use multiple workstations to access data on individual taxpayers from different information systems. As of February 1996, the first increment of ICP was partially deployed at 13 of the 23 customer service centers. The next ICP software increment is being designed to provide enhancements over the first increment. Some of the enhancements include consolidating the information from multiple systems onto a single standard screen and providing IRS with the capability to route calls to the most skilled IRS employee who is available at the time of a taxpayer’s call. Later versions are expected to provide this same level of access to information for business taxpayers. IRS’ strategy for improving customer service offers promise as it is designed to improve taxpayers’ ability to get assistance from IRS and provide IRS employees access to the information they need to help taxpayers. However, IRS faces important managerial, technical, and human resource challenges to fully achieve its customer service vision. Specifically, it has to manage the transition to the customer service vision while continuing to meet the current workload for providing answers to taxpayer inquiries, managing taxpayer accounts, and collecting unpaid taxes. IRS also has to determine the scope of responsibilities for those staff employed at customer service centers and provide the requisite training for that staff. IRS also has to develop the information systems necessary to support the accomplishment of its vision, including interactive telephone systems that are easy for taxpayers to use. The third major part of IRS’ vision is to increase compliance. According to IRS, compliance levels have remained at 87 percent for the last several years. IRS estimates that each percentage point increase in compliance could generate billions of dollars in revenue. In addition, IRS is faced with an inventory of collectible tax debts that, according to IRS estimates, was about $46 billion as of September 30, 1995. IRS’ goal is to increase compliance to 90 percent by 2001 through improved voluntary compliance and enforcement. However, it is unclear how IRS expects to achieve that goal, especially considering some of the changes since the goal was established.
Since then, for example, IRS (1) has begun reassessing its data needs and revised its plans to capture 100 percent of the data on tax returns; (2) has postponed indefinitely the Taxpayer Compliance Measurement Program (TCMP), which has been IRS’ primary program for obtaining comprehensive and reliable taxpayer compliance data since the 1960s; (3) no longer anticipates being able to do up-front matching of tax returns and information returns, at least until sometime after 2000; and (4) has abandoned its assumption that staff-year savings from modernization would be reinvested in front-line customer service and compliance positions. Achievement of IRS’ compliance goal hinges on the ability of enforcement staff to readily access good data. For example, as we discussed in recent testimony on IRS’ debt collection practices, existing IRS computer systems do not provide ready access to needed information and, consequently, do not adequately support modern work processes. Access to current and accurate information on tax debts is essential if IRS is to enhance the effectiveness of its collection tools and programs to optimize productivity, devise alternative collection strategies, and develop programs to prevent taxpayers from becoming delinquent in the first place. Although technology plays a key role in helping an organization collect good data and make it readily accessible to employees, it is critical that the organization first determine what data it needs. IRS has not yet identified all of the data that enforcement staff need to do their job. As part of reassessing its data needs, IRS is reviewing the data on the individual income tax return (Form 1040) to determine what is and is not needed. It is also important that any data IRS captures, whether 40 percent or 100 percent of the universe, be easily accessed by staff who need it. In that regard, IRS officials told us that enforcement staff are not able to readily access the data that IRS is now capturing. Like Customer Service, IRS’ enforcement functions should benefit from the eventual replacement of the current master files with CAPS and WMS. Data are also critical to IRS’ new approach for researching ways to improve compliance. IRS has traditionally responded to noncompliance through audits and other enforcement efforts. Over time, IRS concluded that enforcement was essential to pursue intentional noncompliance but that improved taxpayer assistance and education, rather than enforcement, might be more appropriate for correcting unintentional noncompliance. With this in mind and concerned about noncompliance levels, IRS created a compliance research and analysis approach in 1993, with the intent of identifying noncompliant market segments and appropriate enforcement and nonenforcement efforts to address that noncompliance. IRS’ major research tool is to be the Compliance Research Information System (CRIS). Plans call for CRIS to be an integrated network of databases containing a sample of internal, external, and multi-year data, accessible to national and district office personnel to support analyses of voluntary compliance levels. CRIS is expected to enable IRS to develop working hypotheses on the means to increase voluntary compliance, test hypotheses, evaluate the results, and make decisions on how to implement the new strategies. IRS may not have objective compliance data available when needed for its research efforts. In October 1995, IRS indefinitely postponed TCMP due to budget and taxpayer burden concerns.
TCMP has been IRS’ primary program for obtaining comprehensive and reliable taxpayer compliance data since the 1960s. IRS has not done a TCMP of individual income tax returns since 1988. With the postponement of TCMP, IRS lacks current measures on compliance and does not have the data it needs to determine which market segments to research on ways to correct noncompliance. As we discussed in an April 1996 report to the Commissioner of Internal Revenue, although IRS plans to mitigate the data losses resulting from the postponement of TCMP, it has no specific proposal on how to accomplish this. IRS’ original vision assumed that compliance efforts would be enhanced by more timely issue identification and resolution, facilitated in part by accelerating the matching of tax return data with data provided by third parties, such as banks and employers. The ultimate goal was to achieve up-front matching whereby data are received and processed soon enough to allow matching with the tax return while the return is being processed and before any refund is issued. IRS has been accelerating the matching process, but the first notice to taxpayers advising them of any discrepancy is still not sent out until about a year after the tax return was filed. According to IRS officials, IRS eventually wants to be able to send out notices in the same year the return was filed. Besides increasing the likelihood of contacting the taxpayer and resolving the case, sending the notice the same year the tax return is filed might help taxpayers avoid the same mistake on the following year’s return. It is not clear when IRS will be able to do up-front matching. Data and the technology that provides them are critical, but so are the people who are tasked with using the data. IRS already has tens of thousands of staff who work in areas, such as taxpayer service, examination, and collection, that can affect compliance levels. Until recently, IRS had assumed that staff savings resulting from modernization (such as the savings anticipated in the returns processing function) would be reinvested to provide more of those front-line staff, with a corresponding increase in revenues. That is no longer the case, at least not to the extent originally anticipated. According to IRS officials, one of the assumptions surrounding its recent reassessment of TSM was that IRS would be smaller and could not rely on reinvesting TSM savings. We have not quantified the implications of this change in staffing assumptions. It is clear, however, that IRS’ success in increasing compliance is directly related to the number of staff involved in compliance-related activities and that any significant change in staffing could significantly affect IRS’ ability to achieve its 90-percent compliance goal. IRS could mitigate that effect, at least somewhat, by making sure that it has the right mix of staff. IRS has made procedural changes to speed up its collection process, but historically it has been reluctant to reallocate resources from the field to earlier, more productive, collection activities. IRS’ fiscal year 1997 budget request states that, although traditional enforcement positions (which include revenue officers) “comprise the lion’s share of IRS’ enforcement efforts, they also represent on the margin the least efficient use of IRS resources.” In that regard, the budget request provides for an increase in staff for IRS’ telephone collection activities and a decrease in revenue officers—a shift toward the kind of mix that we have advocated in the past.
Another way to mitigate the effect of fewer-than-expected staff is to improve staff productivity. In that regard, one of IRS’ efforts to improve compliance involves the automation of certain tasks done by enforcement staff in IRS’ district offices. These tasks, like many in IRS, have for years involved the manual processing of paper, which has resulted in enforcement staff spending significant amounts of time on routine administrative duties. IRS has been implementing systems that are designed to ease this burden and help make enforcement staff more productive. The Integrated Collection System (ICS) is a computer-based information system that is intended to automate some of the labor-intensive tasks performed by revenue officers. Although this effort is not a major technological advancement, it should enable revenue officers to spend their time more productively. According to IRS, implementing this system in two pilot districts resulted in increased collections, faster case closing, and less time spent on each case. The system is currently operating in six districts, and IRS plans to roll it out in three more districts this year. According to IRS, further implementation depends on future funding and final measurements of productivity. IRS is also developing an automated inventory delivery system that is intended to direct accounts, based on internally developed criteria, to the particular collection stage where they can be processed most efficiently and expeditiously. This system, which IRS plans to test in July 1996, is intended to move accounts through the collection process faster and at lower cost than under the current system. A similar automation effort, known as TIES, has focused on tasks performed by compliance staff in IRS offices. However, IRS recently decided not to continue funding TIES through TSM. It is our understanding that any future funding will be done outside of TSM. According to IRS, the features of ICS and TIES will eventually be incorporated into the Integrated Case Processing (ICP) system, which will also provide IRS’ compliance function with automated tools for case assignment and tracking. However, as a result of the recent reassessment of TSM, IRS has decided to delay that integration until after the year 2000. In March 1996, we told the Subcommittee on Oversight, House Committee on Ways and Means, that additional investments in TSM are at risk given current managerial and technical weaknesses. Those were weaknesses that we discussed in our July 1995 report on TSM. The Department of the Treasury is expected to report to the Senate and House Appropriations Committees on IRS’ progress in dealing with those weaknesses soon. One of the managerial weaknesses discussed in our July 1995 report that has significant programmatic implications was a lack of integration of IRS’ reengineering efforts and TSM projects. Specifically, we said that IRS’ reengineering efforts were not tied to its TSM projects and that IRS lacked a comprehensive plan and schedule defining how and when to integrate these business reengineering efforts with ongoing TSM projects. The reengineering efforts we referred to in July 1995 were put on hold pending the outcome of IRS’ reassessment of TSM. As a result of the recent reassessment of TSM, IRS decided to reengineer “the tax settlement process”. IRS has defined that process as beginning at the point taxpayers collect information necessary for the filing of tax returns and ending when the current year tax account is satisfied or enforcement action is initiated. IRS has identified 18 high-level processes for this time period.
One of those processes focuses on IRS’ tax return processing activity that we mentioned earlier. The outcome of this reengineering effort could generate new business requirements that are not addressed by planned TSM projects or that make those projects obsolete. For example, if IRS decides that it is cost-effective to outsource paper tax return processing, it will not need the scanning and imaging technologies that DPS is being designed to provide. In closing, Mr. Chairman, our main point is that until clearly defined business requirements drive TSM projects, there is no assurance that TSM projects will achieve the desired objectives and result in improved operations. IRS must clearly define its business needs and determine the most cost-effective means for meeting those needs to ensure that it makes effective use of funds provided for information technology projects. That concludes my statement. We welcome any questions that you may have.

GAO discussed the Internal Revenue Service’s (IRS) progress in achieving its business vision for 2001 and how its Tax Systems Modernization (TSM) supports that vision. GAO noted that: (1) as part of its business vision, IRS will increase the number of returns it receives electronically, consolidate its paper processing operations, and provide return-free filing; (2) without a returns processing strategy based on its customer service and compliance needs and a cost analysis, IRS has no assurance that its TSM investments are sound; (3) IRS plans to improve customer service by reorganizing its customer service centers according to the work performed, expanding and simplifying telephone interaction with customers, and using information systems to provide the information IRS employees need to assist customers; (4) while IRS customer service improvements appear promising, IRS must continue to handle its current workload through the conversion, train its customer service employees, and develop the necessary information systems; (5) IRS plans to improve enforcement and voluntary compliance; (6) while access to good data and more staff could improve enforcement and voluntary compliance, IRS does not plan to use its savings from TSM to hire more employees; and (7) managerial and technical weaknesses could jeopardize TSM investments.
Ensuring the quality and safety of nursing home care has been a focus of considerable congressional attention since 1998. Titles XVIII and XIX of the Social Security Act establish minimum requirements in statute that all nursing homes must meet to participate in the Medicare and Medicaid programs, respectively. With the Omnibus Budget Reconciliation Act of 1987 (OBRA 87), Congress focused the requirements on the quality of care actually provided by a home. To help ensure that homes maintained compliance with the new requirements, OBRA 87 also established the range of available sanctions, to include CMPs, DPNAs, and termination. CMS contracts with state survey agencies to assess whether homes meet federal quality requirements through routine inspections, known as standard surveys, and complaint investigations. The requirements are intended to ensure that residents receive the care needed to protect their health and safety, such as preventing avoidable pressure sores, weight loss, and accidents. While a standard survey involves a comprehensive assessment of federal quality requirements, a complaint investigation generally focuses on a specific allegation regarding resident care or safety; complaints can be lodged by a resident, family member, or nursing home employee. Deficiencies identified during either standard surveys or complaint investigations are classified in 1 of 12 categories according to their scope (i.e., the number of residents potentially or actually affected) and severity. An A-level deficiency is the least serious and is isolated in scope, while an L-level deficiency is the most serious and is considered to be widespread in the nursing home (see table 2). When state surveyors identify and cite B-level or higher deficiencies, the home is required to prepare a plan of correction and, depending on the severity of the deficiency, surveyors conduct revisits to ensure that the home actually implemented its plan and corrected the deficiencies. Homes with deficiencies at the A, B, or C levels are considered to be in substantial compliance with federal quality requirements, while homes with D-level or higher deficiencies are considered noncompliant. A noncompliance period begins when a survey finds noncompliance and ends either when the home achieves substantial compliance by correcting the deficiencies or when the home is terminated from Medicare and Medicaid. Since 1998, the deficiencies cited during standard surveys have been summarized on CMS’s Nursing Home Compare Web site, and CMS subsequently added data on the results of complaint investigations. These data are intended to help consumers take the quality of care provided to residents into account when selecting a nursing home. CMS and the states can use a variety of federal sanctions to help encourage compliance with quality requirements, ranging from less severe sanctions, such as indicating the specific actions needed to address a deficiency and providing an implementation time frame, to those that can affect a home’s revenues and provide financial incentives to return to and maintain compliance (see table 3). Overall, two sanctions—CMPs and DPNAs—accounted for 80 percent of federal sanctions from fiscal years 2000 through 2005. The majority of federal sanctions implemented from fiscal years 2000 through 2005—about 54 percent—were CMPs. CMPs may be either per day or per instance.
CMS regulations specify a per day CMP range from $50 to $10,000 for each day a home is noncompliant—from $50 to $3,000 for nonimmediate jeopardy and $3,050 to $10,000 for immediate jeopardy. The overall amount of the fine increases the longer a home is out of compliance. For example, a home with a per day CMP of $5,000 that is out of compliance for 10 days would accrue a total penalty of $50,000. A per day CMP can be assessed retroactively, starting from the first day of noncompliance, even if that date is prior to the date of the survey that identified the deficiency. Per instance CMPs range from $1,000 to $10,000 per episode of noncompliance. While multiple per instance CMPs can be imposed for deficiencies identified during a survey, the total amount cannot exceed $10,000. Per day and per instance CMPs cannot be imposed as a result of the same survey, but a per day CMP can be added when a deficiency is identified on a subsequent survey if a per instance CMP was the type of CMP initially imposed. Unlike other sanctions, CMPs require no notice period. However, if a home appeals the deficiency, by statute, payment of the CMP—whether received directly from the home or withheld from the home’s Medicare and Medicaid payments—is deferred until the appeal is resolved. DPNAs made up about 26 percent of federal sanctions from fiscal years 2000 through 2005. A DPNA denies a home payments for new admissions until deficiencies are corrected. In contrast to CMPs, CMS regulations require that homes be provided a notice period of at least 15 days for other sanctions, including DPNAs; the notice period is shortened to 2 days in the case of immediate jeopardy. As a result, homes can avoid DPNAs if they are able to correct deficiencies during the notice period, which provides a de facto grace period. Unlike CMPs, DPNAs cannot be imposed retroactively, and payment denial is not deferred until appeals are resolved. Although nursing homes can be terminated involuntarily from participation in Medicare and Medicaid, which can result in a home’s closure, termination is used infrequently. Terminations were less than 1 percent of total sanctions from fiscal years 2000 through 2005. Four of the seven types of sanctions described above—directed plan of correction, state monitoring, directed in-service training, and temporary management—were used less frequently than CMPs and DPNAs; these sanctions accounted for about 19 percent of sanctions nationwide from 2000 through 2005. The statute permits and, in some cases, requires that DPNAs or termination be imposed for homes found out of compliance with federal quality requirements. Termination and DPNAs are mandatory in the following circumstances:
Termination—Termination is required by regulations under the statute if within 23 days of the end of a survey a home fails to correct immediate jeopardy deficiencies, or within 6 months of the end of a survey the home fails to correct nonimmediate jeopardy deficiencies.
DPNA—A DPNA is required by statute if within 3 months of the end of a survey a home fails to correct deficiencies and return to compliance or when a home’s last three standard surveys reveal substandard quality of care.
The statute also authorizes CMS to impose discretionary DPNAs and discretionary terminations in situations other than those specified above. Federal regulations further stipulate that such discretionary sanctions may be implemented as long as a facility is given the appropriate notice period.
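The CMP amount rules described above lend themselves to a small worked example. The sketch below is illustrative only: the dollar ranges and the $10,000 per-survey cap are taken from the regulations as summarized in this report, the treatment of scope and severity letters J, K, and L as immediate jeopardy follows the grid discussed below, and the sample values are hypothetical.

```python
# Illustrative sketch of the CMP amount rules described in the text.
# Letters J, K, and L denote immediate jeopardy on the scope and severity grid.

IMMEDIATE_JEOPARDY = {"J", "K", "L"}

def per_day_range(deficiency_letter: str) -> tuple[int, int]:
    """Allowable per day CMP range depends on whether the deficiency is immediate jeopardy."""
    return (3050, 10000) if deficiency_letter in IMMEDIATE_JEOPARDY else (50, 3000)

def per_day_total(daily_amount: int, days_noncompliant: int) -> int:
    """A per day CMP accrues for each day of noncompliance (and may run retroactively)."""
    return daily_amount * days_noncompliant

def per_instance_total(instance_amounts: list[int]) -> int:
    """Per instance CMPs are $1,000 to $10,000 each, but cannot exceed $10,000 per survey."""
    assert all(1000 <= amt <= 10000 for amt in instance_amounts)
    return min(sum(instance_amounts), 10000)

# The report's example: a $5,000 per day CMP over 10 days of noncompliance.
print(per_day_total(5000, 10))            # 50000
print(per_day_range("G"))                 # (50, 3000) for an actual-harm deficiency
print(per_instance_total([4000, 8000]))   # capped at 10000
```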
By regulation, the notice period for implementing both discretionary and mandatory DPNAs and terminations is 15 days; in cases of immediate jeopardy, however, the notice period is 2 days. In imposing sanctions, CMS takes into account four factors: (1) the scope and severity of the deficiency, (2) a home's prior compliance history, (3) desired corrective action and long-term compliance, and (4) the number and severity of all the home's deficiencies. In general, the severity of the sanction increases with the severity of the deficiency. For example, for immediate jeopardy deficiencies (J, K, and L on CMS's scope and severity grid) the regulations require that temporary management, termination, or both be imposed, and also permit CMPs of $3,050 to $10,000 per day or $1,000 to $10,000 per instance of noncompliance. Similarly, for deficiencies at the actual harm level (G, H, and I on the scope and severity grid) the regulations require one or a combination of the following sanctions: temporary management, a DPNA, a per day CMP of $50 to $3,000, or a per instance CMP of $1,000 to $10,000. In addition to these required sanctions, other sanctions can be included; for example, depending on the severity of the deficiency and a home's compliance history, it could have a combination of state monitoring, a DPNA, and a CMP. Finally, CMS is required to consider the immediacy of sanctions. The statute stipulates that sanctions should be designed to minimize the time between the identification of violations and the final imposition of the sanctions. Enforcement of nursing home quality-of-care requirements is a shared federal-state responsibility. In general, sanctions are (1) initially proposed by the state survey agency based on a cited deficiency, (2) reviewed and imposed by CMS regional offices, and (3) implemented—that is, put into effect—by the same CMS regional office, usually after a required notice period (see fig. 1). CMS regional offices typically accept state-proposed sanctions but can modify them. The regional office notifies the home by letter that a sanction is being imposed—that is, its intent to implement a sanction—and the date it will be implemented. State surveyors may make follow-up visits to the home to determine whether the deficiencies have been corrected. The CMS regional office implements the sanctions if the deficiencies are not corrected. Homes may appeal the cited deficiency and, if the appeal is successful, the severity of the sanction could be reduced or the sanction could be rescinded. Homes have several avenues of appeal, including informal dispute resolution at the state survey agency level, a hearing before an administrative law judge, and review by the Department of Health and Human Services Departmental Appeals Board. Under CMS policy, homes automatically receive a 35 percent reduction in the amount of a CMP if they waive their right to appeal before the Departmental Appeals Board. In response to our earlier recommendations, CMS undertook a number of initiatives intended to strengthen enforcement, many of which we reported on in 2005.
For example, CMS (1) revised its revisits policy by requiring surveyors to return to nursing homes to verify that serious deficiencies had actually been corrected; (2) hired more staff to reduce the backlog of appeals at the Health and Human Services Departmental Appeals Board, the entity that adjudicates nursing home appeals of deficiency citations; (3) began annual assessments of state survey activities, known as state performance reviews, which cover, among other things, the timeliness of sanction referrals from state survey agencies to CMS regional offices; and (4) revised its past noncompliance policy for citing and reporting serious deficiencies that were missed by state surveyors during earlier surveys of a home. A key CMS enforcement initiative was the two-stage implementation of an immediate sanctions policy. In the first stage, effective September 1998, CMS required states to refer for immediate sanction homes found on successive surveys to have a pattern of actual harm to residents or to have exposed residents to potential death or serious injury (H-level or higher deficiencies on the agency's scope and severity grid). Effective January 2000, CMS expanded the policy, requiring referral of homes found to have harmed one or a small number of residents (G-level deficiencies) on successive routine surveys or intervening complaint investigations. After expansion of the immediate sanctions policy to include G-level deficiencies, it became known as the double G immediate sanctions policy. CMS also took steps to improve its ability to manage and oversee the enforcement process. Our 1999 report described how CMS regions and states were using their own systems to track sanctions rather than CMS's OSCAR database. Regional office systems ranged from manual, paper-based records to complex computer programs; none of the four states included in our 1999 report had tracking systems compatible with OSCAR or the regional office systems in use. Until it implemented a new enforcement data collection system, CMS used LTC, an interim enforcement tracking system developed and first used by its Chicago regional office. LTC was operational in all 10 regions by January 2000. CMS's enforcement data collection system—AEM—replaced LTC and was implemented 4 years later, on October 4, 2004. Recognizing the need to focus more attention on homes that historically provided poor care, CMS designed and launched a Special Focus Facility program in January 1999, instructing states to select 2 homes each for enhanced monitoring. Surveys were to be conducted at 6-month intervals rather than annually. In September 2000, CMS reported that semiannual surveys had been conducted at a little more than half of the original 110 facilities. In late 2004, CMS modified the program by (1) expanding its scope to include more homes, (2) revising the selection criteria for homes, and (3) strengthening sanctions for homes that did not improve within 18 months. In a separate but relevant initiative, CMS established a voluntary program to help nursing homes improve the quality of care provided to residents. In 2002, Medicare Quality Improvement Organizations (QIO) began working intensively with 10 percent to 15 percent of nursing homes in each state on issues such as preventing pressure sores and managing pain.
Responding to concerns that QIOs were not working with homes that needed the most help, CMS established a separate pilot program in 2004; QIOs worked for 12 months with 1 to 5 nursing homes with significant quality problems in 18 states to help them redesign their clinical practices. Unlike the Special Focus Facility program, the participation of homes in the pilot was voluntary. To distinguish it from the Special Focus Facility program, the pilot was known as the Collaborative Focus Facility program. Among the homes we reviewed in four states, the number of implemented sanctions and serious deficiencies declined across two time periods— fiscal years 2000 through 2002 and fiscal years 2003 through 2005. Federal data show similar declines for homes nationwide, a trend consistent with the decline in the proportion of homes cited for serious deficiencies that generally result in sanctions. Despite the decline in the number of serious deficiencies, the homes we reviewed generally were cited for more deficiencies that caused harm to residents than other homes in the four states. While the numbers of implemented CMPs and DPNAs at the homes we reviewed declined across the two time periods, the amount of CMPs paid increased. Not all imposed sanctions for these homes were implemented, however, which may reduce the deterrent effect of sanctions; in fact, we found that the implementation rate of certain sanctions, such as DPNAs, decreased. The deterrent effect of sanctions for the homes was further eroded because CMS generally imposed CMPs on the lower end of the allowable dollar range and did not exercise its authority to use discretionary DPNAs and terminations, allowing the homes more opportunities to escape sanctions prior to implementation. Among all nursing homes nationwide, sanctions declined across the two time periods—fiscal years 2000 through 2002 and fiscal years 2003 through 2005. Implemented terminations declined the most across the two time periods (about 41 percent) and CMPs declined the least (about 12 percent), while the number of DPNAs declined by about 31 percent. In the same time periods, the average number of serious deficiencies per home declined by about 33 percent nationwide, from about 0.8 to about 0.5. These downward trends are also consistent with the nationwide decline in the proportion of homes with serious deficiencies—from about 28 percent in fiscal year 2000 to about 17 percent in fiscal year 2005 (see app. II). While the reported decline in serious deficiencies and the proportion of homes cited for such deficiencies may be due to improved quality, our earlier reports noted similar declines that masked (1) understatement of serious quality problems, and (2) inconsistency in how states conduct surveys. For example, our current analysis found that the proportion of homes cited for serious deficiencies ranged from a low of about 4 percent in Florida to a high of about 44 percent in Connecticut during fiscal year 2005. Across the four states we reviewed, the proportion of homes with serious deficiencies in fiscal year 2005 ranged from 8 percent in California to 23 percent in Michigan. As we previously reported, such disparities are more likely to reflect inconsistency in how states conduct surveys rather than actual differences in the quality of care provided by homes. In addition, in commenting on a draft of this report, CMS noted concerns about whether the immediate sanctions policy has had a negative effect on state citations of serious deficiencies. 
The number of implemented sanctions at the homes we reviewed as well as the number of serious deficiencies cited in these homes declined across two time periods—fiscal years 2000 through 2002 and fiscal years 2003 through 2005—consistent with nationwide trends. Deficiency trends. The average number of serious deficiencies per home we reviewed decreased from about 1.8 in fiscal years 2000 through 2002 to about 0.7 in fiscal years 2003 through 2005, about a 61 percent decline; this decline was consistent with the national trend. During both time periods, however, the homes we reviewed generally performed more poorly than other homes in their states, having, on average, more G-level or higher deficiencies and more double Gs. For example, the Texas homes we reviewed had on average 1.3 times as many G-level or higher deficiencies as all other homes in the state, and the California homes we reviewed had on average 3 times as many as all other California nursing homes. CMP trends. Due in part to the closure of some poorly performing homes and the citation of fewer serious deficiencies, the homes we reviewed had fewer CMPs in fiscal years 2003 through 2005 than in the prior 3 fiscal years, but the amount paid was higher (see table 4). Among the homes, the number of implemented CMPs declined by about 42 percent from the first to the second time period. Although the number of CMPs among the homes we reviewed decreased, the amount of CMPs paid in Michigan more than doubled between the two time periods, accounting for much of the increase in the amount of CMPs paid across the two time periods (see app. III). States' preferences for either state or federal CMPs may in part affect their use. In Michigan, state officials are more likely to use federal CMPs and implement them in greater amounts than other states we reviewed. In contrast, the homes we reviewed in Pennsylvania had only one implemented CMP and paid no federal CMPs from fiscal years 2003 through 2005; however, during the same period, the Pennsylvania state survey agency implemented seven state CMPs and collected $12,050. A Pennsylvania state survey agency official said that the state prefers to use state sanctions because they can be implemented more quickly and are believed to be more effective than federal sanctions. The Texas state survey agency does not recommend more than one type of money penalty for the same deficiency and chooses either one of two state money penalties or a federal CMP. DPNA trends. The number of DPNAs declined by 42 percent from fiscal years 2000 through 2002 to fiscal years 2003 through 2005 for the homes we reviewed. Overall, the duration of the DPNAs decreased by 12 percent from the first to the second time period. The duration of DPNAs among the Texas homes we reviewed decreased the most—from an average of 46 days in the first time period to an average of 26 days in the second time period. The duration of DPNAs among the Michigan and Pennsylvania homes also decreased (see app. III). In California, however, the DPNAs were in effect longer in the second time period—from an average of 39 days in fiscal years 2000 through 2002 to an average of 63 days in fiscal years 2003 through 2005. Because a DPNA remains in effect until deficiencies are corrected, the longer durations indicate that homes in California were out of compliance for longer periods of time. Termination trends. Only two of the homes we reviewed closed involuntarily—that is, they were terminated for cause by CMS because of health and safety issues. One of the two homes has since been certified to participate in Medicare again.
An additional nine other homes closed voluntarily, although four reopened at some point during fiscal years 2000 through 2005. However, a home’s voluntary closure may not accurately reflect the degree to which the home had quality problems, such as a history of harming residents, that put the home at risk of involuntary termination. The reasons for closure, as recorded by CMS, are general and do not always reflect that homes may have histories of harming residents and may have been at risk of involuntary termination. The implementation rate of DPNAs and terminations declined for the homes we reviewed, while the implementation rate of CMPs increased across three time periods (see fig. 2). Some sanctions are never implemented because CMS rescinds them if homes correct deficiencies before the implementation date, a situation we noted in our 1999 report. Thus, sanctions may be considered more of a threat than a real consequence of noncompliance. We compared the implementation rates of CMPs, DPNAs, and terminations across three time periods: (1) July 1995 to October 1998, the time period covered in our March 1999 report; (2) fiscal years 2000 through 2002; and (3) fiscal years 2003 through 2005. From the first time period to the third, the implementation rate for DPNAs declined by about 20 percent and the implementation rate for terminations declined by about 97 percent. In contrast, across the same time periods, the overall implementation rate for CMPs increased from 32 percent in the first time period to 86 percent in the third time period, an almost threefold increase. The timing of this increase coincides with the January 2000 implementation of the immediate sanctions policy, suggesting that the increase may in part be related to the policy’s implementation. Among the homes we reviewed, CMS did not use the full range of its sanctions authority, generally imposing CMPs on the lower end of the allowable range. In addition, CMS imposes DPNAs and involuntary terminations when they are mandatory, but generally not when they are discretionary. Homes subject to such mandatory sanctions have more opportunities to escape sanctions prior to implementation. The median per instance CMP implemented was $2,000 in fiscal years 2000 through 2002 and $1,750 in fiscal years 2003 through 2005, although the maximum per instance CMP can be as high as $10,000. The median per day CMP implemented for nonimmediate jeopardy deficiencies was $500 in fiscal years 2000 through 2002 and $350 in fiscal years 2003 through 2005, significantly below the maximum of $3,000 per day. In cases in which homes were cited for immediate jeopardy and the maximum potential per day CMP is $10,000, the median per day CMP implemented was $3,050 in fiscal years 2000 through 2002 and $5,050 in fiscal years 2003 through 2005. According to one CMS official, the agency generally hesitates to impose CMPs that are higher than $200 per day, in part because of concerns that higher per day CMPs could bankrupt some homes. But the same official noted that the CMPs being imposed are not enough to “make nursing homes take notice” or to deter them from deficient practices. Another CMS official stated that some homes consider CMPs a part of the “cost of doing business” or as having no more effect than a “slap on the wrist.” Table 5 provides examples of homes we reviewed with implemented CMPs that were at the low end of the allowable CMP range. CMS is likely to impose DPNAs and terminations only when required to do so. 
However, CMS also has broad authority to impose DPNAs and terminations at its discretion, which can facilitate quicker implementation. Discretionary DPNAs and terminations can be implemented any time after a survey if the sanction is appropriate for the cited deficiencies and the required notice period is met. In contrast, the soonest that mandatory DPNAs and terminations for nonimmediate jeopardy can be implemented is 3 and 6 months, respectively, after the survey on which the deficiencies were cited. Despite the greater expediency of discretionary DPNAs, 64 percent of the DPNAs CMS imposed were mandatory for fiscal years 2000 through 2005 for the homes we reviewed. For example, CMS imposed a total of six DPNAs during fiscal years 2000 through 2003 on a Pennsylvania home with demonstrated compliance problems. Of those six DPNAs, the first five were mandatory DPNAs. Only the last DPNA— imposed after multiple years of repeated noncompliance at the G-level or higher—was a discretionary DPNA. Moreover, CMS imposed significantly more mandatory terminations than discretionary terminations; in fiscal years 2000 through 2005, 118 mandatory and 5 discretionary terminations were imposed on the homes we reviewed. None of the mandatory terminations were implemented, but 2 discretionary terminations were implemented—one each in Michigan and Texas. An official from the Texas state survey agency said that the CMS regional office in Dallas prefers to impose mandatory terminations, unless there is cause to believe there will be no improvements in the care provided by the nursing home. Mandatory terminations give homes 6 months to correct deficiencies before being implemented, as opposed to discretionary terminations, which can be implemented more quickly. Even when CMS imposes terminations, their deterrent effect is weakened because the agency sometimes extends the termination dates. For example, CMS extended the discretionary termination dates for up to 6 months for some of the Texas homes we reviewed if the nursing homes had lower-level deficiencies on subsequent surveys. The termination date imposed on one Texas nursing home we reviewed was extended three times in fiscal year 2001 from the original date of April 18 to June 26, then to July 26, and finally to September 26. The first extension occurred because the home corrected the deficiencies that caused immediate jeopardy cited during the first survey. Therefore, despite the fact that this home continued to be found out of compliance for deficiencies such as mistreatment or neglect of residents during subsequent surveys, CMS extended the termination date twice to give the home an additional opportunity to correct those deficiencies and achieve substantial compliance. The termination ultimately was rescinded because the home corrected the deficiencies, but the home was subsequently cited for eight G-level deficiencies such as inadequate treatment or prevention of pressure sores, employing convicted abusers, and poor accident supervision or prevention. In 2004, the home closed voluntarily. Despite changes in federal enforcement policy, almost half of the homes we reviewed—homes with prior serious quality problems—continued to cycle in and out of compliance, continuing to harm residents. These homes corrected deficiencies only temporarily and, despite having sanctions implemented, were again found to be out of compliance during subsequent surveys. 
Our analysis also showed that although about 40 percent of the homes were cited for double Gs during fiscal years 2000 through 2005, in some cases the double Gs did not result in immediate sanctions as required. In addition, the term "immediate sanctions policy" is misleading because the policy requires only that sanctions be imposed, that is, that homes be notified immediately of CMS's intent to implement sanctions, not that sanctions must be implemented immediately. Furthermore, when a sanction is implemented for a double G citation, there is a lag time between when the double G occurs and the sanction's effective date. CMS cited double Gs multiple times at several of the homes we reviewed, suggesting that immediate sanctions did not deter future noncompliance as intended. Termination of homes is infrequent, in part because of concerns such as local access to other nursing facilities and the effect on residents if they are moved, and in part because CMS allows some problem homes to continue operating until the homes eventually close voluntarily. Consistent with our earlier work, our current analysis showed that sanctions appear to have induced homes to correct deficiencies only temporarily because surveyors found that many of the homes we reviewed with implemented sanctions were again out of compliance on subsequent surveys. Commenting on this phenomenon, state survey agency officials said that improvements resulting from sanctions might last about 6 months. From fiscal years 2000 through 2005, 31 of the 63 homes we reviewed (about 49 percent) cycled in and out of compliance more than once, harming residents, even after sanctions had been implemented, including 8 homes that did so seven times or more (see fig. 3). Each of the 31 homes that cycled in and out of compliance more than once during the period we reviewed had at least one G-level or higher deficiency in at least one period of noncompliance; 19 had at least one G-level or higher deficiency in every noncompliance period. Table 6 shows the number and length of noncompliance periods for a Michigan home we reviewed that cycled in and out of compliance nine times from fiscal years 2000 through 2005; the home remained open as of November 2006. Appendix IV provides similar examples for homes in California, Pennsylvania, and Texas. Homes often corrected deficiencies only temporarily, despite receiving sanctions: once the homes we reviewed corrected deficiencies, they maintained compliance for a median of 133 days and then cycled out of compliance again. Some homes cycled out of compliance more quickly—homes were again out of compliance in 30 days or less about 8 percent of the time and within 60 days about 28 percent of the time. Despite the large number of G-level or higher deficiencies cited for the homes we reviewed, relatively few of these homes were cited for double Gs, and some double G citations did not result in sanctions. Over the 6-year period, 27 of the homes we reviewed had 69 double Gs. However, 47 of the homes had 444 G-level or higher deficiencies. We found no record that CMS imposed a sanction for 15 of the 69 double Gs, but the data did show that CMS implemented sanctions for the remaining double G cases. Across the four states we reviewed, there was variation in the citation of G-level or higher deficiencies and the implementation of immediate sanctions.
For example, from fiscal years 2000 through 2005, 35 percent of G-level or higher deficiencies and 52 percent of double Gs among the homes we reviewed were cited in Michigan, while 9 percent of the G-level or higher deficiencies and 4 percent of the double Gs were cited in homes in California. In California, complaints typically are investigated under state licensure authority and the findings generally are not recorded in the same manner as deficiencies cited under the federal process, which may contribute to lower double G citation rates in the state. Thus, California homes are not cited for a double G when the subsequent deficiency equivalent to a G-level or higher deficiency was found during a complaint investigation. Complaint surveys with G-level or higher deficiencies often lead to double Gs. One CMS official stated that if complaints against California nursing homes were investigated under the federal complaint investigation procedure, more double Gs would be cited in California. The California Department of Health Services conducted a pilot to test the use of the federal complaint procedure in select district offices, in part because of the low double G citation rate. As of November 2006, the department decided not to expand or complete a formal evaluation of the pilot; instead, the department is focusing on eliminating its backlog of complaints and initiating complaint investigations within required time frames. Although referred to as the “immediate sanctions” policy, the term is misleading because (1) there is a lag between when the double G is cited and when the sanction is implemented, negating the sanction’s immediacy; (2) the policy only requires that sanctions be imposed immediately, which does not guarantee that the sanction will be implemented; and (3) homes may not actually pay a CMP, the most frequently implemented sanction, until years after citation of the double G because payment is suspended until after appeals have been adjudicated. Delays in implementing DPNAs and in collecting CMPs—which diminish their immediacy—coupled with their nominal amounts may undermine their deterrent effect. Immediate sanctions often are not immediate because there is a lag time between the identification of deficiencies during the survey and when a sanction (i.e., a CMP or DPNA) is actually implemented. CMS implemented about 68 percent of the DPNAs for double Gs among the homes we reviewed during fiscal years 2000 through 2005 more than 30 days after the survey (see app. V). In contrast, CMPs can go into effect as early as the first day the home was out of compliance, even if that date is prior to the survey date, because, unlike DPNAs, CMPs do not require a notice period. About 98 percent of CMPs imposed for double Gs took effect on or before the survey date. Figure 4 illustrates the lag time that can occur between the survey date and the implementation date of the sanction, especially with regard to DPNAs. For example, in fiscal years 2000 through 2005, 60 percent of the DPNAs in the homes we reviewed were implemented 31 to 60 days from the date of the survey citing deficiencies. In contrast, nearly all CMPs were implemented on or before the survey date. While the immediate sanctions policy requires that sanctions be imposed immediately, it is silent on how quickly sanctions should be implemented. A sanction is considered imposed when a home is notified of CMS’s intent to implement a sanction—15 days from the date of the notice. 
If during the 15-day notice period the nursing home corrects the deficiencies, no sanction is implemented. Thus, even under the immediate sanctions policy, which is intended to eliminate grace periods for nursing homes repeatedly cited for deficiencies at the actual harm level or higher, nursing homes have a de facto grace period. While CMPs can be implemented closer to the date of survey than DPNAs, the immediacy and the effect of CMPs may be diminished by (1) the significant time that can pass between the citation of deficiencies on a survey and the home’s payment of the CMP and (2) the low amounts imposed, as described earlier in this report. By statute, payment of CMPs is delayed until appeals are exhausted. For example, a Michigan home did not pay its CMP of $21,600 until more than 2 years after a February 2003 survey had cited a G-level deficiency. (See fig. 5.) The February G-level citation was a repeat deficiency: less than a month earlier, the home had received another G-level deficiency in the same quality of care area. The delay in collecting the fine in this case is consistent with a 2005 report from the Office of Inspector General of the Department of Health and Human Services that found that the collection of CMPs in appealed cases takes an average of 420 days—a 110 percent increase in time over nonappealed cases—and “consequently, nursing homes are insulated from the repercussions of enforcement by well over a year.” Unlike the Social Security Act, the federal Surface Mining Control and Reclamation Act of 1977 provides for the collection of CMPs prior to exhaustion of administrative appeals. Under this statute, mining operators charged with civil money penalties have 30 days to either pay the penalty in full or forward the proposed amount for placement in an escrow account pending resolution of appeals. This provision, requiring escrow deposit of a proposed penalty assessment, has been upheld by three federal circuit courts of appeal, all citing the various procedural safeguards as helping to ensure sufficient due process to affected operators. For example, these courts cited the availability of an informal conference at which mining operators may present information relevant to an assessment of a penalty. It is unclear whether the informal dispute resolution process available to nursing homes would provide due process similar to that provided under the Federal Mining statute. Nonetheless, the Social Security Act would preclude a more expeditious collection of nursing home CMPs. Despite the potentially negative consequences, CMS’s implementation of the immediate sanctions policy does not appear to deter homes from harming residents in the future. Two-thirds (18) of the 27 nursing homes cited for double Gs that subsequently had sanctions implemented went on to be cited again for one or more additional double Gs. (See fig. 6.) Nursing homes, even those that repeatedly harm residents, are infrequently terminated because of CMS’s concerns about access to other sources of nursing care and the impact of moving residents. Of the homes we reviewed, two were terminated involuntarily for cause. Another nine homes closed voluntarily, which is not a sanction because the homes chose to close. However, the actual reason for closure is not always clear; a home may close to avoid involuntary termination because of quality problems cited by state surveyors. 
Allowing a problem home to close voluntarily rather than terminating it may result in continuing harm to residents until the home decides to close. For example, two homes we reviewed in Pennsylvania and Texas closed voluntarily, but the histories of both homes show that they were repeatedly cited for harming residents from fiscal year 2000 through the time of their closures, over 4 years later in January 2004. The Pennsylvania home cycled in and out of compliance 4 times during the period we reviewed and had noncompliance periods lasting an average of 170 days. The Texas home cycled in and out of compliance 10 times during the period reviewed and had average noncompliance periods of 46 days. On average, both homes had about 6 G-level or higher deficiencies per year in areas such as inadequate treatment or prevention of pressure sores and resident abuse. The home in Pennsylvania had an average of 31 other deficiencies per year and the Texas home had an average of 27. Four homes we reviewed had similar deficiency histories. Two closed voluntarily and two remained open as of November 2006 (see table 7). Although the homes that remained open met the deadline to correct deficiencies before the termination would have been implemented, a home's ability to correct deficiencies in a specified period of time may not be the strongest criterion for determining whether a home should remain open, because correcting deficiencies does not ensure that the home will improve residents' quality of care and does not prevent the home from again falling out of compliance. For example, the California and Michigan homes in table 7 were still operating as of November 2006 but cycled in and out of compliance four and seven times, respectively. According to CMS and state officials, factors that may prevent or delay termination of problem nursing homes include (1) concerns regarding lack of access to alternate local nursing facilities, (2) the potential for resident trauma as a result of transfer to another home, (3) the preference of residents' families for homes located close by, and (4) pressure from families and other stakeholders to keep homes open. Our analysis of alternatives to the 4 poorly performing homes in table 7—those that closed voluntarily or are still open—showed that there were from 2 to 37 homes within 10 miles of these homes, and from 5 to 120 homes within 25 miles. While the goal of enforcement is to help ensure nursing home compliance with federal quality requirements, CMS's management of the process is hampered by the complexity of its immediate sanctions policy and by its fragmented and incomplete data systems. The agency's immediate sanctions policy, intended to deter repeat noncompliance, fails to hold some homes accountable for repeatedly harming residents. In addition, although CMS has developed a new data system, the system's components are not integrated and the national reporting capabilities are not complete, hampering the agency's ability to track and monitor enforcement. Finally, CMS has taken some steps intended to improve enforcement of nursing home quality requirements, such as developing guidance to help ensure greater consistency across states in CMP amounts, revising its Special Focus Facility program, and commissioning two studies to examine the effectiveness of nursing home enforcement. It is not clear, however, to what extent—or when—these initiatives will address the enforcement weaknesses we found.
The double G immediate sanctions policy is complex and fails to hold some homes accountable. In 2003, we reported that the early implementation of the policy was flawed. We found that between January 2000 and March 2002 over 700 cases that should have been referred for immediate sanctions were not referred because (1) the policy was misunderstood by some states and regional offices, (2) states lacked adequate systems for identifying deficiencies that triggered an immediate sanction, and (3) actions of two of the four states were at variance with CMS policy. CMS developed an on-line reporting tool for use by survey agency and regional office staff to automate the identification of double Gs. CMS also offered training sessions and issued additional guidance to state survey agencies and regional offices. While the on-line reporting tool and training were useful, they did not address the underlying complexity of the policy. For example, CMS staff told us that in developing the tool they had initially misinterpreted the double G immediate sanctions policy. As a result, the tool produced many false positives: that is, it identified deficiencies as triggering an immediate sanction when in fact they did not. Moreover, a December 2005 report by the Office of the Inspector General of the Department of Health and Human Services found that state survey agency staff continued to have difficulty identifying double G cases. Furthermore, our analysis of CMS's application of the policy to the homes we reviewed demonstrated that the policy's complex rules allowed homes to escape immediate sanctions even if they repeatedly harmed residents; these rules include (1) the requirement for an intervening period of compliance, (2) the clearing effect of standard surveys, and (3) the lack of differentiation between single and multiple instances of harm. Such rules may in part explain why the homes we reviewed had only 69 double Gs (the trigger for immediate sanctions) over a 6-year period, despite being cited 444 times for deficiencies that harmed residents. Intervening period of compliance. G-level or higher deficiencies only count toward a double G immediate sanction if the home has an intervening period of compliance between the two G-level or higher deficiencies. In order to receive an immediate sanction, a home has to achieve substantial compliance between the pair of surveys on which the G-level or higher deficiencies are cited. As a result of this rule, homes that do not correct deficiencies escape immediate sanctions, while homes that do correct them can receive immediate sanctions. CMS officials stated that the intent of the policy as written was to give nursing homes a chance to correct deficiencies and achieve a period of compliance. Without this provision, CMS officials believe that homes could get caught in endless double G cycles. The following example illustrates how the policy allows nursing homes to escape immediate sanctions if they do not correct deficiencies and have ongoing noncompliance periods. In a 9-month time period, a Pennsylvania home had seven surveys, each with at least one G-level deficiency (a total of 19 G-level deficiencies). However, double G immediate sanctions were triggered by only two pairs of surveys because the home had failed to correct some deficiencies before the next survey that again found actual harm. Figure 7 illustrates how some pairs of surveys with G-level deficiencies do not count as a double G because of the intervening period of compliance rule.
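Before turning to the figure 7 example, the first two of these rules can be summarized in a simplified sketch. This is our paraphrase for illustration only, not CMS's implementation or its on-line reporting tool; it omits, among other things, the treatment of multiple instances of harm, and the survey types in the example are assumed.

# Simplified, illustrative paraphrase of the double G determination rules
# described in this report; not CMS's implementation.
from dataclasses import dataclass

@dataclass
class Survey:
    kind: str                   # "standard" or "complaint"
    g_or_higher: bool           # at least one G-level or higher deficiency was cited
    compliance_regained: bool   # home returned to substantial compliance after this survey

def completes_double_g(history):
    """Return True if the most recent survey in 'history' completes a double G,
    modeling the intervening period of compliance rule and the clearing effect
    of standard surveys."""
    if not history or not history[-1].g_or_higher:
        return False
    prior_harm_followed_by_compliance = False
    for survey in history[:-1]:
        if survey.kind == "standard" and not survey.g_or_higher:
            # A standard survey without a G-level or higher deficiency
            # clears the home's record for double G purposes.
            prior_harm_followed_by_compliance = False
        elif survey.g_or_higher:
            # An earlier harm-level finding counts only if the home then
            # returned to substantial compliance (the intervening period rule).
            prior_harm_followed_by_compliance = survey.compliance_regained
    return prior_harm_followed_by_compliance

# Pattern from the Pennsylvania example: back-to-back G-level findings with no
# intervening compliance do not complete a double G; a later G-level finding
# after the home regained compliance does.
march = Survey("complaint", g_or_higher=True, compliance_regained=False)
april = Survey("complaint", g_or_higher=True, compliance_regained=True)
july = Survey("complaint", g_or_higher=True, compliance_regained=False)
print(completes_double_g([march, april]))        # False
print(completes_double_g([march, april, july]))  # True

As the sketch and the examples that follow illustrate, a home that never returns to substantial compliance between harm-level findings never completes a double G under these rules.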
For example, both the March and April surveys cited G-level deficiencies. However, the pair of surveys did not result in a double G, which would have triggered immediate sanctions because the home did not correct the G-level deficiency cited on the March survey before the next G-level deficiency was cited in April. Following the April survey, the home corrected the deficiencies, resulting in a period of compliance. In July, another survey found a new G-level deficiency. Because of the intervening period of compliance, the March and July surveys resulted in a double G, for which immediate sanctions would have been warranted. Clearing effect of standard surveys. Under the double G immediate sanctions policy, a standard survey without a G-level or higher deficiency “clears the home’s record” for the purposes of determining whether a double G occurred. As a result of this rule, surveys with G-level or higher deficiencies that occurred before the standard survey without a G-level or higher deficiency are not considered in determining whether a double G should be cited and an immediate sanction should be imposed. CMS officials believe that it is appropriate for standard surveys without G-level or higher deficiencies to clear the home’s record for double G purposes because standard surveys are comprehensive and occur regularly. Yet, we have previously reported that weaknesses in the survey process result in surveyors’ missing serious deficiencies on standard surveys. Moreover, variability among states in the citation of serious deficiencies suggests that some states may not be citing deficiencies at the appropriate scope and severity (see app. II). For example, according to California officials, the guidance the state received from the CMS regional office created confusion as to what constituted actual harm, and this confusion contributed to the decline in citations of serious deficiencies in California. The regional office clarified its guidance in late 2004. The following example illustrates how a standard survey without G-level or higher deficiencies affects double G determinations and how having uncorrected deficiencies can prevent a home from receiving an immediate sanction. In approximately a 12-month period, a Michigan home had five surveys, four of which had one G-level deficiency. However, the G-level deficiencies triggered double G immediate sanctions only once instead of three times because in one instance a standard survey cited no G-level deficiencies and in the other there was no intervening period of compliance. Figure 8 illustrates how some pairs of surveys with G-level deficiencies do not count as double Gs because of the clearing effect of standard surveys. For example, state surveyors found a G-level deficiency during a January 2000 complaint survey. However, on the home’s standard survey a month later (February 2000), no G-level or higher deficiencies were found by surveyors. As a result, when surveyors found another G-level deficiency on a complaint survey several months later (November 2000), the G-level deficiency on the home’s January survey was not considered, and no immediate sanctions were triggered. The pair of surveys in January 2000 and November 2000 did not trigger immediate sanctions because, in effect, the February 2000 standard survey cleared the home’s record. Multiple instances of harm. 
Multiple G-level or higher deficiencies identified on a survey that results in an immediate sanction are sometimes treated the same, in terms of enforcement, as a single instance of harm or immediate jeopardy cited on a survey. We examined the sanctions imposed for a single versus multiple instances of harm and found that the sanctions can be quite similar, despite the significant differences in the number of deficiencies. The following example involves two surveys of a Michigan home with a history of repeated noncompliance. On a survey with only 1 G-level deficiency, CMS implemented a $350 per day CMP and a discretionary DPNA. On a different survey with 33 D-level or higher deficiencies and 6 G-level or higher deficiencies, CMS implemented a $200 per day CMP and a discretionary DPNA. We found similar examples among other homes we reviewed. We discussed our concerns with CMS about how the double G immediate sanctions policy allows some homes to avoid immediate sanctions. CMS officials stated that regardless of the policy, state and regional office officials retain the discretion to impose immediate sanctions even when not required by the policy. However, based on a discussion with CMS officials, we believe that, instead of imposing sanctions of appropriate severity, state and regional office officials may impose weaker sanctions for problem homes that have escaped immediate sanctions because of the complexities of the policy. CMS agreed that this could happen. Fragmented data systems and incomplete national reporting capabilities continue to hamper CMS’s ability to track and monitor enforcement. In March 1999, we reported that CMS lacked a system for effectively integrating enforcement data nationwide and that the lack of such a system weakened oversight. Since 1999, CMS has made progress in developing an enforcement data collection system called the ASPEN Enforcement Manager (AEM). However, while AEM collects valuable data from the states and regions, it is not fully integrated with other CMS systems used to track nursing home survey and enforcement activities. For example, when regional and state survey officials want to evaluate complaint and enforcement data, they must access one system for complaint data and then access another system, AEM, for enforcement data. Because there is no direct interface between the two systems, CMS and states must rely on fragmented data systems for tracking and monitoring enforcement. Furthermore, CMS officials told us that the agency does not have a concrete plan to use the enforcement data to improve monitoring and oversight but that some national enforcement reports are under development. From 2000 to 2004, CMS tracked sanctions with LTC, a data system developed in the Chicago region that became operational in all 10 CMS regions in 2000. LTC was a relatively simple system designed to collect sanctions data, automatically generate sanction imposition letters, and automatically calculate the 35 percent reduction in CMPs for homes that waive the right to appeal deficiencies. LTC was not always useful for enforcement oversight because it was sometimes incomplete. Data entry into the LTC system was optional, and many regional and state surveyors continued to rely on their own, state-specific tracking systems. Moreover, during the time LTC was in use, states and regions were expected to continue updating the enforcement component of OSCAR, which duplicated some of the information in LTC. This required separate manual data entry into both LTC and OSCAR. 
We were told by regional office officials that sometimes only one of the files would be updated. Furthermore, LTC had no internal quality control checks for ensuring all fields were completed or that the data were accurate; in its design of LTC, CMS chose flexibility in modifying the data to accommodate special circumstances over a more rigid field edits system that would have controlled the data more tightly. Since October 1, 2004, CMS has used AEM to collect state and regional data on sanctions and improve communications between state survey agencies and CMS regional offices. Specifically, AEM was designed to provide real-time entry and tracking of sanctions, issue monitoring alerts, generate enforcement letters, and facilitate analysis of enforcement patterns. CMS expects that the data collected in AEM will enable states, CMS regional offices, and the CMS central office to more easily track and evaluate sanctions against nursing homes as well as respond to emerging issues. Developed by CMS’s central office primarily for use by states and regions, AEM is one module of a broader data collection system called ASPEN. There are a number of other modules under the ASPEN umbrella, including the ASPEN Complaints/Incidents Tracking System (ACTS) module. The ASPEN modules—and other data systems related to enforcement such as the financial management system for tracking CMP collections—are fragmented and lack automated interfaces with each other. As a result, enforcement officials must pull discrete bits of data from the various systems and manually combine the data to develop a full enforcement picture. For example, if regional office officials want to review a home’s complaint history, they must access ACTS to print a report on complaints, access AEM to print a report on corresponding sanctions, manually compare the two reports, and then access the CMP tracking system to determine whether a corresponding CMP was paid. Each step adds to staff workload. AEM collects potentially useful enforcement data from the states and regions, but, as described, CMS has not integrated AEM with the other data collection systems (e.g., ACTS); furthermore, the agency has not defined a plan for using the AEM data to inform the tracking and monitoring of enforcement through national enforcement reports. In a December 2004 CMS report, the agency stated that AEM “will permit meaningful comparisons of like measures and will serve as a primary tool on which to base policy decisions, new initiatives and strategies for improving care to our Nation’s nursing home population.” While CMS is developing a few draft national enforcement reports, it has not developed a concrete plan and timeline for producing a full set of reports that use the AEM data to help in assessing the effectiveness of sanctions and its enforcement policies. In addition, while the full complement of enforcement data recorded by the states and regional offices in AEM is now being uploaded to CMS’s national system, CMS does not intend to upload any historical data. Efforts to track and monitor enforcement would be greatly enhanced by reports that contain the historical data; for example, with historical data the agency could generate reports that provide a longitudinal perspective of a home’s compliance history, compare trends across states and regions, and, overall, help evaluate the effectiveness of sanctions and policies. Finally, like LTC, AEM has quality control weaknesses. 
While AEM has some automatic quality control mechanisms to ensure that the data entered are complete and in a valid format, there are no systematic quality control mechanisms to ensure that the data entered are accurate. For example, while the system automatically requires the entry of valid survey dates, CMS does not conduct periodic data audits to check that the survey dates are correct. CMS officials told us they will continue to develop and implement enhancements to AEM to expand its capabilities over the next several years. However, until CMS develops a plan for integrating the fragmented systems and for using AEM data—along with other data the agency collects—efficient and effective tracking and monitoring of enforcement will continue to be hampered and, as a result, CMS will have difficulty assessing the effectiveness of sanctions and its enforcement policies. In addition to its efforts to implement a new data system for managing enforcement, CMS has taken other steps to improve its enforcement of nursing home quality requirements. For example, the agency has developed guidance to help ensure greater consistency across states in CMP amounts imposed, revised its Special Focus Facility program, and commissioned two studies to examine the effectiveness of nursing home enforcement. To ensure greater consistency in CMP amounts proposed by states and imposed by regions, CMS, in conjunction with state survey agencies, developed a grid that provides guidance for states and regions. The CMP grid lists ranges for minimum CMP amounts while allowing for flexibility to adjust the penalties on the basis of factors such as the deficiency’s scope and severity, the care areas where the deficiency was cited, and a home’s past history of noncompliance. In August 2006, CMS completed the regional office pilot of its CMP grid. The results of the pilot, which are currently being analyzed, will be used to determine how the grid should be used by states; its use would be optional to provide states flexibility to tailor sanctions to specific circumstances. CMS revised its Special Focus Facility program, an initiative intended to increase the oversight of homes with a history of providing poor care. We had previously reported that the program was worthwhile but that its narrow scope excluded many homes that provide poor care. Moreover, according to CMS, the goal of two surveys per home per year was never achieved because of the relatively low priority assigned to the program and the lack of state survey agency resources. In December 2004, CMS announced three changes in the operation of the program. First, CMS expanded the scope of the program from about 100 homes nationwide to about 135 homes by making the number of Special Focus Facilities in each state proportional to the number of nursing homes. Second, CMS revised the method for selecting nursing homes by reviewing 3 years’ rather than 1 year’s worth of deficiency data. This change was intended to ensure that the homes in the program had a history of noncompliance rather than a single episode of noncompliance. Third, CMS strengthened its enforcement for Special Focus Facilities by requiring immediate sanctions for homes that failed to significantly improve their performance from one survey to the next and by requiring termination for homes with no significant improvement after three surveys over an 18-month period. Despite these changes, however, many homes that could benefit from enhanced oversight and enforcement are still excluded from the program. 
As noted earlier, few of the homes we reviewed were or are part of CMS’s Special Focus Facilities program. In 2005, only 2 were designated Special Focus Facilities and in 2006, the number increased to 4. Of the 8 homes that cycled in and out of compliance seven or more times (see fig. 3), 6 are still open but only 1 is now a Special Focus Facility. Although CMS now requires QIOs to work with poorly performing nursing homes, this initiative also only targets a small number of homes—as few as 1 to 3 facilities in each state. To enhance its understanding of and ability to improve the enforcement process, CMS has funded two studies that will examine the steps that lead to sanctions as well as the impact of enforcement on homes’ quality-of- care processes. Qualitative Enforcement Case Studies. This study, which began in the spring of 2003 and is scheduled to be completed in early 2007, required research nurses to visit 25 nursing homes in four states to evaluate how the survey and enforcement processes are carried out and assess the extent to which the enforcement process results in changes in nursing staff behavior and improved compliance with federal requirements. Impact of Sanctions on Quality. The objective of this study is to test the effects of sanctions on facility behavior and resident outcomes. Researchers will identify and compare a group of nursing homes that had both deficiencies and sanctions to a group of nursing homes that had similar levels of deficiencies but no sanctions. A year later, researchers will review the nursing home’s subsequent survey to determine whether the sanctions resulted in any significant changes in the quality of care delivered. The study began in the fall of 2004 and the first report is scheduled to be completed by mid-2007. Although CMS has taken several steps to improve its enforcement of nursing home requirements, its Nursing Home Compare Web site does not include information on sanctions. Thus, CMS does not indicate what sanctions have been implemented against nursing homes, nor does it identify homes that have received immediate sanctions for repeatedly harming residents. As noted throughout this report, we found variation among the states we reviewed in areas such as the number and amount of CMPs implemented and the proportion of homes with double Gs. In general, these differences reflect the state survey agencies’ views on the effectiveness of certain sanctions and differences in state enforcement policies. For example, Pennsylvania state officials prefer state rather than federal sanctions because they believe the former are more effective, have a greater deterrent effect on providers, and are easier and quicker to impose. Pennsylvania requires homes to pay a state CMP prior to appeal, even if the home appeals the deficiency. In contrast, homes need not pay a federal CMP until after an appeal is resolved. Pennsylvania rarely implemented federal CMPs on the 14 state homes whose compliance history we reviewed, preferring to use state sanctions instead. In Michigan, state officials are more likely to use federal CMPs and implement them in greater amounts than other states we reviewed. Texas state officials often use state rather than federal sanctions for G-level or higher deficiencies, in part because they cannot propose a federal CMP if they impose a state sanction and because the total state money penalty that may be imposed may be higher than federal CMPs. California had fewer sanctions than Michigan. 
California typically investigates complaints under its state licensure authority, which may partly explain why California has fewer reported deficiencies and federal sanctions. We believe it is important for CMS to explore the differences in state enforcement approaches and policies so that it can both identify problem areas and identify best practices that could be disseminated nationwide. Although CMS has taken steps to strengthen the nursing home enforcement process, our review of 63 homes in four states with a history of quality problems identified design weaknesses as well as flaws in the way sanctions are implemented that diminish their full deterrent effect. Some of these homes repeatedly harmed residents over a 6-year period and yet remain in the Medicare and Medicaid programs. Until these systemic weaknesses are addressed, the effectiveness of sanctions in encouraging homes to return to and maintain compliance will remain questionable and the safety and security of vulnerable residents will remain at risk. CMS’s immediate sanctions policy fails to hold homes with a long history of harming residents accountable for the poor care provided. The policy’s complexity, such as the requirement for an intervening period of compliance, prevents its use for the very homes it was designed to address—those with systemic quality problems. Furthermore, the immediate sanctions label is misleading because sanctions are not, in fact, immediate. The notice period required by CMS regulations for sanctions such as DPNAs and terminations provides homes with a de facto grace period during which they can correct deficiencies to avoid an immediate sanction. Moreover, in one state we reviewed, the immediate sanctions policy does not fully identify all homes with repeat serious deficiencies because most complaint deficiencies, which can often trigger a double G, were being cited under state licensure authority, not federal. Consequently, some problem homes in the state were not identified by the policy and thus were able to avoid double G immediate sanctions. Although CMPs and DPNAs were the most frequently used sanctions nationwide and for the homes we reviewed, their effectiveness was undermined by a number of weaknesses. The CMPs levied against the homes we reviewed were often nominal, significantly less than the maximum amounts Congress provided for in statute. To strengthen CMPs, CMS has been developing a CMP grid since 2004 to guide states and regional offices in determining appropriate CMP amounts, and CMS regional offices piloted the grid in 2006. However, its implementation is expected to be optional for states, once again contributing to interstate variation. Despite the nominal amounts, CMPs, unlike DPNAs, do not require a notice period and may be imposed retroactively before the date of the survey. However, these advantages are countered by the fact that, under the Social Security Act, payment by homes of federally imposed CMPs is deferred if they appeal their deficiencies, a process that can take years, diminishing the immediacy of the sanction and further undermining the sanction’s deterrent effect. While there is precedent under the federal surface mining statute, which permits the collection of CMPs before exhaustion of appeals, it is unclear if the informal dispute resolution process available to nursing homes provides the same type of procedural safeguards that courts have pointed to in upholding the mining statute provision. 
Some states choose to use their own authority to impose state fines, which can sometimes be implemented faster than is possible under federal law. Although CMS has the authority to implement discretionary DPNAs after a 15-day notice period, it generally did not do so for the homes we reviewed. It imposes mandatory DPNAs when criteria are met, which provide homes a 3-month de facto grace period to correct deficiencies. Because many homes we reviewed returned to compliance within 3 months—though often only temporarily—the DPNAs frequently were rescinded. Termination—the most powerful enforcement tool—was used infrequently nationwide and for the homes we reviewed because of states' and CMS's concerns about potential access to care and resident transfer trauma. However, we found that some poorly performing homes are located in areas with several other nearby nursing homes. Even though some homes we reviewed cycled in and out of compliance numerous times while continuing to harm residents, CMS allowed them to determine for themselves whether and when to leave the Medicare and Medicaid programs. Even when terminations were imposed, their deterrent effect was undermined by extending some termination dates to give the homes more time to correct deficiencies. CMS's earlier termination of such troubled homes could have cut short the cycle of poor care. CMS's revamped Special Focus Facility program would provide for termination of poorly performing homes within 18 months if they fail to show significant improvement in the quality of care provided to residents. Despite the expansion of the program from about 100 to about 135 homes, the number of Special Focus Facilities is inadequate because, as our work has demonstrated, the program still fails to include many homes with a history of repeatedly harming residents. Although CMS has made progress in establishing a database to help it track and monitor the nursing home enforcement process, the development of AEM is not yet complete. AEM is not integrated with other important databases to help ensure that CMS has a comprehensive picture of a home's deficiency history, and CMS has not developed a concrete plan for using national enforcement reports—built off of AEM data—to help evaluate the effectiveness of sanctions and its enforcement policies. Having longitudinal enforcement data available for homes would enable CMS to pursue increasing the severity of sanctions for homes that repeatedly harm residents. Furthermore, CMS has not developed a system of quality checks to ensure the accuracy and integrity of AEM data. CMS's Nursing Home Compare Web site has been modified a number of times to add important quality information about nursing homes. While CMS now summarizes the results from both standard surveys and complaint investigations, the Web site contains no information about sanctions implemented against nursing homes, nor does it identify homes that have received immediate sanctions for repeatedly harming residents. Such information could be valuable to consumers who use the Web site to help choose a home for family members or friends.
To address weaknesses that undermine the effectiveness of the immediate sanctions policy, we recommend that the Administrator of CMS reassess and revise the policy to ensure that it accomplishes the following three objectives: (1) reduce the lag time between citation of a double G and the implementation of a sanction, (2) prevent nursing homes that repeatedly harm residents or place them in immediate jeopardy from escaping sanctions, and (3) hold states accountable for reporting in federal data systems serious deficiencies identified during complaint investigations so that all complaint findings are considered in determining when immediate sanctions are warranted. To strengthen the deterrent effect of available sanctions and to ensure that sanctions are used to their fullest potential, we recommend that the Administrator of CMS take the following three actions: Ensure the consistency of CMPs by issuing guidance such as the standardized CMP grid piloted during 2006. Increase use of discretionary DPNAs to help ensure the speedier implementation of appropriate sanctions. Strengthen the criteria for terminating homes with a history of serious, repeated noncompliance by limiting the extension of termination dates, increasing the use of discretionary terminations, and exploring alternative thresholds for termination, such as the cumulative duration of noncompliance. To collect CMPs more expeditiously, which could increase their deterrent effect, we recommend that the Administrator of CMS develop an administrative process under which CMPs would be paid—or Medicare and Medicaid payments in equivalent amounts would be withheld—prior to exhaustion of appeals and seek legislation for the implementation of this process, as appropriate. Payments could be refunded with interest if the deficiencies are modified or overturned at appeal. To strengthen sanctions for homes with a history of noncompliance, such as a large number of deficiencies or a large number of actual harm and immediate jeopardy deficiencies, we recommend that the Administrator of CMS consider further expanding the Special Focus Facility program with its enhanced enforcement requirements to include all homes that meet a threshold, established by CMS, to qualify as poorly performing homes. To improve the effectiveness of its new enforcement data system, we recommend that the Administrator of CMS take the following three actions: Develop the enforcement-related data systems’ abilities to interface with each other in order to improve the tracking and monitoring of enforcement, such as by developing an automatic interface between systems such as AEM and ACTS. Expedite the development of national enforcement reports, including longitudinal and trend reports designed to evaluate the effectiveness of sanctions and enforcement policies, and a concrete plan for using the reports. Develop and institute a system of quality checks to ensure the accuracy and integrity of AEM data, such as periodic data audits conducted as part of CMS’s annual state performance reviews. To improve public information available to consumers that helps them assess the quality of nursing home care, we recommend that the Administrator of CMS expand CMS’s Nursing Home Compare Web site to include implemented sanctions, such as the amount of CMPs and the duration of DPNAs, and homes subjected to immediate sanctions. We obtained written comments on our draft report from CMS and three of the four states in which the homes we studied were located—California, Michigan, and Texas. 
We also received e-mail comments from the Director of the Division of Nursing Care Facilities in Pennsylvania. CMS’s comments are reproduced in appendix VI. California’s, Michigan’s and Texas’s comments are reproduced in appendixes VII, VIII, and IX, respectively. CMS generally concurred with our 12 recommendations in six areas intended to strengthen the enforcement process but did not always specify how it would implement the recommendations. In addition, CMS noted that implementation of 3 of our recommendations raised resource issues and that others required additional research. California concurred with our conclusions and recommendations, while Michigan and Pennsylvania indicated appreciation or general agreement. However, most state comments, including Texas’s, were technical in nature. Our evaluation responds to CMS and state comments in the six areas covered by our recommendations. Addressing weaknesses in the double G immediate sanctions policy. CMS agreed that homes that repeatedly harm residents should not escape immediate sanctions and stated that it would remove the limitation on applying an additional sanction when a home failed to correct a deficiency that gave rise to a prior sanction. CMS also agreed to reduce the lag time between citation and implementation of a double G immediate sanction by limiting the prospective effective date for DPNAs to no more than 30 to 60 days. Reducing the lag time as much as possible is critical because it provides homes with a de facto grace period in which to correct deficiencies and avoid sanctions. Michigan commented about the need to increase the immediacy of DPNAs, noting that even the 15-day notice period associated with discretionary DPNAs was outdated now that homes are notified electronically and delivery can be verified. Currently, CMS has an incomplete picture of serious deficiencies cited against homes that could result in immediate sanctions because California investigates many nursing home complaints under state licensure authority. CMS agreed to collect additional information on complaints for which data are not reported in federal data systems. We believe that CMS’s commitment to do this will help better identify and deal with consistently poorly performing homes. CMS commented that the Social Security Act does not provide authority for CMS to require states to report enforcement actions taken under state-only authority if federal resources are not used for the complaint investigation; however, to the extent that federal funds are used for complaint investigations, our findings and recommendations remain valid. Michigan concurred that CMS needs the complete compliance history of a facility to assess its overall performance. CMS acknowledged that the complexity of its immediate sanctions policy may be an inherent limitation and indicated that it intends to either strengthen the policy or replace it with a policy that achieves similar goals through alternative methods. CMS noted that it is concerned about whether the immediate sanctions policy has negatively affected the rates of state deficiency citations and may ultimately be ineffective with the most problematic facilities. We believe the policy has merit but that its complex requirements have prevented many homes from receiving immediate sanctions. Strengthening the deterrent effect of sanctions. CMS agreed to issue a CMP analytic tool, or grid, and to provide states with further guidance on discretionary DPNAs and terminations. 
The CMP grid is a tool to help ensure national consistency in CMPs and to assist CMS regional offices in monitoring enforcement actions. Texas commented that it had been using the grid since June 2006 and found it to be very helpful. Michigan noted that it had independently developed and implemented a CMP grid in 2000 but expressed disappointment that CMS had not mandated state use of the agency's grid. In addition, Michigan supported the need for additional CMS guidance on the use of discretionary termination. Such guidance, it commented, was necessary to ensure a consistent national approach. In response to our recommendation to increase the use of discretionary terminations, CMS stated that it will continue its research to design proposals that yield a more effective combination of robust enforcement actions but that do not penalize vulnerable residents. While we are encouraged by CMS's commitment to further research to improve the effectiveness of enforcement actions, we believe that CMS must also be committed to protecting residents from actual harm in poorly performing facilities—including terminating homes from the Medicare or Medicaid programs—when other steps fail to ensure the quality of resident care. Collecting CMPs more expeditiously. CMS agreed to seek legislative authority to collect CMPs prior to the exhaustion of appeals, which could increase their deterrent effect. California commented that it supported this recommendation. Expanding the Special Focus Facility program. CMS agreed with the concept of expanding the program to include all homes that meet a threshold to qualify as poorly performing homes, but said it lacks the resources needed for this expansion because of decreases in its budget and increases in both the number of providers and quality assurance responsibilities for state and federal surveyors. CMS stated that it envisioned expansion of the program if Congress fully funds the President's proposed fiscal year 2008 budget for survey and certification activities. CMS specified other initiatives it will implement to improve the Special Focus Facility program. Improving the effectiveness of enforcement data. CMS agreed to develop and implement a system of quality checks to ensure the accuracy of its data systems, including AEM. While the agency agreed to study the feasibility of linking the separate data systems used for enforcement and to develop other national standard enforcement reports, CMS indicated that available resources may limit its ability to take further action on these issues. CMS has already invested significant resources in developing potentially powerful data systems intended to improve the tracking and monitoring of enforcement, and we believe the agency should place a priority on ensuring that these systems operate effectively. Improving information available to consumers. Rather than agreeing to report all implemented sanctions on its Nursing Home Compare Web site, CMS proposed reporting implemented sanctions only for poorly performing homes that meet an undefined threshold. CMS's response was therefore not fully responsive to our recommendation. By reporting sanctions only for homes that meet a certain threshold—eight or more sanctions in a 3-year period, in an example provided by CMS—consumers might incorrectly assume that other homes have received no sanctions. Furthermore, CMS's plan to post such limited sanctions data in an accessible location on its Web site is vague.
We believe that consumers must be able to easily link deficiency and sanctions data. CMS and three of the four states also provided technical comments, which we incorporated as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Administrator of the Centers for Medicare & Medicaid Services and appropriate congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7118 or allenk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. This appendix provides a more detailed description of our scope and methodology and generally follows the order that findings appear in the report. We analyzed the fiscal years 2000 through 2005 enforcement and deficiency history for a total of 63 of the 74 nursing homes in four states— California, Michigan, Pennsylvania, and Texas—whose compliance history informed the conclusions of our March 1999 report. These homes had a history of providing poor quality care to residents prior to 1999. We excluded 11 of the original 74 homes from our analysis because they either closed before fiscal year 2000 or closed within 6 months of the beginning of fiscal year 2000 and had few or no deficiencies or sanctions. Some of the remaining 63 homes participated in the Medicare and Medicaid programs for only a portion of fiscal years 2000 through 2005 because they either closed permanently or closed temporarily and were subsequently reinstated. For these homes, we set a criterion that required that the home participate for at least 6 months of the fiscal year in order for its enforcement data in that fiscal year to be included in our analysis. Table 8 shows the distribution of homes across the four states in our 1999 report, the distribution of those homes for this report, and the number of providers participating for at least 6 months by fiscal year. Although the table shows some year-to-year fluctuation in the number of providers, the changes do not significantly influence our findings. While the focus of our analysis was the compliance history of these 63 homes, we also analyzed general trends in (1) implemented sanctions nationwide for the same 6-year period and (2) the proportion of homes in each state cited for serious deficiencies—that is, those at the actual harm or immediate jeopardy level. CMS deficiency data. To determine the number, scope, and severity of deficiencies cited for the 63 homes, we analyzed OSCAR (On-Line Survey, Certification, and Reporting system) deficiency data resulting from standard surveys and complaint investigations. We also used OSCAR data on deficiencies identified during standard surveys to analyze state trends in the proportion of nursing homes cited for actual harm or immediate jeopardy during fiscal years 2000 through 2005. Because a home may be surveyed more than once a year, we counted a home only once if it was cited for actual harm or immediate jeopardy on more than one survey during the year. CMS officials generally recognize OSCAR data to be reliable. 
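The counting rule just described can be illustrated with a small example. The following sketch uses the Python pandas library and hypothetical field names (provider_id, fiscal_year, severity) rather than the actual OSCAR record layout; it simply shows one way to count each home at most once per fiscal year when it is cited for a G-level or higher deficiency on more than one survey.

import pandas as pd

# Minimal sketch: count each home at most once per fiscal year if it was cited
# for a G-level or higher (actual harm or immediate jeopardy) deficiency.
# Field names and the sample data are hypothetical, not the OSCAR layout.
surveys = pd.DataFrame({
    "provider_id": ["A", "A", "B", "C", "C"],
    "fiscal_year": [2003, 2003, 2003, 2004, 2004],
    "severity":    ["G", "J", "D", "G", "G"],   # scope/severity letter
})

SERIOUS = set("GHIJKL")  # G-level or higher: actual harm or immediate jeopardy

serious = surveys[surveys["severity"].isin(SERIOUS)]
# A home cited on more than one survey in a fiscal year is counted only once.
homes_per_year = (
    serious.drop_duplicates(["provider_id", "fiscal_year"])
           .groupby("fiscal_year")["provider_id"]
           .count()
)
print(homes_per_year)  # 2003 -> 1 home, 2004 -> 1 home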
We have used OSCAR data in our prior work to examine nursing home quality. CMS enforcement data and reliability issues. Because CMS used multiple data systems during the 6-year period we reviewed and because of data reliability issues, such as incomplete or inaccurate data, we used several sources to validate and analyze the enforcement history of the 63 homes. Based on discussions with CMS regional staff who were responsible for inputting the data, our primary data source for homes in California, Michigan, and Pennsylvania for the period fiscal years 2000 through 2004 was the Long Term Care Enforcement Tracking System (LTC). Because CMS’s Dallas regional office expressed concern about reliability of LTC data in the region, we relied primarily on regional office and state enforcement case files for the Texas homes we reviewed. CMS phased out use of LTC at the end of fiscal year 2004 and began using Aspen Enforcement Manager (AEM) to track sanctions. We obtained data for fiscal year 2005 sanctions from the limited AEM data stored in the OSCAR enforcement file. To clarify data from LTC or AEM and to perform some basic data checks, we relied on regional office and state enforcement case files and made adjustments as appropriate. We discussed the reliability of LTC and AEM enforcement data with CMS and state survey agency officials. CMS informed us that the data generally were reliable. We determined that the data were sufficiently reliable to assess broad trends in implemented sanctions nationwide, and to analyze sanctions among the 63 homes we reviewed because we could conduct checks of the homes’ enforcement data using CMS regional office and state case files. Because we could not conduct such checks of the data in all 50 states and the District of Columbia, we did not analyze trends across the individual states. Trends in sanctions. Based on our assessment of data reliability, we determined that we could assess broad trends in implemented sanctions nationwide, but because we could not conduct checks of the data in all 50 states and the District of Columbia, we did not analyze trends across the states. For the homes we reviewed, using data from LTC, AEM, and regional office and state enforcement case files as described above, we analyzed the number of civil money penalties (CMP), denial of payments for new admissions (DPNA), and terminations implemented over two 3-year time periods—fiscal years 2000 through 2002 and fiscal years 2003 through 2005. We aggregated sanctions into fiscal years on the basis of their implementation dates. To determine the duration of DPNAs across the two time periods, we calculated the difference between the effective dates and the end of the DPNAs. To determine the amount of CMPs paid, we used the CMP Tracking System (CMPTS), a CMS financial management system, and aggregated CMPs into fiscal years according to the year in which they were implemented. Based on discussions with CMS officials we determined that data in CMPTS are generally reliable. They also stated that the system is the primary system used by CMS for the collection of CMPs and is the only source for CMP payment data used by CMS. We matched CMP data in LTC and CMPTS based on their collection number. For fiscal year 2005, we relied on regional enforcement files for the amount of paid CMPs. Implementation rate of sanctions. We determined the implementation rate of sanctions imposed for the homes we reviewed in fiscal years 2000 through 2005. 
The percentage of implemented sanctions was calculated by dividing the number of implemented sanctions by the total number of imposed sanctions. The total number of imposed sanctions included those that were implemented, those that were not implemented, and those that were pending. We used data from our March 1999 report on imposed and implemented sanctions for the period July 1995 through October 1998. Range of sanctions. CMS enforcement data allowed us to differentiate between per day and per instance CMPs and mandatory and discretionary DPNAs and terminations. We counted the number of sanctions by type and aggregated the number by fiscal year based on the date of implementation. The data provided the value of per day and per instance CMPs, which were used to calculate the median values of CMPs across the two time periods—fiscal years 2000 through 2002 and 2003 through 2005. Cycling in and out of compliance. We analyzed the enforcement data from LTC, AEM, and CMS regional office and state records to determine if the 63 homes we reviewed cycled in and out of compliance from fiscal years 2000 through 2005. To determine the number of times homes cycled in and out of compliance, we counted the number of noncompliance cycles recorded for the 63 homes. A noncompliance cycle begins on the date of the survey finding noncompliance and ends when the home has achieved substantial compliance by correcting deficiencies. For noncompliance cycles for which sanctions were implemented, we examined survey dates, the date substantial compliance was achieved, and the sanctions that were implemented as a result of the deficiencies cited. To determine how quickly homes were again noncompliant, we calculated the difference between the date of the first survey of the subsequent noncompliance cycle and the substantial compliance date of the preceding noncompliance cycle. To quantify the number of noncompliance cycles during which actual harm occurred, we assessed whether homes were cited for G-level or higher deficiencies on the surveys within the noncompliance cycle. Immediate sanctions policy. We identified instances in which the 63 homes we reviewed were cited for repeatedly harming residents to determine if immediate sanctions were imposed and their effect on deterring subsequent noncompliance. To identify sanctions imposed as a result of the immediate sanctions policy, we first identified homes that qualified for immediate sanctions using CMS's Providing Data Quickly (PDQ) system, which prepares a variety of reports using survey and certification data. CMS officials indicate that the data in PDQ are generally recognized as reliable. We then matched the survey date in PDQ with the survey date in the enforcement data to identify the noncompliance cycle during which qualifying deficiencies were cited. This step enabled us to identify the sanctions imposed. We reviewed each case individually to determine whether the sanction was the result of actual harm or higher-level deficiencies that denied the home an opportunity-to-correct period or whether it simply resulted from another survey in the same noncompliance cycle. We also compared the date of survey with the imposition and effective dates of sanctions to assess how much time passed between identification of the deficiency that led to the immediate sanction and the imposition and implementation of the sanction. During the course of our work, we also discussed the rationale behind the specific formulation of the immediate sanctions policy with CMS officials.
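The cycle and rate calculations described in this appendix reduce to simple date and ratio arithmetic. The sketch below restates that arithmetic with made-up dates and counts; the record structure is assumed for illustration only and is not drawn from CMS or GAO systems.

from datetime import date

# Sketch of the cycle and implementation-rate arithmetic; data are illustrative.
# Noncompliance cycles for one home: (survey date opening the cycle,
# date substantial compliance was achieved).
cycles = [
    (date(2001, 2, 10), date(2001, 5, 1)),
    (date(2001, 8, 15), date(2001, 11, 3)),
    (date(2002, 6, 20), date(2002, 9, 12)),
]

# Gap between achieving substantial compliance and the survey that opened the
# next noncompliance cycle (how quickly the home was again noncompliant).
gaps = [
    (cycles[i + 1][0] - cycles[i][1]).days
    for i in range(len(cycles) - 1)
]
print(gaps)  # [106, 229] days

# Implementation rate: implemented sanctions divided by all imposed sanctions
# (implemented + not implemented + pending).
implemented, not_implemented, pending = 9, 5, 1
rate = implemented / (implemented + not_implemented + pending)
print(f"{rate:.0%}")  # 60%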
In order to identify trends in the proportion of nursing homes cited with actual harm or immediate jeopardy deficiencies, we analyzed data from CMS’s OSCAR database for fiscal years 2000 through 2005 (see table 9). Because surveys are conducted at least every 15 months (with a required 12-month statewide average), it is possible that a home was surveyed twice in any time period. If a home was cited for a G-level or higher deficiency on more than one survey during the fiscal year, we only counted it once. Table 10 provides the number of CMPs, DPNAs, and terminations implemented in the nursing homes we reviewed, by state for fiscal years 2000-2002 and fiscal years 2003-2005. It also provides the total amount of CMPs paid and the total duration of DPNAs implemented during the two time periods. The total amount of CMPs payable in the fiscal years may differ from what was paid. This appendix provides additional examples of the compliance history of homes we reviewed that frequently cycled in and out of compliance (see table 6). The table also includes examples of the nature of the deficiencies cited in each noncompliance period. The three homes in table 11 were cited for serious deficiencies—those at the actual harm or immediate jeopardy level—and corrected these deficiencies only temporarily, despite receiving sanctions; on subsequent surveys, they were again found to be out of compliance, sometimes for the same deficiencies. A noncompliance period begins on the first day a survey finds noncompliance and ends when a home both corrects the deficiencies and achieves substantial compliance or the home is terminated from Medicare and Medicaid. Only federal sanctions that were imposed and implemented are included in the table. Nursing Homes: Despite Increased Oversight, Challenges Remain in Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, D.C.: December 28, 2005. Nursing Home Deaths: Arkansas Coroner Referrals Confirm Weaknesses in State and Federal Oversight of Quality of Care. GAO-05-78. Washington, D.C.: November 12, 2004. Nursing Home Fire Safety: Recent Fires Highlight Weaknesses in Federal Standards and Oversight. GAO-04-660. Washington D.C.: July 16, 2004. Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003. Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002. Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002. Nursing Homes: More Can Be Done to Protect Residents from Abuse. GAO-02-312. Washington, D.C.: March 1, 2002. Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002. Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. 
Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.

In 1998 and 1999 reports, GAO concluded that enforcement actions, known as sanctions, were ineffective in encouraging nursing homes to maintain compliance with federal quality requirements: sanctions were often rescinded before being implemented because homes had a grace period to correct deficiencies. In response, the Centers for Medicare & Medicaid Services (CMS) began requiring immediate sanctions for homes that repeatedly harmed residents. Using CMS enforcement and deficiency data, GAO (1) analyzed federal sanctions from fiscal years 2000 through 2005 against 63 homes previously reviewed and (2) assessed CMS's overall management of enforcement. The 63 homes had a history of harming residents and were located in 4 states that account for about 22 percent of homes nationwide. From fiscal years 2000 through 2005, the number of sanctions decreased for the 63 nursing homes GAO reviewed that had a history of serious quality problems, a decline consistent with nationwide trends. While the decline may reflect improved quality or changes to enforcement policy, it may also mask survey weaknesses that understate quality problems, an issue GAO has reported on since 1998. Although the number of sanctions decreased, the homes generally were cited for more deficiencies that caused harm to residents than other homes in their states. Almost half of the homes reviewed continued to cycle in and out of compliance; 19 did so 4 times or more. These homes temporarily corrected deficiencies and, even with sanctions, were again found out of compliance on subsequent surveys. Several weaknesses appeared to undermine the effectiveness of the sanctions implemented against the homes reviewed. First, civil money penalties (CMP), which by statute are not paid while under appeal—a process that can take years—were generally imposed at the lower end of the allowable dollar range. For example, the median per day CMP ranged from $350 to $500, significantly below the maximum of $3,000 per day. Second, CMS favored the use of sanctions that give homes more time to correct deficiencies, increasing the likelihood that the sanctions would not be implemented. Thus, more than half of the denial of payment for new admissions (DPNA) that CMS imposed were the type that give homes 3 months to correct deficiencies rather than those that only give homes up to 15 days. Third, there was no record of a sanction for about 22 percent of the homes reviewed that met CMS's criteria for immediate sanctions, a problem GAO also identified in 2003; moreover, 60 percent of DPNAs imposed as immediate sanctions were not implemented until 1 to 2 months after citation of the deficiency. Finally, involuntary termination of homes from participating in the Medicare or Medicaid programs was rare because of concerns about access to other nearby homes and resident transfer trauma; 2 of the 63 homes reviewed were involuntarily terminated because of quality problems. CMS's management of enforcement is hampered by the complexity of its immediate sanctions policy and by its fragmented and incomplete data.
Its policy allows some homes with the worst compliance histories to escape immediate sanctions. For example, a home cited with a serious deficiency and that has not yet corrected an earlier serious deficiency is spared an immediate sanction. Such rules may in part explain why the 63 homes reviewed only had 69 instances of immediate sanctions over a 6-year period despite being cited 444 times for deficiencies that harmed residents. Although CMS initiated development of a new enforcement data system 6 years ago, it is fragmented and has incomplete national reporting capabilities. CMS is taking additional steps to improve nursing home enforcement, such as developing guidance to encourage more consistency in CMP amounts, but it is not clear whether and when these initiatives will address the enforcement weaknesses GAO found. |
In January 2001, we reported on Department of Defense management challenges and noted that the Department has had serious weaknesses in its management of logistics functions and, in particular, inventory management. We have identified inventory management as a high-risk area since 1990. In 1999, we reported on the Air Force’s specific problems in managing spare parts and noted an increase in the percentage of some of its aircraft that were not mission capable due to supply problems. (See appendix I for examples from our reports on management weaknesses related to the Air Force.) Also, the Secretary of the Air Force reported that the readiness of the Air Force has declined since 1996 and attributed this overall decline, in part, to spare parts shortages. Table 1 shows the percentage of all aviation systems that were mission capable and the percentage of aircraft that were not mission capable due to supply problems from fiscal year 1996 through the first quarter of fiscal 2001. As table 1 shows, the percentage of all Air Force systems reported as not mission capable due to supply problems steadily increased from fiscal year 1996 through fiscal year 2000. The Air Force requested additional funding to address concerns with spare parts shortages. The Air Force states in the Department of Defense Quarterly Readiness Report to the Congress for July through September 2000 that funding Congress provided in earlier years has begun to improve the availability of spares, citing a 58-percent reduction in parts that have been ordered but not received since December 1998. The Secretary also expressed cautious optimism that recent congressional funding would improve the availability of spare parts and aircraft mission-capable rates. In the most recent quarterly readiness report (Oct. through Dec. 2000), the Air Force cautions that although as of early December 2000 overall mission-capable rates had improved from average fiscal year 2000 rates, this improvement had come at the cost of the increased use of the practice of removing parts from one aircraft for use on another, that is, cannibalization. Because of concerns that spare parts shortages were causing readiness problems, the Air Force received in fiscal 1999 an additional $904 million in obligation authority from the Department of Defense to buy more spare parts. This amount consisted of $387 million to buy spare parts attributable to the Kosovo operation, $135 million to buy engine-related spare parts for the Oklahoma City Air Logistics Center, and $382 million to overcome the accumulated shortfall of spare parts inventories. Also in 1999, the Department of Defense announced plans to provide $500 million to the Defense Logistics Agency to purchase spare parts for all the services over fiscal years 2001-2004. Of that $500 million, $213.8 million is to be for parts to be used on Air Force aircraft. According to a Department of Defense official, the Air Force was provided the first $50 million in fiscal 2001 to pass on to the Defense Logistics Agency to pay for Air Force parts ordered in fiscal year 2000. The Air Force and the other military services received additional funds in fiscal year 1999 that, unlike the funds cited above, were placed largely in operations and maintenance accounts. In a separate report issued earlier this year, we indicated current financial information did not show the extent to which these funds were used for spare parts. 
However, the Department plans to annually develop detailed financial management information on spare parts funding uses but does not plan to provide it to Congress. We, therefore, recommended to the Secretary of Defense that the information to be developed annually by the Department and the services on the quantity and funding of spare parts be routinely provided to Congress as an integral part of the Department's annual budget justification; the Department agreed to do so. The aviation systems that we reviewed are vital to the Air Force's ability to achieve its missions. The E-3 provides surveillance of the airspace and manages the flight of all aircraft in an assigned battlefield area. The Air Force first received E-3s in 1977, and an Air Force official told us that it is the oldest aircraft in the Air Force in terms of operational hours flown. The C-5 is the Air Force's largest cargo aircraft, carrying cargo such as Army tanks, and is one of the largest aircraft in the world. About 70 percent of the oversized cargo required in the critical first 30 days of one major war scenario would be the type of cargo the C-5 carries. The Air Force first received operational C-5 aircraft in 1970, and according to Air Force officials, one of the reasons for the lower than expected mission-capable rates in recent years for the C-5 aircraft is its age. The F-100-220 engine powers many of the Air Force's F-15 and F-16 fighter aircraft and, according to an Air Force official, will become increasingly critical to operations as some older engines are replaced with the F-100-220. For each of these systems, we judgmentally selected for review the 25 parts with the highest number of hours or incidents of unavailability for given time periods, a total of 75 parts. Air Force spare parts are classified as either consumables or reparables. Consumable items, which are mostly managed by the Defense Logistics Agency, are those items that are discarded when they fail because they cannot be cost-effectively repaired. The Defense Supply Center Richmond is the lead center for managing aviation consumable spare parts. Reparable items, managed by the Air Force Materiel Command, are items that can be cost-effectively repaired. The Command's mission is to research, develop, test, acquire, deliver, and logistically support Air Force weapon systems. The shortages of spare parts for the three aircraft systems we reviewed have not only affected readiness but also have created inefficiencies in maintenance processes and procedures and may adversely affect the retention of military personnel. Two aircraft we reviewed, the E-3 and C-5, did not meet their mission-capable goals in fiscal years 1996-2000 and were not mission capable due to supply problems between 7.3 percent and 18.1 percent of the time during the same period. The number of usable spare F-100-220 engines that the Air Force had on hand fell short of its goal by as few as 6 and as many as 104 engines during the same period. The Air Force did not achieve its mission-capable goals during fiscal years 1996-2000 for any of the three Air Force aircraft systems we reviewed, in part, due to spare parts shortages. Table 2 shows the mission-capable goals and actual rates for the E-3 aircraft for fiscal years 1996-2000, and table 3 shows the rates at which the E-3 was not mission capable due to supply problems during the same period. The goal for the E-3 was lowered to 73 percent from March through September 2000 based on an Air Force assessment of its ability to achieve its mission-capable goal.
The Air Force recognized that it had failed to achieve historical performance levels to the point that falling short of the standard had become the norm. Citing constraints regarding spare parts, maintenance personnel, and repair equipment, the Air Force lowered mission-capable goals for the E-3 and other aircraft with the intent of providing maintenance personnel with more achievable targets. The mission-capable goal for the E-3 aircraft rose to 81 percent in fiscal year 2001, and it is planned to return to 85 percent in fiscal year 2002. The goal for not mission capable due to supply problems was raised to 12 percent or less from March through September 2000 based on an Air Force assessment of the aircraft's ability to achieve that goal. For the same reasons—constraints regarding spare parts, maintenance personnel, and repair equipment—the Air Force raised its goal for not mission capable due to supply problems for the E-3 and other aircraft with the intent of providing maintenance personnel with more achievable targets. The not mission capable due to supply problems goal changed to 8 percent in fiscal year 2001, and it is planned to return to 6 percent in fiscal year 2002. The reported rate for total not mission capable due to supply problems in fiscal year 2000, 11.3 percent, equated to about 3 or 4 E-3s of the total of 32 aircraft being not mission capable due to supply problems. The C-5 also did not achieve its goals during fiscal years 1996-2000. Table 4 shows the C-5's mission-capable goals and actual mission-capable rates for those years, and table 5 shows the rates at which the C-5 was not mission capable due to supply problems as well as its goals during the same period. The reported rate for total not mission capable due to supply problems in fiscal year 2000, 18.1 percent, equated to almost 23 C-5s of the fleet of 126 aircraft being not mission capable, at least in part, due to supply problems. With regard to the F-100-220 engine, the Air Force never met its goal, called the war readiness engine goal, during fiscal years 1996-2000 (see table 6). The goal—the number of usable (ready to be installed in an aircraft) spare engines the Air Force would like to have on hand to meet wartime needs—can change each fiscal year. In some cases, the Air Force has had F-15s or F-16s grounded due to the lack of the engine. When the number of usable spare engines is shown as a negative number, there are not enough engines for all the aircraft required for peacetime operations; in other words, aircraft that would otherwise be available to fly are grounded because they lack engines. During fiscal years 1996 through 2000, this occurred in five different quarters.
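The aircraft-count equivalents cited above for the E-3 and C-5 follow from multiplying the reported not-mission-capable rates by fleet size. The sketch below simply restates that arithmetic, using only the rates and fleet sizes given in the text.

# Converting reported not-mission-capable-due-to-supply rates into approximate
# aircraft counts; the rates and fleet sizes are those cited in the text.
fleets = {
    "E-3": (0.113, 32),   # fiscal year 2000 rate, total aircraft
    "C-5": (0.181, 126),
}
for aircraft, (nmcs_rate, fleet_size) in fleets.items():
    equivalent = nmcs_rate * fleet_size
    print(f"{aircraft}: about {equivalent:.1f} aircraft not mission capable "
          "due to supply problems")
# E-3: about 3.6 aircraft; C-5: about 22.8 aircraft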
Additionally, a part removed from another aircraft will likely not last as long as a part from the supply system and will require maintenance sooner. Our past work also shows that spare parts shortages may affect retention. In August 1999, we reported on the results of our December 1998 through March 1999 survey of about 1,000 Army, Navy, Air Force, and Marine Corps active duty personnel who were selected based on their work in jobs that the Department of Defense believed were experiencing retention problems. More than half of the respondents stated that they were dissatisfied and intended to leave the military. The majority of the factors causing dissatisfaction were associated with work circumstances such as the lack of parts and materials needed to successfully complete daily job requirements. Both officers and enlisted personnel ranked the availability of needed equipment, parts, and materials among the top 2 of 44 quality-of-life factors that caused their dissatisfaction. Spare parts shortages on the three systems we reviewed occurred for various reasons. In addition, an internal Department of Defense study found similar reasons for spare parts shortages. Both the Air Force and the Defense Logistics Agency have encountered a variety of problems in contracting for spare parts needed for repairs. Ten (about 13 percent) of the parts we reviewed were unavailable, at least in part, because of contracting issues. These issues included lengthy price negotiations, a contract requirement to have a minimum number of units before beginning repairs, failure of a contractor to meet the delivery date, and termination of a contract. For example, the Defense Logistics Agency did not have a straight pin for the F-100-220 engine in stock because the sole-source company wanted a price that the Agency was unwilling to pay. This resulted in extended negotiations with the company before an award could be made. By the end of April 2000, the lack of this part had caused F-100-220 engines to be not mission capable in nine cases. In another case, to obtain an acceptable price for a contract for the repair of a temperature indicator for the E-3 aircraft, the Air Force was required to provide a minimum of 10 units for repair. By the time 10 units were accumulated and shipped, the demand for the part had exceeded the supply. Through March 2000, E-3 aircraft were not mission capable for over 19 operational days due to the lack of this part. Also, a contract for an axle beam fitting for the C-5 aircraft had to be terminated because the contractor requested too many delivery schedule extensions. As of July 2000, the equivalent of one C-5 aircraft was not mission capable for 124 operational days. Twelve (16 percent) of the parts we reviewed were unavailable for reasons other than those we have already cited. In one case, the Air Force used an incorrect replacement rate for an engine core, and as a result, the repair of parts was not timely. Through April 2000, F-100-220 engines were not mission capable due to the lack of this part in 33 cases. Also, the limited repair facility capacity for certain spare parts, such as electric generators, created shortages of the parts. By the end of March 2000, E-3 aircraft had been not mission capable for almost 10 operational days due to the lack of this part. In another case, because maintenance facilities prioritize repairs based on current Air Force requirements, a receiver transmitter was not repaired in time to avoid a shortage because higher priority items had to be repaired first.
As a result, over 15 operational days of not mission capable time had been accumulated on E-3s by the end of March 2000. In another case, the required part, a vaneaxial fan, was on hand, but E-3 aircraft had accumulated over 15 operational days of not mission capable time by the end of March 2000 because of the time it took to ship the part overseas. In some cases, no spare parts had been purchased when an aircraft was being modified or the technical data for the modification was incomplete. At the end of March 2000, over 10 operational days of not mission capable time had accumulated for E-3 aircraft due to the lack of a control indicator that fell into this category. An internal study conducted by the Department of Defense found similar reasons for Air Force reparable spare parts shortages. The study examined parts causing aircraft to be not mission capable and found that there were two reasons for the shortages. The first reason was an insufficient inventory of certain reparable parts. The second was that although there were enough parts in the system, other constraints prevented a repair facility from repairing the parts in a timely manner. The study states that this may have happened for several reasons. The parts may not have been returned from units to the repair facility, a repair facility may have lacked capacity in certain key areas such as manpower or testing equipment, the consumable parts required to fix the reparable item may not have been available, or the item managers may not have requested the repair facility to repair a part because of a lack of funding. The study contained a recommendation that the Air Force provide $609 million for fiscal years 2002 to 2007 to improve the availability of reparable spare parts. According to a Department of Defense official, the Air Force plans to provide the funds. The Air Force and the Defense Logistics Agency have overall initiatives under way or planned to improve the availability of spare parts. The initiatives are intended to improve the efficiency of the supply system and increase the requirements for spare parts. The initiatives generally address the specific reasons for shortages identified by our review, with the exception of changes in the location of repairs, which is not a recurring problem. The Air Force has developed a Supply Strategic Plan that includes a management framework and specific goals and outcome-oriented measures for its initiatives. We have made various recommendations to address this issue. The Air Force has actions under way to address these recommendations; therefore, we are not making any additional recommendations at this time. We will be reviewing the strategic plan's initiatives, once they are more developed, to evaluate their likely effectiveness and to assess whether additional initiatives are needed. The Air Force is regularly monitoring which spare parts are unavailable for the longest period of time and undertakes ad hoc actions to resolve the problems causing the shortage. In 1999, the Air Force developed the Supply Strategic Plan to help create an integrated process for supply planning, to facilitate the exchange of information throughout the supply system, and to improve measures of effectiveness for the supply system. The plan, which was updated in January 2001, establishes five goals for the Air Force supply community to achieve by 2010.
These goals include the following:
Manage assets effectively
Organize, train, and equip supply personnel
Support Department of Defense operations
Establish and implement fuel policy
Each goal has associated objectives to be achieved in the next 4 to 7 years and tasks to be completed in the next 1 to 4 years. In support of the Supply Strategic Plan, the Air Force Deputy Chief of Staff, Installations and Logistics, Directorate of Supply, established in 1999 the Supply Foundation Project, which includes 10 objectives with associated initiatives for each. The Directorate views the project as a comprehensive means of improving the supply system. The first objective is to improve spare parts management. The intent is to determine the baseline for formulating a spare parts policy; to determine the overall trend for spare parts, that is, are shortages increasing or decreasing; and to develop and implement initiatives to reduce the shortages of spare parts. Within the objective of improving spare parts management, the Directorate has the following initiatives, within the goal of managing assets, under way or under study:
Improve the process for determining requirements for spare parts
Improve the process for funding the parts
Increase the stock of certain parts
Increase the parts contained in readiness spares packages (deployment kits for maintaining aircraft)
Coordinate with the Defense Logistics Agency to ensure that it buys the most critically needed parts from the Air Force portion of the $500 million provided by the Department of Defense for fiscal years 2001 to 2004
Reduce the time that customers wait for parts
For each of these initiatives, the Air Force has established short-term and long-term milestones and accountability for implementation by assigning program responsibility to specific offices and individuals. The measures for success include achieving goals such as (1) increasing the issuance of parts when requested, (2) increasing the stock of certain parts, (3) improving total rates for aircraft not mission capable for supply reasons, and (4) lowering cannibalization rates. (See appendix IV for a complete listing of these Air Force initiatives.) In addition to the initiatives contained in the Air Force Supply Strategic Plan, the Air Force Materiel Command also has actions under way and planned to separately address more specific aspects of spare parts management and policies. According to Air Force officials, these actions are being coordinated with the Air Force Deputy Chief of Staff, Installations and Logistics, Directorate of Supply. As part of its Constraints Analysis Program, the Air Force Materiel Command identified six major problems that had prevented it from providing timely support to the warfighter. These problems were unavailability of consumable parts; unreliability of parts; poor management of the suppliers of parts; inadequate workload planning; ineffective inventory management; and inefficient policies regarding which parts are repaired and, if repair is needed, where the repairs should be made. The Command focused its initial efforts on studying ways to resolve the problems with supplier management, parts reliability, and unavailability of consumable parts. Implementation plans are being developed for actions for each of these problems while the remaining problems are being studied.
The Command is also developing (1) a model to forecast the repair facilities' demands for consumable spare parts and electronically transmit these data to the Defense Logistics Agency and (2) a pilot program to have contractors bypass the supply system and fill the supply bins for maintenance personnel directly. Among the efforts the Defense Logistics Agency has under way to improve the availability of spare parts are its Aviation Investment Strategy and Aging Aircraft Program. The Defense Logistics Agency's major initiative to resolve aircraft spare parts shortages is its Aviation Investment Strategy. This fiscal year 2000 initiative focuses on replenishing consumable aviation repair parts with identified availability problems that affect readiness. Of the $500 million that the Defense Department budgeted for this purpose, $213.8 million was the Air Force portion. As of December 2000, $95.3 million had been targeted for Air Force spare parts and $22.3 million worth of parts had been delivered. The goal of the Defense Logistics Agency's Aging Aircraft Program is to consistently meet the goals for spare parts availability for the Army, Navy, and Air Force aviation weapon systems. The program's focus will be to (1) provide inventory control point personnel with complete, timely, and accurate information on current and projected parts requirements; (2) reduce customers' wait times for parts for which sources or production capabilities no longer exist; and (3) create an efficient and effective program management structure and processes that will achieve the stated program goals. The Agency plans to spend about $20 million during fiscal years 2001-2007 on this program. We recommended in November 1999 that the Secretary of the Air Force develop a management framework for implementing best practice initiatives based on the principles embodied in the Government Performance and Results Act. The Department of Defense concurred with our recommendation and stated that the Air Force is revising its Logistics Support Plan to more clearly articulate the relationships, goals, objectives, and metrics of logistics initiatives. As a part of the Supply Strategic Plan, the Air Force included initiatives intended to improve the availability of spare parts. We also recommended in January 2001 that the Department develop an overarching plan that integrates the individual service and defense agency logistics reengineering plans to include an investment strategy for funding reengineering initiatives and details on how the Department plans to achieve its final logistics system end state. Since the Air Force and the Department of Defense are taking actions on our previous recommendations to improve overall logistics planning, we are not making new recommendations at this time. The Acting Deputy Under Secretary of Defense for Logistics and Materiel Readiness, in commenting on a draft of this report, indicated that the Department of Defense generally concurred with the report. The Department's comments are reprinted in their entirety in appendix V. To determine the impact of the shortages of spare parts, we reviewed data, for selected months, on the Air Force's mission-capable goals and actual rates and on its goals and actual rates for aircraft not mission capable due to supply problems, obtained from the Office of the Secretary of the Air Force, Installations and Logistics Directorate. We did not independently verify these data.
From these data, we selected three systems for review that had experienced difficulties in achieving mission-capable goals or, in the case of the F-100-220 engine, readiness goals for the number of usable engines on hand. We also reviewed data on cannibalizations provided by the Air Combat Command, Hampton, Virginia; the Office of the Secretary of the Air Force, Installations and Logistics Directorate, Washington, D.C.; and Seymour-Johnson Air Force Base, Goldsboro, North Carolina. Using the data, we discussed with maintenance personnel the impact of cannibalizations on spare parts shortages. We also used data from studies conducted by the Department of Defense regarding spare parts shortages and their impacts. Lastly, we drew relevant information from our recently issued reports. To determine the reasons for these part shortages, we visited the air logistics centers at Tinker Air Force Base (E-3), Oklahoma City, Oklahoma; Warner-Robins Air Force Base (C-5), Robins, Georgia; Kelly Air Force Base (F-100-220 aircraft engine), San Antonio, Texas; and the Defense Supply Center Richmond, Richmond, Virginia. To identify specific reasons, we discussed the specific parts shortages with those who manage these items at these locations. We also reviewed our related work on Air Force and Department of Defense inventory management practices to identify systemic management problems that are contributing to spare parts shortages. To determine what overall actions are planned or under way to address overall spare parts shortages for Air Force aircraft and the management framework for implementing the overall initiatives, we visited the Air Force headquarters, the Joint Chiefs of Staff Logistics Directorate, and the Office of the Secretary of Defense, located in the Washington, D.C. area; the Defense Logistics Agency located at Fort Belvoir, Virginia, and the Defense Supply Center located in Richmond, Virginia; the Air Force Materiel Command, Dayton, Ohio; and the air logistics centers at Tinker Air Force Base, Oklahoma (E-3), Warner-Robins Air Force Base, Georgia (C-5), and Kelly Air Force Base, Texas (F-100-220). We discussed with officials at each of these locations Air Force initiatives regarding spare parts, their progress and results to date, the planned completion dates for some initiatives, and additional steps needed to address spare parts shortages. We also compared the reasons for the shortages we found with the overall initiatives under way or planned to determine if there were any areas that were not being addressed. We did not review these plans or the specific initiatives. Our review was performed from February 2000 to April 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Secretary of the Air Force; the Director, Office of Management and Budget; and the Director, Defense Logistics Agency. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions regarding this report. Key contributors to this report were Lawson Gist Jr., John Beauchamp, Willie Cheely Jr., and Nancy Ragsdale. Our high-risk reports over the past several years have noted that Department of Defense inventory and financial management weaknesses have contributed to parts not being available when needed.
In January 2001, we reported on Department of Defense management challenges and noted that the Department has had serious weaknesses in its management of logistics functions and, in particular, inventory management. Although not specifically identified with the systems we reviewed, these management weaknesses directly or indirectly contribute to the shortages of spare parts the Air Force is facing, as the following examples show. We reported in January 2001 that nearly half of the Department's inventory exceeded war reserve or current operating requirements and that the Department had inventory on order that would not have been ordered based on current requirements. Purchasing items that exceed requirements uses funds that could otherwise be used to buy needed parts. We reported in April 1999 that because the Air Force had reduced the supply activity group's budget by $948 million between fiscal years 1997 and 1999 to reflect efficiency goals and because these goals were not achieved, fewer items than projected were available for sale to customers. As a result, military units had funds to purchase spare parts, but the supply group did not always have sufficient funds to buy new spare parts or pay for repair of broken parts that customers needed. We also reported that because of poor management practices, over $2 billion worth of spare parts in the Air Force's "suspended inventory category," which cannot be issued because of questionable condition, was not reviewed for years. As a result, the Air Force is vulnerable to incurring unnecessary repair and storage costs and to reduced readiness. Better management of these parts could increase the number of spare parts available. In addition, the Department of Defense's long-standing financial management problems may also contribute to the Air Force's spare parts shortages. As we recently reported, existing weaknesses in inventory accountability information can affect supply responsiveness. Lacking reliable information, the Department of Defense has little assurance that all items purchased are received and properly recorded. These weaknesses increase the risk that responsible inventory item managers may request funds to obtain additional, unnecessary items that may be on hand but not reported.
The reasons for shortage cited for the sampled parts, singly or in combination, were: actual demands greater than anticipated; demands not anticipated; changes in location of repairs; parts production problems; component reliability; contracting issues; and other reasons, including repair facility capacity/priority, incomplete technical orders or technical data for modifications, test equipment software problems, shipping time, information system problems, no spares purchased for modifications, and a suitable substitute not linked to the master item. The spare parts with the same name have different stock numbers.
Defense Inventory: Opportunities Exist to Expand the Use of Defense Logistics Agency Best Practices (NSIAD-00-30, Jan. 26, 2000).
Air Force Depot Maintenance: Analysis of Its Financial Operations (AIMD/NSIAD-00-38, Dec. 10, 1999).
Defense Inventory: Improvements Needed to Prevent Excess Purchases by the Air Force (NSIAD-00-5, Nov. 1, 1999).
Air Force Depot Maintenance: Management Changes Would Improve Implementation of Reform Initiatives (NSIAD-99-63, June 25, 1999).
Department of Defense: Status of Financial Management Weaknesses and Actions Needed to Correct Continuing Challenges (T-AIMD/NSIAD-99-171, May 4, 1999).
Defense Inventory: Status of Inventory and Purchases and Their Relationship to Current Needs (NSIAD-99-60, Apr. 16, 1999).
Defense Inventory: DOD Could Improve Total Asset Visibility Initiative With Results Act Framework (NSIAD-99-40, Apr. 12, 1999).
High Risk Series: An Update (HR-99-1, Jan. 1999).
Air Force Supply: Management Analysis of Activity Group's Financial Reports, Prices, and Cash Management (AIMD/NSIAD-98-118, June 8, 1998).
Defense Depot Maintenance: Use of Public-Private Partnering Arrangements (NSIAD-98-91, May 7, 1998).
Defense Inventory: Management of Surplus Usable Aircraft Parts Can Be Improved (NSIAD-98-7, Oct. 2, 1997).

Spare parts shortages on the three Air Force systems GAO reviewed have undermined the performance of assigned missions and the economy and efficiency of maintenance activities. Specifically, the Air Force did not meet its mission-capable goals for the E-3 or C-5 aircraft during fiscal years 1996-2000, nor did it have enough F-100-220 engines to meet peacetime and wartime goals during that period. These shortages may also affect personnel retention. GAO recently reported that the lack of parts and materials to successfully complete daily job requirements was one of six major factors causing job dissatisfaction among military personnel. Item managers at the maintenance facilities often indicated that spare parts shortages were caused by the inventory management system underestimating the need for spare parts and by delays in the Air Force's repair process as a result of the consolidation of repair facilities. Other reasons included difficulties with producing or repairing parts, reliability of spare parts, and contracting issues. The Air Force and the Defense Logistics Agency have planned or begun many initiatives to alleviate shortages of the spare parts for the three systems GAO reviewed.
DISA was included in our review because of its unique role in DOD’s information processing and the Year 2000 process. DISA is responsible to the ASD/C3I for maintaining DIST as the Department’s enterprise inventory database and its primary tool for performing oversight of the Year 2000 correction efforts. In assessing DIST’s effectiveness in facilitating Year 2000 efforts, we interviewed DIST managers and a representative from the contractor. Since the services have different approaches to entering data in DIST, we spoke to officials at various organizational levels regarding ease of use and how they are entering information. In addition, we analyzed the contents and capabilities of DIST to gauge its accuracy, performance, reliability, and usefulness as a Year 2000 enterprise inventory database. In conducting this analysis, we relied on our previous work on DIST which was conducted as part of a review on Defense’s migration strategy—a DOD effort focused on improving and modernizing automated information systems. We also reviewed Air Force and Army comparisons of DIST inventories against their own inventories. In addition, we assessed whether DIST conformed to system inventory-related guidance included in our Year 2000 Assessment Guide, and DOD’s Year 2000 Guidance Package and Year 2000 Management Plan. We specifically focused on the Assessment Phase of the Year 2000 process described below, during which agencies are to develop an enterprise inventory. We conducted our work from November 1996 through July 1997 in accordance with generally accepted government auditing standards. The Department of Defense provided written comments on a draft of this report. These comments are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix I. Under DOD’s Year 2000 Management Plan, DISA is responsible for enhancing and maintaining DIST as a Year 2000 enterprise inventory tool. In February 1997, we published the Year 2000 Computing Crisis: An Assessment Guide, which addresses common issues affecting most federal agencies and presents a structured approach and a checklist to aid them in planning, managing, and evaluating their Year 2000 programs. The guidance is consistent with DOD’s Year 2000 Management Plan. The guide describes five phases—supported by program and project management activities—with each phase representing a major Year 2000 program activity or segment. The phases and a description of what each entails follow. Awareness: Define the Year 2000 problem and gain executive-level support and sponsorship. Establish a Year 2000 program team and develop an overall strategy. Ensure that everyone in the organization is fully aware of the issue. Assessment: Assess the Year 2000 impact on the enterprise. Identify core business areas and processes, inventory and analyze systems supporting the core business areas, and rank their conversion or replacement. Develop contingency plans to handle data exchange issues, lack of data, and bad data. Identify and secure the necessary resources. Renovation: Convert, replace, or eliminate selected platforms, applications, databases, and utilities. Modify interfaces. Validation: Test, verify, and validate converted or replaced platforms, applications, databases, and utilities. Test the performance, functionality, and integration of converted or replaced platforms, applications, databases, utilities, and interfaces in an operational environment. 
Implementation: Implement converted or replaced platforms, applications, databases, utilities, and interfaces. Implement data exchange contingency plans, if necessary. In addition to following the five phases described, a Year 2000 program should also be planned and managed as a single, large information system development effort. Agencies should promulgate and enforce good management practices at the program and project levels. As discussed in our Year 2000 Assessment Guide, agencies need to ensure that they have complete and accurate enterprisewide inventories of their information systems during the assessment phase of the Year 2000 correction effort. This inventory helps the agency analyze the systems supporting its core business processes and rank their conversion or replacement based on key factors, such as business impact and the anticipated date the systems would experience Year 2000-related date problems. The inventory also plays a critical role in the later stages of the Year 2000 process, which include renovation, validation, and implementation. For example, the inventory can be used in monitoring the status of each system included in DOD's Year 2000 efforts, assessing whether the most mission-critical systems are receiving appropriate attention, determining needs for testing facilities, and identifying areas that may require additional resources. The inventory can also assist in identifying and coordinating interfaces between and among systems. Even if all systems within one organization were made Year 2000 compliant, an external interfacing system on which they depend for data or information processing can still introduce and propagate Year 2000-related errors. Having an accurate and reliable enterprisewide systems inventory is also fundamental to having a good information technology investment process. In today's environment of rapidly changing information technology and the demands for government organizations to operate effectively and more efficiently, agencies need to ensure that their information technology projects are being implemented at acceptable costs, within reasonable and expected time frames, and are contributing to tangible, observable improvements in mission processes. In order to make the kinds of trade-off decisions that would produce these benefits, agencies need good visibility into their information system environments. The enterprisewide inventory of information systems provides this visibility. In addition, Defense will need a reliable and complete system inventory in order to successfully implement the recently passed Clinger-Cohen Act of 1996, which aims to ensure that agencies strengthen their information technology investment processes. Among other things, this act requires that agencies (1) provide their senior managers with timely and accurate information on system costs and (2) have the capability to meet performance, timeliness, and other requirements. As discussed in our Year 2000 Assessment Guide, system inventories serve as a useful Year 2000 decision-making tool by offering added assurance that all systems are identified and linked to a specific business area or process, and that all enterprisewide cross-boundary systems are considered.
Thus, good inventories include information for each system on (1) links to core business areas or processes, (2) system platforms, languages, and database management systems, (3) operating system software and utilities, (4) telecommunications, (5) internal and external interfaces, (6) system owners, and (7) the availability and adequacy of source code and associated documentation. Defense has designated the Defense Integration Support Tools database to be the departmentwide automated information systems inventory for use in making information technology decisions and managing the Year 2000 effort. DIST was originally designed to track Defense migration systems for the Corporate Information Management initiative but has evolved into a multipurpose tool. DIST presently contains over 9,000 systems and has a total capacity of 40,000. Each system is provided with its own identification number and should be accompanied by a host of informative data elements, including information on hardware platforms, operating systems, application languages, communications, and interfaces. Early in its Year 2000 effort, DOD recognized the value of having a reliable enterprisewide system inventory and the potential beneficial role its DIST database could have in the initiative. For example, in November 1996, the Under Secretary of Defense (Comptroller) and the Assistant Secretary of Defense for Command, Control, Communications and Intelligence issued a joint memorandum to senior Defense managers stating that they considered DIST to be "the backbone tool for managing the Department's Information Technology investment strategies, identifying functional information systems interfaces and data exchange requirements, and managing the efforts to fix the Year 2000 problem." In its Year 2000 Management Plan, Defense reaffirmed that DIST will be the official repository for the DOD components and added that the reason components are required to report every quarter on their systems and are encouraged to report significant progress on their systems is "to give DOD the visibility necessary to ensure a thorough and successful transition to Year 2000 compliance for all DOD systems." It also stated that this reporting "will also keep other functional [areas], that your systems interface with or exchange data with, informed as to the status of your Year 2000 compliance progress." Finally, Defense noted that the DIST needed to be up-to-date so that it could keep the Congress informed on the Department's efforts to achieve Year 2000 compliance. Defense has recognized that DIST is currently not a reliable and accurate management tool that can have a beneficial impact on the Year 2000 effort or on other initiatives to improve and manage information systems. As a result, the ASD/C3I and DISA have undertaken initiatives to improve the reliability of DIST data and to make the database more user friendly. These efforts will address a wide range of problems associated with data integrity and the ability of users to have direct and quick access to the database. During our review, DOD officials and users told us that updating DIST was traditionally a low priority for the services and components largely because DIST is an antiquated and labor-intensive system. A number of officials also told us that they have grown frustrated with DIST because it contains erroneous data and that they are now reluctant to use DIST because they do not have confidence in the accuracy or reliability of the data it contains.
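To make the kinds of inventory content described above concrete, the sketch below models a hypothetical inventory record with a handful of the categories the guide lists (business process link, platform, language, operating system software, interfaces, owner) and flags entries that carry little more than a name. The field names, the placeholder rule, and the example records are illustrative assumptions, not DIST's actual schema or DOD's criteria.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SystemRecord:
    """Hypothetical inventory entry; the real DIST schema has over 140 data elements."""
    system_id: str                          # unique identification number
    name: str
    business_process: Optional[str] = None  # link to a core business area or process
    platform: Optional[str] = None          # hardware platform
    language: Optional[str] = None          # application language
    os_software: Optional[str] = None       # operating system software and utilities
    interfaces: List[str] = field(default_factory=list)  # IDs of interfacing systems
    owner: Optional[str] = None             # organization responsible for the system

    def is_placeholder(self) -> bool:
        # Assumed rule: an entry is a "placeholder" if only its identifiers are filled in.
        descriptive = (self.business_process, self.platform, self.language,
                       self.os_software, self.owner)
        return all(v is None for v in descriptive) and not self.interfaces

# Illustrative records: one reasonably complete entry and one bare-minimum entry.
complete = SystemRecord("SYS-001", "Pay System", business_process="civilian pay",
                        platform="mainframe", language="COBOL", os_software="MVS",
                        interfaces=["SYS-007"], owner="DFAS")
bare = SystemRecord("SYS-002", "Depot Scheduler")
print(complete.is_placeholder(), bare.is_placeholder())  # False True
```

A completeness rule of this kind is one simple way an inventory manager could count how many entries consist of little more than a system name.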
Our analysis of DIST, as well as comments by officials in DOD components, revealed significant data integrity problems as well as problems with DIST's ability to transfer information to other information systems. The following examples illustrate the magnitude and range of problems pervading the database. DIST managers, service-level Year 2000 teams, and component Year 2000 teams acknowledge that the database contains duplicate, outdated, and erroneous information. The Air Force's Year 2000 team compared its own Year 2000 database to DIST and found over 1,100 systems that were shown on DIST but not on its database. The Army's Year 2000 team found a discrepancy of over 200 systems when it compared its system inventory to DIST. The Army team also stated that it does not trust the data in DIST and that it would continue to update and rely on its own Year 2000 database instead of DIST. Air Force, Army, and DIST Year 2000 focal point representatives agree that until DIST is purged of duplicate, outdated, and erroneous information, the service-level databases contain the most accurate inventories for those agencies. Many systems in DIST do not have complete status and descriptive information. Each entry in DIST is supposed to include over 140 data elements, such as name, size, system manager, software, hardware, and interfaces. But for many systems, managers responsible for the systems have merely entered "placeholder" information, that is, the bare minimum of information required to get the system into the database. In some cases, this may mean that only the system name appears in the database. At present, DIST contains an undetermined number of these incomplete entries. However, a February 1997 Defense analysis of migration systems listed in DIST showed high levels of incomplete data. The analysis, which was conducted on the 223 migration systems included in DIST, found that 55 percent of the migration systems did not identify interfaces with other systems, 77 percent did not disclose the computer installations where the system operated, 68 percent did not indicate the computer hardware on which the system ran, 61 percent did not disclose the system software, and 26 percent did not identify the organization responsible for the system. When we analyzed DIST as part of our review of Defense's migration effort, we also found that the database contained a high number of inaccurate system implementation and termination dates. For example, for three functional areas—clinical health, civilian personnel, and transportation—DIST showed that 92 legacy systems were terminated by April 1996, while functional managers told us that only 43 had actually been terminated. DIST also showed that 53 legacy systems were scheduled for future termination, but functional managers told us 91 were slated for termination. Our migration review also found that DOD had not ensured that the data definitions used in DIST were fully compatible with data maintained in other Defense information systems that track and report on systems. Without standard definitions and formats, data cannot be easily transferred to DIST from other systems that may be used by the DOD Principal Staff Assistants, program managers, and other decisionmakers. Although DOD has progressed in populating the DIST database, component officials told us that they have been confused about what is to be entered. Since Year 2000 efforts began, for example, components have been unsure about what qualifies as a system.
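The cross-inventory discrepancies described above (for example, the Air Force finding over 1,100 systems in DIST that were not in its own database) amount to comparing two lists of system identifiers. The sketch below shows one way such a reconciliation could be run; the identifiers and the assumption that systems match on a shared ID are illustrative, not how the Air Force or Army actually performed their comparisons.

```python
def reconcile(dist_ids: set, service_ids: set) -> dict:
    """Compare an enterprise inventory against a service-level inventory by system ID."""
    return {
        "in_dist_only": dist_ids - service_ids,     # candidate duplicate, outdated, or misassigned entries
        "in_service_only": service_ids - dist_ids,  # systems missing from the enterprise inventory
        "in_both": dist_ids & service_ids,
    }

# Toy example with hypothetical identifiers.
dist = {"SYS-001", "SYS-002", "SYS-003", "SYS-004"}
air_force = {"SYS-002", "SYS-004", "SYS-005"}
result = reconcile(dist, air_force)
print(sorted(result["in_dist_only"]), sorted(result["in_service_only"]))
```

Each discrepancy would still need to be researched by the owning organization; a set comparison only shows where the inventories disagree, not which record is correct.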
The Office of the ASD/C3I has only recently addressed the question of what qualifies as a system, in a memo and its DOD Year 2000 Management Plan. The plan now states that mission-critical systems, migration systems, legacy systems, systems with an annual operating budget over $2 million, and any system that interfaces with systems meeting the previous criteria must be reported to DIST. All other systems must be accounted for in a "one-line entry" to the ASD/C3I office. These new criteria will prompt DOD system managers to revisit their Year 2000 project plans and apply the criteria for reporting. Component and service officials indicated that inputting information into DIST is time-consuming and difficult, and the rules for entering and updating data are unclear. For example, database tables that would provide information on hardware manufacturers, series, and models are not up-to-date. Yet, as late as May 1997, no new entries on these hardware data elements were allowed to be made to the database. Also, while DOD components are required to enter Year 2000-related information on weapon systems into DIST, the database itself was not designed to apply to weapon systems or embedded systems. Without guidance on what data elements are applicable to what type of system, it is difficult to decide what information to enter on weapon systems and embedded systems. Component and service officials indicated that DIST cannot be easily queried and does not provide timely feedback. For example, components and services cannot directly query DIST for information. Instead, they have to request that a query be made by DIST managers. The lack of user friendliness and querying capabilities has compounded the level of distrust in DIST by service and component-level managers responsible for addressing the Year 2000 problem and further diminished the incentive to keep the database updated. DIST also does not contain key scheduling and tracking information, such as when critical systems within the services' and components' Year 2000 programs will be in the various phases and whether a system is behind schedule. Managers of interfacing systems need to know this information to coordinate key Year 2000 activities such as the start of system renovation, testing, and implementation of the modified system, and to determine whether software bridges will be necessary. Because the data in DIST are incomplete, inaccurate, and difficult to use, a number of Defense components and military services have developed and are relying on their own system inventories to manage and oversee their Year 2000 efforts. During our review, however, officials from the Navy informed us that they will be using DIST for their Year 2000 efforts because they do not have a servicewide inventory of their own. DIST managers are planning to implement new releases in September and October 1997 to make DIST a more user friendly tool and enable the services and components to directly query the database. They are also planning to increase the accuracy of the tool by developing a purging methodology to validate the data in DIST. The new DIST releases, which DISA has made partially available and plans to make fully available by October 1997, are designed to make it easier to input changes into DIST through the use of such features as on-line help pages, navigational buttons, and expanded tables on hardware and software types. The new versions are also designed to make it easier to send and receive database information.
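The reporting criteria stated in the revised management plan (mission-critical, migration, or legacy systems, systems with an annual operating budget over $2 million, and systems that interface with any of those) can be expressed as a simple filter. The sketch below is an illustrative reading of those criteria, not DOD's actual implementation, and the record fields are hypothetical.

```python
BUDGET_THRESHOLD = 2_000_000  # annual operating budget threshold stated in the plan

def must_report_to_dist(system: dict, already_reportable: set) -> bool:
    """Apply the plan's stated reporting criteria to one hypothetical system record."""
    if system.get("mission_critical") or system.get("category") in ("migration", "legacy"):
        return True
    if system.get("annual_budget", 0) > BUDGET_THRESHOLD:
        return True
    # Any system that interfaces with a system meeting the previous criteria.
    return any(peer in already_reportable for peer in system.get("interfaces", []))

systems = [
    {"id": "SYS-010", "mission_critical": True, "interfaces": []},
    {"id": "SYS-011", "category": "legacy", "interfaces": []},
    {"id": "SYS-012", "annual_budget": 500_000, "interfaces": ["SYS-010"]},
    {"id": "SYS-013", "annual_budget": 500_000, "interfaces": []},
]

# First pass picks up systems that qualify on their own; a second pass applies the
# interface rule (a full implementation would repeat until no new systems are added).
reportable = {s["id"] for s in systems if must_report_to_dist(s, set())}
reportable |= {s["id"] for s in systems if must_report_to_dist(s, reportable)}
print(sorted(reportable))  # SYS-013 would instead be a "one-line entry" to the ASD/C3I office
```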
While the new releases will allow the services and components to directly query the database for some types of information, they will not be able to enter or obtain data related to the Year 2000 problem, such as progress-related information that we believe is necessary for effective system management and departmentwide oversight of Year 2000 program status. The purging methodology is the first step of a systematic program of improving the quality and accuracy of DIST data. Its purpose is to identify duplicate, inactive, and incomplete data. DIST managers cautioned that the purge has to be done carefully. While some older systems may be obsolete, they may be attached to smaller, feeder systems that are not obsolete. These smaller systems may not be readily identifiable on the database. Other systems that may appear obsolete on the database may actually be older legacy systems with no recent updates. At the end of January 1997, DIST officials told us that it would take 90 days just to determine the methodology for the purge. However, as of July 1997, the methodology to purge the DIST database and ensure the validity of information it contains had not been completed. DISA officials told us that their inability to obtain funds to make the needed improvements was the reason for delays in completing DIST modifications. Although the ASD/C3I recently provided $2.5 million in funding for the upgrades, this delay has resulted in the database not being valid and usable for managing corrective actions while most of DOD is in the assessment phase, a phase the Department as a whole had planned to complete by June 1997. DOD's unwillingness until recently to fund needed improvements to DIST is inconsistent with both the importance it has previously attached to DIST for its Year 2000 program and DIST's intended role as the primary tool for DOD's future information technology efforts. Efforts to improve DIST may be further slowed by the failure of the military services and their components to input information on all of their systems into the database. The DOD Comptroller and the ASD/C3I recognized that earlier calls for the services and components to enter information into DIST did not succeed in completing the inventory. Consequently, they have set deadlines for entering this information and warned the services and components that if their systems were not entered into the database, they would risk losing funding for them. However, this deadline has been changed several times—from January 15, 1997, to March 5, 1997, to April 18, 1997. A DISA spokesperson recently reported that a new deadline would be established because the DIST upgrade has not been completed. Accordingly, as the June 1997 deadline for completion of the Year 2000 assessment phase for the Department passed, the database still remained incomplete. We believe that if DIST improvement efforts are not expedited, the inventory will be of little use to the services and components during the remaining critical stages of the Year 2000 correction effort. The potential consequences of not having this inventory for the assessment phase and the remaining phases of the Year 2000 effort are significant. First, without having a complete and reliable DIST during the assessment phase, DOD organizations that plan to use DIST would not have it as a management tool for ranking systems based on their importance to the mission and, in turn, ranking them for correction.
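The purging methodology described above is aimed at three kinds of entries: duplicates, inactive systems, and incomplete records. The sketch below illustrates one plausible set of screening rules; the matching-on-normalized-name heuristic, the staleness cutoff, and the sample records are assumptions, not DISA's actual methodology, which, as noted, had not been completed.

```python
from collections import defaultdict
from datetime import date

def purge_candidates(entries, stale_before):
    """Screen inventory entries for likely duplicates, inactive systems, and incomplete records."""
    flagged = {"duplicate": [], "inactive": [], "incomplete": []}

    # Duplicates: entries sharing a normalized name but carrying different IDs.
    by_name = defaultdict(list)
    for e in entries:
        by_name[e["name"].strip().lower()].append(e["id"])
    for ids in by_name.values():
        if len(ids) > 1:
            flagged["duplicate"].extend(ids)

    for e in entries:
        # Inactive: no update since the cutoff (these are candidates only, since an
        # old entry may be a live legacy or feeder system, as DIST managers cautioned).
        if e.get("last_updated") and e["last_updated"] < stale_before:
            flagged["inactive"].append(e["id"])
        # Incomplete: missing owner or interface information.
        if not e.get("owner") or "interfaces" not in e:
            flagged["incomplete"].append(e["id"])
    return flagged

entries = [
    {"id": "SYS-020", "name": "Supply Tracker", "owner": "AFMC",
     "interfaces": [], "last_updated": date(1995, 3, 1)},
    {"id": "SYS-021", "name": "supply tracker", "owner": None,
     "interfaces": [], "last_updated": date(1997, 5, 1)},
]
print(purge_candidates(entries, stale_before=date(1996, 1, 1)))
```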
Many DOD components can use their own inventories, assuming they are accurate and reliable, to do this ranking, but the Navy will not be able to because it does not have a servicewide inventory and was planning to use DIST for this purpose. Second, the Department as a whole will be constrained in its ability to ensure that all systems owned by the military services and components are being made Year 2000 compliant. While the Department can use individual service and component inventories for this purpose, there is a chance that some systems that fall between the components' boundaries of ownership may not be reflected in any inventory. Third, without an enterprisewide inventory, Defense cannot adequately ensure that all interfaces are properly identified and corrected. Fourth, for DIST to be an effective enterprise inventory, it is necessary to add data fields that provide DOD, the components, and the individual organizations with a much-needed mechanism to track the progress of both the overall program and, if necessary, individual programs. Such a mechanism is needed to quickly identify schedule delays, enact timely corrective measures, and if necessary, trigger contingency plans. Finally, in not having a single, enterprisewide inventory, the Department will not be able to readily identify areas that may need additional resources, such as testing facilities. The concerns we raised above demonstrate that if immediate attention is not given to ensuring that DIST is reliable, complete, and accurate, the Department's Year 2000 efforts will be at risk of failing. In addition, without a good enterprisewide system inventory, Defense will not be in a position to make the trade-off decisions necessary to ensure that information technology projects are being implemented at acceptable costs, within reasonable and expected time frames, and are contributing to tangible, observable improvements in mission processes. Given that Defense has a major effort ongoing to improve its information systems, and that the Year 2000 problem will likely call on the Department to divert resources from other information technology-related initiatives, decisive action is needed to provide the resources and schedule priorities required to accomplish DIST improvements and to ensure that the currency and accuracy of DIST information are maintained in the future. In order to ensure that DIST can be effectively used for Year 2000 efforts, we recommend that you direct your staff assigned to oversee implementation of the DOD Year 2000 Management Plan and the Director of the Defense Information Systems Agency to (1) ensure that all duplicate, inactive, and incomplete entries are identified and expedite development and implementation of the purging methodology and (2) expand the Year 2000 information included in DIST for individual systems to include the key program activity schedules that managers of interfacing systems need to ensure that their system interfaces are maintained during the renovation phase. This expansion should also include information that will enable the Office of the ASD/C3I, component, and organizational-level Year 2000 program officials to quickly identify schedule delays, promptly correct them, and if necessary, trigger contingency plans.
After the new criteria for reporting information systems are applied by system managers, we recommend that your staff and the Director of DISA, in conjunction with the services and components, act to (1) ensure that the DIST database is kept up to date and accurate, (2) identify instances of noncompliance so that responsible command organizations can take corrective actions, and (3) move forward with any other initiatives needed to make DIST an effective management tool. The Department of Defense provided written comments on a draft of this report. These comments are summarized below and reprinted in appendix I. The Assistant Secretary of Defense for Command, Control, Communications and Intelligence concurred with our recommendations. In concurring with our recommendations, Defense stated that it planned to perform statistical sampling of DIST data to validate their accuracy and that it would rely on the DOD Inspector General to validate DIST data accuracy during its Year 2000 audits. It stated that the services and components were responsible for entering their automated information systems into DIST or would risk losing funding for those systems. Also, DISA has instituted a data quality program for DIST, which includes purging duplicative and obsolete data, and will assist users in completing system entries as necessary. These actions will help enable DIST to become an effective tool both for DOD management oversight and for the components' day-to-day management of the Department's Year 2000 system correction efforts and beyond. However, in order to ensure complete validation of DIST, we believe that the Office of the ASD/C3I and DISA need to supplement these actions with efforts that involve fully comparing service inventories (and command inventories in the case of the Navy) to DIST and reconciling the differences identified. Further, these offices must play a more active role in ensuring that data fields necessary to track Year 2000 progress are included in DIST upgrades and that this information is also reconciled with the services' and components' specific Year 2000 project status databases. We appreciate the courtesy and cooperation extended to our audit team by your representatives and DISA officials and staff. Within 60 days of the date of this letter, we would appreciate receiving a written statement on actions taken to address these recommendations. We are providing copies of this letter to the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs; the Chairmen and Ranking Minority Members of the Subcommittee on Oversight of Government Management, Restructuring and the District of Columbia, Senate Committee on Governmental Affairs, and the Subcommittee on Government Management, Information and Technology, House Committee on Government Reform and Oversight; the Honorable Thomas M. Davis, III, House of Representatives; the Secretary of Defense; the Deputy Secretary of Defense; the Acting Under Secretary of Defense (Comptroller); the Director of the Defense Information Systems Agency; and the Director of the Office of Management and Budget. If you have any questions on matters discussed in this letter, please call me at (202) 512-6240 or Carl M. Urie, Assistant Director, at (202) 512-6231. Key contributors to this report were George L. Jones, Senior Information Systems Analyst, and David R. Solenberger, Senior Evaluator.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) efforts to improve the Defense Integration Support Tools (DIST) database, which serves as the DOD inventory of automated information systems and is intended to be used as a tool to help DOD components in addressing year 2000 date problems. GAO noted that: (1) a critical step in solving the year 2000 problem is to conduct an enterprisewide inventory of information systems for each business area to establish the necessary foundation for year 2000 program planning; (2) a thorough inventory also ensures that all systems are identified and linked to a specific business area or process, and that all enterprisewide cross-boundary systems are considered; (3) in addition, the inventory can play a critical role in the later stages of year 2000 correction; (4) for DOD, this inventory is particularly important given the tens of thousands of systems and the many interfaces between systems owned by the services and DOD agencies and considering that these systems vary widely in their importance in carrying out DOD missions; (5) in such a complex system environment, the inventory helps facilitate information technology resource and trade-off decisions; (6) the Office of the Assistant Secretary for Command, Control, Communications and Intelligence (ASD/C3I) and Defense Information Systems Agency (DISA) have recognized that, at present, DIST, the Department's enterprisewide inventory, is not a reliable and accurate tool for managing DOD's year 2000 effort; (7) as a result, the Office of the ASD/C3I and DISA have initiated efforts to: (a) improve the integrity of DIST inventory information; (b) facilitate access to information within the database; and (c) ensure that services and components input information needed to complete the inventory; (8) however, given the pace at which these efforts have been proceeding, GAO does not believe that DIST will be usable and reliable in time to have a beneficial impact on year 2000 correction efforts; (9) without a complete inventory, the Department as a whole cannot adequately assess departmentwide progress toward correcting the year 2000 problem and address crosscutting issues--such as whether system interfaces are being properly handled and whether there is a need for additional testing facilities; and (10) thus, the Office of the ASD/C3I and DISA need to expedite efforts to complete the DIST inventory before substantial renovation efforts begin in the services and components, and ensure that the information in DIST is accurate, complete, reliable, and usable.
States have contracted out social services for decades. Federally funded social service programs generally support the financial, employment, and other public assistance needs of children and families. In recent years, the amount of contracting for state-administered social services has increased and the nature of privatization has changed significantly. State governments have increased their spending on privatized services, and strong support from state political leaders and high-level program managers has helped prompt new privatization initiatives. Recent changes in social service privatization have also been spurred by changes in federal legislation. As a result of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (P.L. 104-193), for example, states are now permitted under TANF to privatize eligibility determinations, a function traditionally performed by state governments. To help ensure program accountability in federally funded social service programs, the Department of Health and Human Services (HHS) has responsibility for overseeing state performance. With the fundamental changes in the magnitude and nature of social service privatization, states continue to face new challenges—utilizing competitive markets, developing performance-based contracts, and enhancing program accountability—that program officials, contracting experts, and others believe warrant continued focus. When contracting for social services, states often seek to achieve fair and open competition among those who submit contract proposals. To protect opportunities for all qualified contractors to compete openly and fairly for government business, states may, among other things, limit certain activities of former government employees seeking employment with private organizations and prohibit financial, programmatic, and other conflicts of interest. Studies have specified that state ethics policies should apply to a broad range of public employees, including legislators, political appointees, program managers, and others involved in the contracting process, while minimizing to the extent possible the limits placed on the discretion of state employees to choose public or private employment. State governments and social service contractors often work in tandem to provide diverse program services. In response to state RFPs, contractors submit proposals they believe address state needs. Contractors recruit and hire qualified specialists to maximize their competitive positions, while at the same time government employees exercise their prerogatives in an open labor market to pursue private sector careers where they can apply their talents in return for pay and benefits commensurate with their experience and expertise. In this way, social service contractors make use of a flexible labor pool in their attempts to meet state service needs. While these practices may benefit states and social service contractors, from another perspective, the movement of government employees to work for contractors may also reduce the capacity of states to manage public services and may confer unfair advantages to certain offerors. Although many child support enforcement and TANF senior program managers left their positions from 1993 to 1998, about a quarter of them left to take positions with social service contractors. State employees generally joined contractors to increase their income. 
The states we examined were able to fill vacancies created by the loss of child support enforcement and TANF program managers and other staff with minimal disruption. However, Texas child support enforcement officials expressed concern over their losses in mid-level information technology (IT) personnel and related impacts on program services. Nationally, many senior program directors in both the child support enforcement and TANF programs left their positions in the last 5 years. About a quarter of these officials took positions with social service contractors. According to federal and state program officials, of the 41 states in which the child support enforcement director left that position, 11 directors went to work for social service contractors. Similarly, of the senior TANF program managers who left their positions in 40 states, 10 joined the staffs of social service contractors. According to state program officials we interviewed, senior program directors most often leave their jobs to retire, fill other government positions, or respond to changes in a state’s administration. Contractors told us that they recruit more from state child support enforcement programs than from TANF-supported programs. According to contractor officials, child support enforcement demands a high degree of technical expertise, particularly with respect to state information systems. Through such hiring practices, contractors believe they are in a better position to meet state program needs. State officials noted that personnel who leave the government for social service contractors generally do so to improve their salaries and benefits. Benefits such as stock option and profit-sharing plans offered by some companies are appealing and often critical to employees in weighing a decision to leave public service for private sector careers. Although pay and benefit considerations were often cited as the leading reasons state personnel left their positions for the private sector, we also found one instance in which state law resulted in state employees leaving their government jobs to become private sector employees. In 1995, Maryland’s legislature required two locations—Baltimore City and Queen Anne’s County—to privatize all child support enforcement services. The legislation also required that the selected contractor offer employment to state employees affected by the privatization. Of the over 300 employees who were affected, 213 accepted employment with the selected contractor, while many of the remaining employees retired or accepted jobs elsewhere. Some state officials we interviewed reported that they experienced limited impacts on program management after losing program management staff. We were told that the loss of senior officials in their states caused minimal disruption to the administration of the child support enforcement and TANF programs. These officials also reported that when they lost middle management and staff-level state employees to contractors, such losses did not cause disruption to program administration, as agencies were able to train new employees. Child support enforcement officials in Texas said that about 80 percent of their IT personnel, such as systems analysts and programmers, left state government jobs to join various firms that contract with the child support enforcement program and other program areas. 
The director of Texas' child support enforcement program indicated that the movement of IT personnel to the private sector has often been driven by private sector salaries that are up to about 40 percent higher than salaries for comparable government positions. The loss of these employees resulted in longer-term program impacts than did the loss of senior program managers in other states. In those instances when Texas could not replace the IT personnel it had lost, state officials said they had to contract for IT services at a cost higher than would have been incurred if such services had been performed by government employees. According to state child support enforcement program officials, the net loss of IT personnel resulted in poor or reduced service to the public, because without timely upgrades to automated systems, program personnel could not easily access case information, update files, or respond to customer inquiries. Among the proposals we reviewed, we found that child support enforcement and TANF-related proposals listing former state employees from any state as key personnel resulted in contract awards about as frequently as did proposals that did not list such employees. Of the 59 child support enforcement and TANF contract proposals submitted in the four states we reviewed, 34 listed at least one former state employee as key contract personnel. The remaining 25 proposals did not list any former state employees as key contract personnel. Those proposals that did not list former state employees as key personnel were awarded contracts about as often as those proposals that did. Slightly under two-thirds of the proposals in each group (those listing former state employees and those not) resulted in contract awards. Conversely, 38 percent of the proposals that listed former state employees, and 36 percent of those that did not, failed to result in contract awards. When we examined the child support enforcement and TANF programs separately, we still found that, in each program, proposals not listing former state employees resulted in contract awards about as often as proposals listing such employees. These comparisons are summarized in figure 1. Even when contractors listed former state employees as key personnel from the state offering the contract, the difference between the proportion of contracts awarded among these proposals and the proportion awarded among proposals not listing such employees was not statistically significant. Of the 18 proposals that listed employees from the same state that offered the contract, 14 resulted in contract awards. By comparison, of the 41 proposals that did not list such employees, 25 resulted in contract awards. Many states, in an effort to help ensure open and fair competition among contractors, have established ethics policies. However, more than one-third of the states lack one or more of the key ethics provisions, such as those prohibiting certain postemployment activities and conflicts of interest, which ABA and other organizations recommend as critical to state efforts aimed at protecting competitive contracting. In addition, the states we examined differ widely in their approaches to enforcing ethics policies. To address the disparities in state ethics policies, model laws prepared by organizations such as ABA offer frameworks that states can use to strengthen their ethics policies.
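The award counts reported above (14 of 18 proposals that listed former employees of the contracting state versus 25 of 41 proposals that did not) can be checked for statistical significance with a standard contingency-table test. The report does not say which test GAO used; the sketch below uses Fisher's exact test from SciPy as one reasonable choice for counts this small.

```python
from scipy.stats import fisher_exact

# 2x2 table built from the counts in the text: rows are proposals that listed /
# did not list former employees of the contracting state; columns are awarded / not awarded.
table = [[14, 18 - 14],
         [25, 41 - 25]]

_, p_value = fisher_exact(table)
print(f"award rates: {14/18:.0%} vs {25/41:.0%}, p = {p_value:.2f}")
# A p-value above 0.05 here is consistent with the statement that the difference
# in award proportions was not statistically significant.
```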
In addition to these model laws, the Medicaid statute may offer a model: it requires participating states to have in place conflict-of-interest provisions, applicable to those involved in the program, that are equivalent to federal conflict-of-interest requirements. Many state ethics policies aimed at helping ensure open and fair contracting have shortcomings relative to the provisions widely recommended for protecting the integrity of the competitive contracting process. In some states, ethics provisions apply only to a limited number of state employees, leaving others who may be involved in the contracting process uncovered by them. In other states, ethics provisions differ as to the type of activity prohibited and the period of time covered by the prohibition. Moreover, more than one-third of the states lack one or more ethics provisions, such as restrictions against certain employment activities by former state employees and prohibitions intended to deter the misuse of public office for private gain. The weaknesses in state ethics policies are demonstrated in the examples summarized here: State ethics provisions applicable to a limited number of employees. Oregon has provisions restricting the employment activities of former state employees. However, these restrictions apply only to a limited group of former state employees who held positions specifically listed in the law and not to the full range of positions that may involve contracting. (Or. Rev. Stat. 244.045 (1997)) State postemployment restrictions have gaps. South Carolina's ethics provisions apply only to former state employees who accept employment from an organization regulated by the state agency where they formerly worked or if this employment involves a matter in which they participated directly and substantially. (S.C. Code Ann. 8-13-755 (1997)) Hawaii's ethics provisions place some employment limitations on former employees and legislators but also expressly provide that those limitations do not prohibit a state agency from contracting with them to act on behalf of the state. (Haw. Rev. Stat. Ann. 84-18 (1998)) Length of states' postemployment prohibitions varies. Kansas' ethics provisions prohibit former state officers or employees from accepting employment with a person or business if they participated in the making of any contract with that person or business. The prohibition lasts for 2 years from the time the contract is completed or from the time the state employment ended, whichever is sooner. (Kan. Stat. Ann. 46-233 (1997)) In contrast, Kentucky's provisions prohibit certain former officials, for 6 months after termination of state service, from participating in or benefiting from any contract involving the agency where they were employed. The provisions also prohibit such individuals from accepting employment, compensation, or other economic benefits from any person or business that contracts with the state on a matter in which the former official was directly involved during the past 3 years of state service. (Ky. Rev. Stat. Ann. 11A.040 (1998)) According to a 1996 study completed by the Council of State Governments and the American Society for Public Administration, 17 states lacked one or more of the ethics provisions ABA and other organizations believe are necessary to promote open and fair competitive contracting, as summarized in table 1. Of these 17 states, 9 did not restrict postemployment activities of former state employees with organizations that compete for government contracts.
For example, Arkansas does not prohibit postemployment activities of former state employees that could have a bearing on social service contracting. Eight states lacked provisions limiting the direct involvement of former public employees in competitive contracting. State enforcement approaches to help ensure compliance with ethics provisions differed widely among the four states we reviewed. In these states, enforcement involved a variety of officials and organizations, such as the department or agency that contracted for services, ethics commissions, legislative and state auditors, inspectors general, and attorneys general. In Maryland, for example, the state placed representatives from the Attorney General’s office in major state agencies to provide technical assistance and help ensure that state agencies comply with applicable contracting policies. Two of the four states lacked enforcement elements that officials in those states believe are necessary to help ensure compliance with applicable ethics policies. In Massachusetts, the state Inspector General believes that social services contracting has a high level of risk, often associated with unfair contractor advantages, conflicts of interest, and personal gain through public office. According to officials from the Attorney General’s office, program staff have sometimes been ineffective in enforcing compliance with applicable ethics provisions. As a result, the Attorney General has had to prosecute contractors for violations of state ethics laws that Attorney General representatives believe could have otherwise been prevented. Arkansas lacks a statewide mechanism to enforce and resolve allegations of unethical activity. Unlike Massachusetts, the Attorney General in Arkansas does not have statewide responsibility to investigate illegal activities associated with state contracting. Instead, prosecuting attorneys in each county may investigate and resolve allegations associated with state contracting. Moreover, the Director of Arkansas’ Ethics Commission said the Commission has very narrow enforcement responsibilities as well. The Commission focuses predominantly on campaign finance issues and is not involved with monitoring the contracting process. The lack of some states’ ethics provisions may result in conflicts of interest that adversely influence state contract award processes. According to Arkansas and Massachusetts officials we interviewed, these situations have arisen in their states. Arkansas has contracted out the full range of child support enforcement services, including locating absent parents and collecting support payments, in selected counties. Arkansas has contracted with an established network of providers, some employees of whom had formerly worked for the state’s child support enforcement program. According to the state’s child support enforcement General Counsel, the lack of a comprehensive ethics policy undermined potential contractors’ confidence in the fairness of the contracting process. As a result, organizations that had not competed before were discouraged from submitting proposals. This situation, in turn, left the state with no choice but to contract with organizations with which it had long-standing relationships. At the same time, allegations have surfaced regarding the influence exerted by a state legislator to have a child support enforcement full-service contract awarded to an organization in which the legislator has a financial interest. 
In Massachusetts, state employee conflicts of interest had some adverse impact in contracting supported by TANF block grant funds. Officials in the Department of Transitional Assistance who administer TANF-funded programs had to recompete a contract because state employees were found to have a conflict of interest with respect to one of the competing contractors. Under similar circumstances, the state also had to terminate a contract that had previously been awarded. Final resolution of both these ethics issues required the state to award the contract at a time later than originally anticipated. ABA and Common Cause, a nonpartisan organization that studies government policies, have developed comprehensive model laws that address state ethics policies related to open and fair contracting and include restrictions regarding postemployment activities, conflicts of interest, and other safeguards. States seeking to strengthen their ethics policies may adopt the provisions included in these model laws. Although states are contracting extensively for child support enforcement and TANF-related services, federal laws for these two programs, as recently amended by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, do not require that states establish or comply with ethics policies like those in ABA’s model law. This is not true, however, with respect to Medicaid. The Congress incorporated conflict-of-interest provisions into state Medicaid plan requirements in 1979, when legislation was enacted authorizing greater use of health maintenance organizations. As a condition of state participation in Medicaid, states must have or enact provisions that require anyone involved in Medicaid-related contracting to be subject to conflict-of-interest requirements similar to, or at least as stringent as, those applicable to federal employees. The federal ethics provisions applicable to Medicaid also include employment restrictions and prohibitions on employees knowingly participating personally and substantially in matters in which they, family members, or certain business associates have a financial interest. More recently, section 4724(c) of the Balanced Budget Act of 1997 (P.L. 105-33) broadened the Medicaid state plan requirement to include additional conflict-of-interest safeguards. Specifically, it required states to have in place restrictions at least as stringent as those applicable to the federal contracting process related to the disclosure of contractor bid, proposal, and source selection information that might undermine open and fair competition. The Medicaid provisions allow states to tailor their ethics policies to their specific circumstances, relying on model laws and other enforcement approaches as they so choose, and offer some assurance that basic safeguards will be in place when a state is contracting for Medicaid services. Several states have established practices to help ensure that contract awardees are held accountable for program results, which provides added assurance that these states will receive the services for which they paid. These practices include performance measures states use when they assess contractor progress toward achieving program results. However, program officials in most states indicated that they rely on traditional accountability strategies, such as audits, that focus more on compliance with program rules than on results. 
The Government Performance and Results Act of 1993 and results-oriented state initiatives have helped establish frameworks to better focus program management on accountability for results. In addition to an integrated network of comprehensive ethics policies and enforcement approaches, contracting experts and program managers believe states need effective approaches for holding contractors accountable for program results. Effective accountability mechanisms, while difficult to develop, can help states ensure that they base contract payments on performance. Our earlier reviews of privatization have concluded that managers need to supplement current practices that assess compliance with program rules with a greater focus on results. Our earlier work on social service privatization also found that monitoring contractors’ performance toward achieving program results was among the most challenging aspects of the privatization process. This examination of program accountability found that assessing compliance with program requirements, while a significant component of accountability, can constrain the available resources state auditors are able to apply toward assessing longer-term program results. Faced with these priorities and related resource constraints, officials in Texas’ child support enforcement program, for example, have relied on compliance reviews of administrative processes and other approaches in an effort to monitor performance relative to results specified in applicable contracts. In recent audit cycles, the state’s auditors have reviewed compliance with allowable expenditures and reporting requirements. Assessing program results can play a critical role in reviewing contractor performance. Such assessments could incorporate various techniques, such as monitoring outcomes and reviewing qualitative information. In Maryland’s oversight of its TANF-supported welfare-to-work programs, for example, the state has developed a planning process that sets forth long-term goals and objectives for its Department of Human Resources—which administers TANF—and each program it manages and oversees. In addition, program officials, through Strategic Management Assessment Review Teams, periodically assess progress providers have made toward achieving program results, such as program enrollment and completion, employment, and job retention. Generally, contractors are paid on the basis of their performance in each of these program dimensions. Assessing program results enables states to determine whether contractors have in fact achieved intended outcomes. Under the Results Act, HHS developed a framework for establishing performance measures and assessing program results in the child support enforcement program. HHS’ Office of Child Support Enforcement (OCSE), in conjunction with the states, established a 5-year strategic plan that included program goals and performance measures for evaluating the magnitude of increases in paternities established, support orders obtained, and collections received. OCSE and the states developed these measures after considering key dimensions indicative of state performance in providing child support enforcement services. Subsequently, these and other measures were included in modifications to the program’s incentive funding structure. Such frameworks can enhance state strategies to improve accountability for program results in privatized social service programs supported with federal funds. 
Beyond the Results Act requirements applicable to federally administered programs, some states, such as Oregon and Minnesota, established their own strategies for assessing program results. Toward this end, state legislatures or executive branch agencies have developed program goals and measures for assessing performance. Moreover, one recent study concluded that 47 states have established performance-based budgeting systems intended to improve the effectiveness of state programs. These state initiatives, combined with a greater orientation toward program results in HHS, provide additional management tools that can be used to optimize the anticipated benefits from privatizing child support enforcement, welfare-to-work, and other social service programs. Social service contracting presents many significant challenges to state governments, including the need to achieve competitive contracting and accountability for program results. These challenges, coupled with the magnitude of federal funds that support privatized social service programs, amplify the call for adequate protections against ethics violations that can potentially undermine competition. While our work in selected states suggests that contract awards were not related to the “revolving door,” there is room to strengthen state ethics policies and enforcement approaches to help strengthen open and fair competition. Without comprehensive ethics policies and effective enforcement approaches intended to safeguard competitive contracting, states may not benefit as fully from competition when they privatize social services. Similarly, an insufficient capacity to assess progress toward achieving program results weakens state assurances that contractors will provide federally funded services efficiently and effectively. Faced with these challenges, states can take steps to mitigate threats to competition. By relying on comprehensive models for guidance, states can develop or refine their ethics policies and adopt effective enforcement approaches to strengthen competition in privatized social services. States have been required by statute, in fact, to adopt and apply certain conflict-of-interest requirements to state officials with regard to Medicaid. While the Results Act provides a framework for reorienting program management toward accountability for results, states could take additional measures to help ensure that they obtain desired results from their contracting efforts. Together, fortified ethics policies, effective enforcement approaches, and accountability strategies focused on program results can optimize the states’ capacity to achieve the benefits of social service privatization. We received comments on a draft of this report from HHS, the four states in which we conducted detailed work, and a recognized expert in social service privatization. The comments generally concurred with our findings and conclusions. We also received a number of technical comments that we incorporated where appropriate. We are providing copies of this report to the Honorable Donna E. Shalala, the Secretary of HHS; and the Honorable Olivia A. Golden, HHS’ Assistant Secretary for Children and Families. We will also send copies to state child support enforcement and TANF directors and to other interested parties on request. If you or your staffs have any questions about this report, please contact David D. Bellis, Assistant Director, or Mark E. Ward, Senior Evaluator, at (202) 512-7215. Other major contributors are Gregory Curtis, Joel I. 
Grossman, Craig H. Winslow, and James P. Wright. This appendix provides additional details on the methods we used to meet the objectives of our study. To help us understand state ethics laws and their enforcement, we reviewed GAO reports, journal articles, and studies on contracting, as well as state ethics laws and policies. To estimate the extent of national movement by former state employees to positions at social service contractors, we obtained information from federal and state program managers in the child support enforcement and Temporary Assistance for Needy Families (TANF) programs. We supplemented these data by interviewing officials of public employee unions and other organizations. In addition, we interviewed state government officials in four states to determine how their states responded to the loss of personnel and the impact this loss had on state programs. To aid us in determining the extent to which state employees left government positions for employment with contractors and the effect this movement had on contract awards, we examined the proposals submitted in response to eight recently issued requests for proposal (RFP) in the four selected states. We selected two full-service child support enforcement RFPs—one in Arkansas and one in Maryland—and two child support enforcement RFPs for automated systems—one in Massachusetts and one in Texas. We also chose one TANF welfare-to-work RFP in each of the four states. We reviewed all proposals submitted in response to RFPs for these contracts to identify former government employees who had worked in either state child support enforcement or welfare-to-work programs and were subsequently listed as key personnel designated to perform specific functions in direct support of the contract, pending selection of contract awardees. Sometimes states awarded more than one contract for each RFP. In addition, the projected contract costs among the contracts we reviewed varied widely. To supplement the information we obtained from our review of proposals, we interviewed state officials to obtain their perspectives on how the movement of former state employees to organizations competing for contracts affected contract awards. We did not evaluate the merits of state contract award decisions, nor did we independently assess whether states or contractors complied with applicable ethics policies. We examined state ethics laws, policies, and enforcement approaches and their federal counterparts to determine the extent to which state ethics laws and policies parallel generally accepted ethics standards, as defined by the American Bar Association, contracting experts, and others. We also interviewed state officials to identify any allegations of state ethics violations and their resolution. In addition, we examined state and federal policies and practices for holding contractors accountable for program results. We also interviewed state program officials in the four selected states to identify the practices they used to hold contractors accountable for program results. Finally, we interviewed Department of Health and Human Services officials regarding their oversight of state and local social service contracting in the context of applicable federal policies. We focused on the child support enforcement and TANF programs in four states—Arkansas, Maryland, Massachusetts, and Texas. We selected these two programs because each receives a significant level of federal funds and each makes widespread or long-term use of contracting. 
We chose these four states because they offered variation in the strength of their respective ethics provisions. In addition, these four states were using contractors to provide child support enforcement services or to design related automated systems. All four states contracted out TANF-funded welfare-to-work services. Table I.1 summarizes the selected states, number of proposals submitted in response to each RFP, and number of contracts awarded. Welfare Reform: States Are Restructuring Programs to Reduce Welfare Dependence (GAO/HEHS-98-109, June 18, 1998). Child Support Enforcement Privatization: Challenges in Ensuring Accountability for Program Results (GAO/T-HEHS-98-22, Nov. 4, 1997). Social Service Privatization: Expansion Poses Challenges in Ensuring Accountability for Program Results (GAO/HEHS-98-6, Oct. 20, 1997). Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 30, 1997). Welfare Reform: Three States' Approaches Show Promise of Increasing Work Participation (GAO/HEHS-97-80, May 30, 1997). Privatization: Lessons Learned by State and Local Governments (GAO/GGD-97-48, Mar. 14, 1997). Child Support Enforcement: Early Results on Comparability of Privatized and Public Offices (GAO/HEHS-97-4, Dec. 16, 1996). Child Support Enforcement: Reorienting Management Toward Achieving Better Program Results (GAO/HEHS/GGD-97-14, Oct. 25, 1996). Employment Training: Successful Projects Share Common Strategy (GAO/HEHS-96-108, May 7, 1996). District of Columbia: City and State Privatization Initiatives and Impediments (GAO/GGD-95-194, June 28, 1995). Welfare to Work: Measuring Outcomes for JOBS Participants (GAO/HEHS-95-86, Apr. 17, 1995). Office of Government Ethics: Need for Additional Funding for Regulation Development and Oversight (GAO/T-GGD-92-17, Mar. 4, 1992). Ethics Enforcement: Process by Which Conflict of Interest Allegations Are Investigated and Resolved (GAO/GGD-87-83BR, May 21, 1987).
| Pursuant to a congressional request, GAO provided information on states' social service contracting, focusing on: (1) the extent to which state government employees have moved to positions at social service contractors and the impact such movement has had on the management of publicly provided social services; (2) determining the relative success in winning contracts by contractors who hired state employees and contractors who did not; (3) state ethics laws, policies, and enforcement approaches that address the employment of former state employees and other related issues; and (4) state practices for holding contractors accountable for achieving program results through contracted services. GAO noted that: (1) since 1993, 11 of 42 state child support enforcement directors who left their government positions accepted managerial positions with contractors providing child support enforcement services, according to federal and state program officials; (2) similarly, since 1993, federal and state officials indicated that 10 of the 41 high-level Temporary Assistance for Needy Families (TANF) managers who left state services accepted positions with social service contractors; (3) when the four states GAO examined lost child support enforcement and TANF managers and other staff, officials indicated that they experienced short-term difficulties because they were required to train staff selected to fill the managerial vacancies; (4) although, nationwide, these 21 directors and managers left the government to accept positions with social service contractors, GAO's review of 59 contract proposals in four states found that proposals listing former state employees as key personnel did not result in contract awards any more frequently than did proposals not listing such employees; (5) this was the case for both the child support enforcement and TANF-related programs; (6) GAO's analysis also showed that proposals listing former employees from the same state in which the bidding took place resulted in contracts about as frequently as did proposals not listing such employees; (7) most states have established some ethics policies designed to help ensure open and fair contracting by adopting provisions determined by the American Bar Association (ABA) and other organizations to be critical in prohibiting certain postemployment practices and conflicts of interest; (8) however, more than one-third of the states have ethics policies that lack one or more of these provisions; (9) among the four states GAO examined, enforcement approaches to help ensure compliance with applicable ethics provisions differed widely; (10) to address these inconsistencies, model laws prepared by ABA and others offer possible frameworks for strengthening state ethics policies; (11) once contracts have been awarded, several states have instituted mechanisms aimed at holding contractors accountable for program results; (12) these mechanisms include measures states apply when they assess contractor performance; and (13) while these states have established practices to assess contractor progress toward achieving program results, many others generally rely on basic accountability measures that focus on compliance with program rules rather than on results. |
With the terrorist attacks of September 11, 2001, the threat of terrorism rose to the top of the country's national security and law enforcement agendas. In response to these growing threats, the Congress passed and the President signed the Homeland Security Act of 2002, which created DHS. We have previously identified IT management as critical to the transformation of the new department. Not only does DHS face considerable challenges in integrating the many systems and processes that provide management with information for decision making, but it must sufficiently identify its future needs in order to build effective systems that can support the national homeland security strategy in the coming years. To jump start this planning process and also begin to identify opportunities for improved effectiveness and economy, OMB issued two memorandums in July 2002 to selected agencies telling them to "cease temporarily" and report on new IT infrastructure and business systems investments above $500,000. On March 1, 2003, DHS assumed operational control of nearly 180,000 employees from 22 incoming agencies and offices. In establishing the new department, the Congress articulated a seven-point mission for DHS: Prevent terrorist attacks within the United States. Reduce the vulnerability of the United States to terrorism. Minimize the damage and assist in the recovery from terrorist attacks. Carry out all functions of entities transferred to the department, including by acting as a focal point regarding natural and man-made crises and emergency planning. Ensure that the functions of the agencies within the department that are not directly related to securing the homeland are not diminished or neglected. Ensure that the overall economic security of the United States is not diminished by efforts aimed at securing the homeland. Monitor connections between illegal drug trafficking and terrorism, coordinate efforts to sever such connections, and otherwise contribute to efforts to interdict illegal drug trafficking. To help DHS accomplish its mission, the Homeland Security Act of 2002 establishes four mission-related directorates, the (1) Border and Transportation Security directorate, (2) Emergency Preparedness and Response directorate, (3) Science and Technology directorate, and (4) Information Analysis and Infrastructure Protection directorate. In addition to these directorates, the U.S. Secret Service and the U.S. Coast Guard remain intact as distinct entities within DHS; Immigration and Naturalization Service adjudications and benefits programs report directly to the deputy secretary as the Bureau of Citizenship and Immigration Services, and the Management directorate is responsible for budget, human capital, and other general management issues. According to the most recent President's budget, DHS expects to make about $4 billion in IT investments in fiscal year 2004—the third largest IT investment budget in the federal government. In addition, as we have testified, information management and technology are among the critical success factors that the new department should emphasize in its initial implementation phase.
For example, DHS currently has several ongoing IT projects that are critical to the effective implementation of its mission, such as the Integrated Surveillance Intelligence System, which is to provide "24 by 7" border coverage through ground-based sensors, fixed cameras, and computer-aided detection capabilities; Student Exchange Visitor Information System, which is expected to manage information about nonimmigrant foreign students and exchange visitors from schools and exchange programs; Automated Commercial Environment project, which is to be a new trade processing system; and United States Visitor and Immigrant Status Indicator Technology (US-VISIT), a governmentwide program intended to improve the nation's capacity for collecting information on foreign nationals who travel to the United States, as well as control the pre-entry, entry, status, and exit of these travelers. Moreover, as all of the programs and agencies are brought together in the new department, it will be an enormous undertaking to integrate their diverse communication and information systems. Among the IT challenges that the new department will have to face and overcome are developing, maintaining, and implementing an enterprise architecture, and establishing and enforcing a disciplined IT investment management process (which includes establishing an effective selection, control, and evaluation process). The department's ability to overcome these challenges is complicated by the IT management problems that its major components had when they transferred to DHS. Specifically, as we previously reported, we still have numerous outstanding IT management recommendations that require action at component agencies, such as the Customs Service and the Coast Guard. Figure 1 illustrates the timing of OMB's July 2002 memorandums. These memorandums instructed selected agencies to (1) cease temporarily new IT infrastructure and business systems (i.e., financial management, procurement, and human resources systems) investments above $500,000 pending a review of the investment plans of all proposed DHS component agencies; (2) identify and submit to OMB information on any current or planned spending on these types of initiatives; and (3) participate in applicable IT investment review groups co-chaired by OMB and the Office of Homeland Security. According to OMB, its goal in issuing these memorandums was to seek opportunities for improved effectiveness and economy. In addition, according to officials from OMB's Office of Information and Regulatory Affairs, another purpose was to obtain an inventory of current and planned IT infrastructure and business systems investments for organizations to be moved to DHS, which was expected to help in the administration's transition planning. Although OMB directed selected agencies to temporarily cease these investments, this did not necessarily mean that work was to be stopped on all IT infrastructure and business systems projects at the applicable agencies. First, the memorandums only pertained to funding for new development efforts and not to existing systems in a "steady state" using operations and maintenance funding. Second, the cessation did not apply if funds pertaining to a development or acquisition contract had already been obligated. Third, agencies could request an expedited review to obtain the approval to proceed if they had an emergency or critical need. The following are examples of how OMB's direction to temporarily cease IT investments would apply in certain circumstances.
If an agency had an existing procurement system in a steady state in which no major modifications or modernization efforts were planned, there would have been no effect on the funding of this system. If an agency had an ongoing contract with available obligations for the development of a financial management system, there would have been no effect on this contract, but new obligations for development or modernization efforts would have required approval by the review group. If an agency wanted to award a contract over $500,000 for a new or modernized IT infrastructure item such as a local area network, it would have been required to obtain approval from the investment review group before proceeding. Our testimony of October 2002 stated that it was not possible to assess the full effect of the July memorandums on the selected agencies at that time. Except for emergency requests, according to representatives from OMB's Office of Information and Regulatory Affairs, the review group had not taken any action at the time of our review on the agencies' submissions in response to the July memorandums because neither they nor OMB had completed their reviews of these documents. The July memorandums called on the Homeland Security IT Investment Review Group to assess individual IT investments as part of considering whether to consolidate or integrate component agency efforts. In fulfilling this role, the review group relied on an informal process, which was not documented. Although the review group reviewed the few investments that component agencies submitted, according to OMB and DHS IT officials, the group generally addressed broader issues related to the transition to the new department. In particular, these officials noted that the review group concentrated on longer-term IT strategic issues associated with the transition to the proposed department, such as those related to the development of an enterprise architecture. The investment review group was tasked with (1) reviewing component agency IT investment submissions that met the criteria in the memorandums, and (2) making recommendations related to these submissions, including looking for opportunities to consolidate and integrate component agency investments. According to OMB IT representatives, the group generally met once a week but did not have a documented process for performing reviews of the few component agency investments that were submitted for review. These officials reported that in the review process that was implemented, (1) agencies requested approval of selected IT investments, (2) OMB and the investment review group reviewed the agency submission, and (3) the review group made a recommendation. Once this recommendation was made, the normal budget execution process was implemented. Moreover, according to these representatives, the investment review group used the principles contained in section 300 of OMB Circular A-11 and section 8(b) of OMB Circular A-130 as the criteria for evaluating submitted investments. In addition, in commenting on a draft of this report, representatives from OMB's Office of Information and Regulatory Affairs and Office of the General Counsel stated that although the activities of the Homeland Security IT Investment Review Group were generally conducted on an informal basis, the group relied on the already-existing processes documented in these circulars to fulfill its responsibilities.
According to OMB IT representatives, as the establishment of DHS drew closer, the focus of the review group shifted from reviewing individual investments to addressing the IT strategic issues involved with establishing the department. In particular, according to DHS officials, the review group created six working groups to address, respectively, business architecture, networks, information security, Web management, directory services (e.g., e-mail capability), and technical reference model issues. In addition, according to these officials, the investment review group took into account transition work being performed by other entities. For example, the review group worked with a liaison from the Chief Financial Officers Council, which was looking at financial management system matters related to the new department. The July 2002 memorandums resulted in some changes to agency IT infrastructure and business systems investments. Specifically, according to OMB and DHS IT officials, the review group recommended approval, with conditions, of the five IT investments submitted to it, and four component agencies reported that they changed other initiatives as a result of the memorandums. However, it is not known whether, or the extent to which, savings have resulted from the memorandums. In particular, OMB did not track the savings associated with the July memorandums because, according to OMB IT representatives, budgetary savings had not occurred when the review group was in place. Nevertheless, OMB and DHS IT officials cited other benefits that resulted from the memorandums, such as the identification of ongoing component agency efforts or resources that were important to the operation of the department at its inception. Four component agencies submitted five IT investment requests to be reviewed by the review group. According to OMB and DHS IT officials, all of these requests were recommended for approval with conditions. In addition, four component agencies reported that, on their own initiative, they terminated, delayed, or changed other initiatives as a result of the July memorandums. (See table 1.) The July memorandums stated that initial estimates indicated that potential savings of between $100 million and $200 million (IT infrastructure) and $65 million and $85 million (business systems) could be achieved over a 2-year period as a result of consolidating and integrating component agency investments. OMB reported to congressional committees that these estimates were based primarily on best practices in the federal government and private industry. However, an OMB IT representative stated that these estimates were a rough approximation and that no documentation existed to support how they were derived. The July memorandums also stated that the review group would track these savings. Moreover, OMB reported to congressional committees that this tracking would include a breakout of the savings, the cause of the savings, and the time period in which the savings would be generated. However, a tracking process was not established because, according to an OMB IT representative, no budgetary savings had occurred at the time that the investment review group was in place since no investment was terminated by the group. According to this representative, OMB still believes that budgetary savings will occur and expects that DHS will track these savings.
Moreover, this representative stated that OMB will be actively working with DHS as part of its budgetary and management processes to ensure that such savings occur. DHS's CIO agreed that savings are expected to result from the department's consolidation and integration of systems. He also stated that DHS will be tracking such savings and has established a mechanism for doing so. Specifically, the CIO pointed to DHS's establishment of IT commodity councils—groups that are responsible for a collection of related materials or services—that would perform this function. According to the Director of Strategic Sourcing and Acquisition Systems, the councils have established project teams that are responsible for tracking savings. According to this official, each project team is in the process of developing its project plan, departmental requirements, and savings targets. Until savings resulting from the consolidation and integration of systems and services are identified, tracked, and reported, it will remain unknown whether OMB's July memorandums and the subsequent establishment of DHS have achieved the potential economies identified by OMB. In addition, DHS IT officials stated that they were not aware of any plans to report budgetary savings resulting from the consolidation and integration of systems to applicable congressional committees. Such savings information is an important element for the Congress to consider when deliberating DHS budget requests and overseeing its IT management. Moreover, the Chairman of the House Committee on Government Reform has previously expressed concern that there has been a tremendous push for additional IT spending at DHS component agencies without ensuring appropriate management or accountability. Although budgetary savings have not yet been identified, DHS IT officials, including the CIO, cited other benefits of the July memorandums. In particular, DHS IT officials estimated that several million dollars in costs have been avoided as a result of the Secret Service decision. (A Secret Service IT official provided an explanation of how this estimate was derived, but we could not validate this amount because it was not clearly supported by the documentation provided.) In addition, the CIO stated that the investment review group evolved into the department's CIO Council, which is responsible for developing, promulgating, implementing, and managing a vision and direction for information resources and telecommunications management. Further, the DHS chief technology officer reported that the review group provided the new department with a head start on day one operations by, for example, deciding to use the Immigration and Naturalization Service's network backbone for the department. Finally, these and DHS component agency IT officials stated that the memorandums facilitated the department's long-term IT planning efforts, including the development of an enterprise architecture. Once DHS became operational and the investment review group established by the July memorandums no longer existed, the department established an IT investment management process that includes departmental reviews of component agency IT investments meeting certain criteria. As part of the selection phase of this process, DHS's CIO reported that he approved the department's IT portfolio as part of the fiscal year 2005 budget cycle.
In addition, as of January 26, 2004, the department’s highest level investment management board had performed control reviews of nine investments that had reached key decision points. In each of these cases, the project was allowed to proceed although additional documentation was required and/or conditions were set. Finally, the department’s investment management process is still evolving as the department attempts to deal with a large number of IT investments eligible for departmental reviews. In May 2003, DHS issued an investment review management directive and IT capital planning and investment control guide, which provide the department’s entities with requirements and guidance on documentation and review of IT investments. In particular, the management directive establishes four levels of investments, the top three of which are subject to review by department-level boards—the Investment Review Board (IRB), Management Review Council, and Enterprise Architecture Board. Appendix I provides a description of these department-level boards and the investments that they are responsible for. The directive also establishes a five-phase acquisition process that calls for these investments to be reviewed at key decision points, such as program authorization. In addition, the IT capital planning and investment control guide lays out a process for selecting, controlling, and managing investments. Figure 2 provides an overview of the review process outlined in the management directive and capital planning and investment control guide. As part of the selection phase of its capital planning and investment control process, DHS reviewed component agency IT investments for its fiscal year 2005 budget submission. Specifically, according to DHS IT officials, (1) the CIO approved the department’s IT portfolio and (2) all of the major IT systems submitted to OMB for the fiscal year 2005 budget were assessed and scored by an investment review team. In addition, beginning in May 2003, DHS’s top-level board (the IRB) began reviewing the department’s highest priority projects. As of January 26, 2004, the department had performed 12 control reviews of nine investments. Table 2 summarizes the results of these reviews. Although DHS is making progress in reviewing component agency projects, its investment management process continues to evolve. In particular, as of January 2, 2004, the department had identified about 100 IT programs that were eligible for review by its two top-level departmental boards and, according to IT officials, is having difficulty in bringing all of these programs before the boards in a timely manner. Moreover, DHS has not established a process to ensure that control reviews of component agency IT investments are performed in a timely manner. Specifically, although DHS’s capital planning and investment control guide states that the Office of the CIO will maintain a control review schedule for all initiatives in the department’s IT investment portfolio, as of January 2, 2004, this schedule has not been developed. According to the DHS IRB coordinator and IT officials, DHS has requested information from its component entities related to the schedules and priorities of its level 1, or top-level, investments. These officials stated that such information can then be used to develop a master milestone calendar for control reviews. 
Control review schedules, or master milestone calendars, are important to ensure that DHS is reviewing its highest priority IT investments in a timely manner so that it is able to effect changes to component agency approaches or even terminate a poorly managed or strategically unnecessary investment, if appropriate. DHS's CIO also stated that the department's CIO Council is developing a peer review process for major IT projects that is expected to include defining a life-cycle management process and a quarterly reporting process. The CIO stated that the new process is expected to be instituted by the end of March 2004. OMB took a prudent step in issuing its July memorandums directing federal agencies that were expected to be part of the new department to temporarily cease funding for new IT infrastructure and business systems investments in anticipation of the establishment of DHS. Although documentation of the implementation of the memorandums was lacking, OMB and DHS IT officials outlined an approach that included both reviewing specific IT investments and beginning to plan for the transition to the new department. Further, DHS component agencies identified actions that they took, such as putting initiatives on hold, and other benefits that resulted from the memorandums. Nevertheless, according to OMB IT representatives, budgetary savings as a result of the July memorandums had not occurred at the time that the review group was in place. Although DHS has begun to establish a mechanism to track such savings in the future, until savings resulting from the consolidation and integration of systems and services are identified, tracked, and reported, it will remain unknown whether OMB's July memorandums and the subsequent establishment of DHS have achieved the millions of dollars in potential economies identified by OMB. The Congress would benefit from such information in its deliberations on the department's budget and in its oversight of DHS's management of IT. Finally, DHS has begun to perform high-level oversight of component agency IT investments, although much remains to be accomplished and the process for this oversight is still evolving. Accordingly, DHS continues to face challenges in providing robust and constructive oversight of component agency IT investments. A significant challenge remaining is determining the current status and upcoming major milestones of IT investments subject to departmental review in order to schedule timely control reviews. To demonstrate its progress in consolidating and integrating its systems and services, we recommend that the Secretary of Homeland Security direct the Chief Information Officer to periodically report to appropriate congressional committees the budgetary savings that have resulted from the department's IT consolidation and integration efforts, including a breakout of the savings, the cause of the savings, and the time period in which the savings have been, or will be, generated. To ensure that IT investments subject to departmental review undergo timely control reviews, we recommend that the Secretary of Homeland Security direct the Chief Information Officer to develop a control review schedule for IT investments subject to departmental oversight (i.e., level 1, 2, and 3 investments). We received oral comments on a draft of this report from OMB and DHS. Representatives from OMB's Office of Information and Regulatory Affairs and Office of the General Counsel generally agreed with the findings of the report.
These representatives also provided a technical comment that we included in the report, as appropriate. In addition, DHS's Office of the CIO capital planning and investment control officials stated that the report was factually accurate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security and the Director, Office of Management and Budget. Copies will also be available at no charge on the GAO Web site at www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-9286 or Linda J. Lambert, Assistant Director, at (202) 512-9556. We can also be reached by e-mail at pownerd@gao.gov and lambertl@gao.gov, respectively. Another key contributor to this report was Niti Bery. [Appendix I table: chairs, membership, and responsibilities of DHS's department-level review boards, including the Deputy Secretary (Chair) and the Chief Information Officer (CIO); one board also reviews all level 1 and 2 IT investments and makes recommendations to the Investment Review Board and Management Review Council.] DHS also plans to employ a Joint Requirements Council to serve as a working group to make recommendations to the Investment Review Board and Management Review Council on cross-cutting IT investments. The Joint Requirements Council, whose membership includes the Chief Technology Officer, Director of Strategic Sourcing, and chief operating officers of DHS's component entities, met for the first time on January 7, 2004.

In July 2002, the Office of Management and Budget (OMB) issued two memorandums directing agencies expected to be part of the Department of Homeland Security (DHS) to temporarily cease funding for new information technology (IT) infrastructure and business systems investments and submit information to OMB on current or planned investments in these areas.
GAO was asked to (1) explain OMB's implementation of these memorandums, (2) identify any resulting changes to applicable IT investments, and (3) ascertain if DHS has initiated its own investment management reviews and, if so, what the results of these reviews have been. The July 2002 memorandums established an investment review group cochaired by OMB and the Office of Homeland Security to review submitted investments and estimated that millions of dollars potentially could be saved as a result of consolidating and integrating component agency investments. The investment review group relied on an informal, undocumented process to fulfill its responsibilities. Nevertheless, according to OMB and DHS IT officials, the review group both reviewed five component agency investments that were submitted and addressed long-term IT strategic issues related to the transition to the new department. OMB and DHS IT officials cited some changes to agency IT infrastructure and business systems investments because of the July memorandums. In addition, DHS IT officials cited other benefits that resulted from the memorandums. However, it is not known whether, or the extent to which, savings have resulted from the memorandums. In particular, OMB did not track savings associated with the July memorandums because, according to OMB IT staff, anticipated budgetary savings had not occurred at the time the review group was in place. DHS's chief information officer stated that the department plans to track savings related to the consolidation and integration of systems and has established a mechanism for doing so. However, until such savings are identified, tracked, and reported, it will remain unknown whether the July memorandums and the subsequent establishment of DHS have achieved the potential economies identified by OMB. Once DHS became operational and the investment review group no longer existed, the department established its own IT investment management process, which is still evolving. As part of this process, between May 2003 and late January 2004, DHS's highest-level investment management board performed reviews of nine investments that had reached key decision points. Even with this progress, the department has identified about 100 IT programs that are eligible for review by its two top department-level boards. However, DHS has not established a process to ensure that key reviews of such IT investments are performed in a timely manner.
The two main fee basis care delivery methods—preauthorized care and emergency care—are approved using two different processes. Preauthorizing fee basis care involves a multistep process conducted by the VAMC that regularly serves a veteran. The preauthorization process is initiated by a VA provider who submits a request for fee basis care to the VAMC's fee basis care unit, which is an administrative department within each VAMC that processes VA providers' fee basis care requests and verifies that fee basis care is necessary. Once approved by the VAMC Chief of Staff or his or her designee, the veteran is notified of the approval and can choose any fee basis provider willing to accept VA payment at predetermined rates. (See fig. 1.) VA's fee basis care spending increased from about $3.04 billion in fiscal year 2008 to about $4.48 billion in fiscal year 2012. VA's fee basis care spending hit its highest level of about $4.56 billion in fiscal year 2011. Across the 5 fiscal years we reviewed, VA spent a total of about $20.29 billion on fee basis care. (See fig. 4.) The overall increase in fee basis care spending from fiscal year 2008 to fiscal year 2012 can be attributed to increases in the number of veterans who received fee basis care. The slight decline in fee basis care spending between fiscal year 2011 and fiscal year 2012 is likely due to VA's adoption of Medicare rates for its fee basis care program. Medicare reimbursement rates are typically lower for most health care services than VA's previous fee basis care reimbursement rates. VA's fee basis care utilization also increased from fiscal year 2008 to fiscal year 2012—although the number of unique veterans receiving care from fee basis providers has increased less rapidly in the last 3 fiscal years. Fee basis care utilization hit its highest point of the 5-year period in fiscal year 2012, when about 976,000 unique veterans received care from fee basis providers, about 155,000 more veterans than in fiscal year 2008. (See fig. 5.) According to VA officials, the increase in the number of unique veterans receiving care from fee basis providers from fiscal year 2008 to fiscal year 2012 was likely due to VA's use of fee basis care to meet goals for the maximum amount of time veterans wait for VAMC-based appointments. Fee basis utilization in this report does not include pharmacy-only fee basis care. Pharmacy-only fee basis care consists of reimbursements to veterans for medications that they paid for as part of emergency care that was reimbursed by VA under 38 U.S.C. § 1728 (emergency care generally for service-connected conditions) or 38 U.S.C. § 1725 (Veterans Millennium Health Care and Benefits Act emergency care for non-service-connected conditions). A total of 11,750 unique veterans received such reimbursement from fiscal year 2008 through fiscal year 2012. To determine the fee basis utilization numbers in this figure, we counted each individual veteran once per fiscal year by identifying the number of unique Social Security numbers present in each fiscal year's fee basis data. Utilization totals in this figure have been rounded to the nearest thousand veterans. In addition, fee basis care spending and utilization also varied by fee basis care category. (See app. I for more information.) VA spent about $11.22 billion on outpatient fee basis care and about $9.07 billion on inpatient fee basis care from fiscal year 2008 through fiscal year 2012.
During this 5-year period, inpatient fee basis care spending increased steadily from about $1.41 billion in fiscal year 2008 to about $2.15 billion in fiscal year 2012, while outpatient fee basis care spending declined in fiscal year 2012. As a result, by fiscal year 2012, the difference in inpatient and outpatient fee basis care spending was only about $180 million, compared to a $590 million disparity in fiscal year 2011 when outpatient fee basis care spending was at its highest level. (See fig. 6.) Significantly more veterans received care from outpatient fee basis care providers than from inpatient fee basis care providers. Specifically, from fiscal year 2008 through fiscal year 2012, a total of about 2.35 million unique veterans received care from outpatient fee basis providers and a total of about 472,000 unique veterans received care from inpatient fee basis providers. (See fig. 7.) Preauthorized fee basis care accounted for the majority of outpatient fee basis spending and utilization from fiscal year 2008 through fiscal year 2012—about $7.36 billion and about 80 percent of unique veterans receiving outpatient fee basis care. VA spent the least on outpatient fee basis care for compensation and pension exams and emergency care for service-connected veterans. In addition, preauthorized fee basis care accounted for the majority of inpatient fee basis care spending from fiscal year 2008 through fiscal year 2012. Preauthorized inpatient fee basis care accounted for about $4.60 billion and about 57 percent of unique veterans receiving inpatient fee basis care. VA spent the least on inpatient fee basis care associated with emergency care for service- connected veterans. (See App. II.) During our review of six VAMCs, we identified three common factors that affected these facilities’ utilization of fee basis care—clinical service availability, veteran travel distances, and VA wait time goals. In addition, officials from the six VAMCs reported several methods they use to reduce either the cost of referring veterans to fee basis providers or the number of veterans their facilities refer to fee basis providers. VAMCs have limitations on the services they can offer due to a variety of factors, including the size of their facilities and the types of providers they can recruit, which can affect fee basis care utilization. VA officials from the six VAMCs we examined reported that the types of clinical services offered at their facilities affected veterans’ utilization of fee basis care. For example, officials from the Alexandria VAMC explained that their facility does not meet VA’s requirements for providing orthopedic surgery services to veterans and does not provide these services. As a result, they refer veterans who need these services to fee basis providers. Similarly, officials from the Biloxi VAMC explained that they do not have the ability to offer radiation therapy for cancer treatment at their medical center. As a result, they refer veterans who need radiation therapy to fee basis providers and reported that their VAMC has little control over increases in fee basis spending for these services. In some cases, a VAMC may not be able to provide some services to veterans because they do not have the right mix of clinical specialists to accommodate complications that may arise during surgery. 
For example, for a VAMC to offer joint replacement surgery, the facility must be equipped with an orthopedic specialist capable of performing the surgery and a number of additional providers to assist in the event of an emergency complication during surgery. For joint replacement surgeries, these additional providers include a thoracic surgeon and a neurologist who both must be available within 15 minutes by phone or 60 minutes in person, if needed. If these criteria for additional providers are not met, the VAMC is not authorized to perform joint replacement surgeries even if the VAMC has an orthopedic specialist capable of performing the surgery. When VAMCs are unable to provide services due to these requirements, they may rely on fee basis care to obtain these services for veterans. In other cases, VAMCs are unable to recruit specialists and as a result cannot offer some clinical services to veterans. When such recruiting challenges arise, VAMCs may rely on fee basis care to ensure veterans can receive medical services. For example, officials from the Las Vegas VAMC explained that they have difficulty recruiting several types of specialists—including vascular surgeons, pulmonologists, and gastroenterologists. These officials explained that they are exploring ways to provide relocation incentives to help recruit specialists; however, they noted recruiting is difficult because recent medical school graduates often want the opportunity to conduct medical research in addition to patient care and the Las Vegas VAMC does not have a research program. The distance that veterans have to travel to receive medical care is also a critical factor influencing whether they are treated in VAMCs or referred to fee basis providers. Traveling long distances for medical care is often impractical for veterans, particularly those receiving ongoing outpatient medical care, such as dialysis or radiation therapy for cancer. The decision about whether a veteran can physically tolerate the travel to a VAMC or should be referred to a fee basis provider in the community is a clinical judgment VA providers make in consultation with VAMC fee basis care unit staff. Officials from all six of the VAMCs we reviewed reported that utilization of fee basis care was affected by the distance that veterans must travel to receive VAMC-based care. For example, the Biloxi VAMC serves veterans from four states along the Gulf Coast, including many who live more than 300 miles from the facility. Officials from the Biloxi VAMC explained that the significant travel distance that some veterans face when traveling from their homes to the VAMC for care is burdensome and may not be appropriate for all veterans. As a result, these officials said that their VAMC frequently refers veterans to fee basis providers within the veterans’ own communities to reduce this burden. Similarly, officials from the Alexandria VAMC explained that many times they also refer veterans to fee basis providers in veterans’ own communities to lessen the travel burden. Another related factor that can increase fee basis utilization involves whether veterans referred to fee basis providers are eligible for reimbursement for travel costs through VA’s beneficiary travel program. VA’s beneficiary travel program reimburses eligible veterans for travel from their home to either their primary VAMC, another VAMC, or to a fee basis provider that can provide the care they need. 
Under VA's beneficiary travel program regulations, veterans are eligible for travel reimbursement only if they meet one of several criteria—including having a service-connected disability rating of 30 percent or more, or an annual income below a specified threshold. Therefore, veterans who do not meet these criteria cannot receive reimbursement for travel costs to another VAMC for treatment. Under these circumstances, veterans may see a fee basis provider closer to their residences, according to VA officials, even though fee basis care may cost VA considerably more than the cost of treatment in a VAMC. The Secretary of Veterans Affairs has authority under the beneficiary travel authorizing statute to provide travel reimbursement to additional categories of veterans. However, VAMCs only have the authority to reimburse veterans who meet the eligibility requirements of the beneficiary travel program as outlined in VA regulations. In order to allow VAMCs to reimburse additional veterans for travel, VA would need to revise its regulations to include additional categories of veterans. Officials from one VAMC and a few VISNs we reviewed told us they often send veterans who are not eligible for travel reimbursement to fee basis providers instead of referring them to other VAMCs that can provide the care because VA cannot compensate them for their travel to another VA facility. For example, officials from the Biloxi VAMC explained that the Houston VAMC is able to provide veterans with high-quality interventional cardiology services, such as cardiac catheterization and cardiothoracic surgery, which are not available at the Biloxi VAMC. However, if a veteran is not eligible for beneficiary travel, the Biloxi VAMC will refer the veteran to a fee basis provider to lessen the financial burden on the veteran even though, in some cases, care from a fee basis provider may cost the Biloxi VAMC $30,000 to $40,000 more than if the veteran were treated at the Houston VAMC. Biloxi VAMC officials said that they have asked their VISN to allow them to reimburse additional veterans for travel to the Houston VAMC for services that facility can provide, but they were informed that VA's beneficiary travel regulations do not permit them to offer travel reimbursements to veterans not eligible for beneficiary travel benefits. As part of the fee basis preauthorization process, VAMC or VISN officials do not evaluate whether it would be less expensive to send veterans to another VAMC for treatment rather than sending them to a fee basis provider. In requesting authorization for a veteran to see a fee basis provider, VAMC providers do not currently include information on the likely costs of the fee basis care. VA's wait time goals for VAMC-based appointments are measured relative to a veteran's desired appointment date—the date on which the patient or provider wants the patient to be seen. In the case of a veteran new to the system, the desired appointment date should be determined based on the veteran's preferred appointment date, although VA's policy for determining the desired appointment date is unclear. (See GAO, VA Health Care: Reliability of Reported Outpatient Medical Appointment Wait Times and Scheduling Oversight Need Improvement, GAO-13-130 (Washington, D.C.: Dec. 21, 2012).) VISN and VAMC directors are held accountable for meeting these wait time goals through performance measures in their performance contracts awarded each fiscal year. According to VA officials responsible for developing these performance contracts, wait times for care received from fee basis providers are excluded from these performance measures, and VISN and VAMC directors' performance contracts do not include specific goals for wait times for fee basis care.
VA officials from all six VAMCs we reviewed reported that they routinely refer veterans to fee basis providers to help ensure that veterans receive timely care and that their facilities meet performance goals for wait times for VAMC-based care. For example, Biloxi VAMC officials said they refer veterans to fee basis providers to avoid having longer wait times for veterans in VAMC-based clinics that would cause the Biloxi VAMC to fall short of its performance goal for VAMC-based care wait times. Similarly, officials from the Alexandria VAMC explained that their medical center sends veterans to fee basis providers solely to meet the wait time goals for the VAMC. They said that veterans needing treatment in several specialties—including audiology, cardiology, and ophthalmology—are referred to fee basis providers to help the Alexandria VAMC meet its goals for VAMC-based clinic wait times. While serving veterans in a timely way is important and sending them to fee basis providers may provide veterans more timely service, VA does not track how long it takes veterans to be seen by fee basis providers at all VAMCs. For example, officials from the Alexandria VAMC explained that they often refer veterans to fee basis providers when the Alexandria VAMC's wait times are too long, but fee basis providers in their community also face capacity limitations and may not be able to schedule appointments for veterans any sooner than the VAMC-based provider. Since VA does not require all VAMCs to track wait times for fee basis providers, little is known about how often veterans' wait times for fee basis care appointments exceed VAMC-based appointment wait time goals. Because VA has no data on wait times for veterans treated by fee basis providers, it is not possible to determine whether veterans referred to fee basis providers have access to care comparable to that of veterans treated by VAMC-based providers. Efforts to provide either increased capacity or additional VAMC-based health care services have helped VAMCs reduce their utilization of fee basis care, according to officials from all six VAMCs we reviewed. For example, Durham VAMC officials explained that they recently completed an operating room expansion at their facility, which has allowed them to bring more surgical services back into the VAMC and reduce their reliance on fee basis surgical services. These officials also said that the Durham VAMC is preparing to expand its inpatient psychiatric unit by adding six additional inpatient beds, which will reduce their reliance on fee basis providers for treating veterans when VAMC-based psychiatric beds are at capacity. Durham VAMC officials reported that the operating room expansion saves an estimated $18 million annually and that the six additional VAMC-based psychiatric unit beds will save the facility an estimated $3.4 million annually. In another case, Biloxi VAMC officials reported that in 2010 they reduced their reliance on fee basis providers for pulmonary function tests by purchasing additional equipment and hiring an additional technician to increase the VAMC-based capacity to provide these tests. As a result, officials have seen a drop in the number of veterans referred to fee basis providers for this service, and fee basis costs for pulmonary function tests decreased by about $112,000 between fiscal years 2010 and 2012. Such expansions require a careful analysis of the benefits and costs of the expansion.
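At its core, such an analysis compares the annualized cost of an expansion with the fee basis spending it would avoid. The short Python sketch below illustrates that comparison; the capital cost, useful life, operating cost, and avoided spending amounts are invented for illustration, and this is not VA's business case methodology, which is described next.

```python
# Hypothetical break-even analysis: expand a VAMC-based service or keep
# purchasing it through fee basis providers. All figures are illustrative.

def annual_savings(capital_cost, useful_life_years, added_operating_cost,
                   avoided_fee_basis_spending):
    """Positive result favors expansion; negative favors fee basis care."""
    annualized_capital = capital_cost / useful_life_years
    return avoided_fee_basis_spending - (annualized_capital + added_operating_cost)

savings = annual_savings(capital_cost=2_000_000,
                         useful_life_years=20,
                         added_operating_cost=450_000,
                         avoided_fee_basis_spending=700_000)
verdict = "expand in-house capacity" if savings > 0 else "continue fee basis care"
print(f"Estimated annual savings: ${savings:,.0f} -> {verdict}")
```

In the Durham and Salisbury examples above, it is essentially this comparison, made with actual cost and workload data, that pointed in opposite directions for the two facilities.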
Before a VAMC expands its capacity, VA requires VAMCs to develop a business case for the expansion as part of VA’s annual consideration of capital investments. These business cases must address several elements—including a financial analysis and safety issues. For example, Durham VAMC officials explained that to make these decisions about expanding its operating room and inpatient psychiatric bed capacity they reviewed weekly fee basis reports that included cost and volume information on the most common services that their VAMC provided through fee basis care and used these reports to make decisions about which Durham VAMC-based services should be expanded. However, some VAMC officials noted that it may not always be more cost effective for VAMCs to provide these services. For example, officials from the Salisbury VAMC explained that they planned to build a VAMC-based dialysis unit to reduce the number of veterans they referred to fee basis providers for dialysis treatments. However, when they compared the cost of building a dialysis unit to the cost of providing veterans dialysis treatments through fee basis providers, they determined that fee basis care was as cost-effective as building a dialysis unit. DOD medical facilities colocated with nearby VAMCs offer an alternative to veterans receiving care from more costly community-based fee basis providers. Currently, VA and DOD have a policy that allows the departments to charge one another at least 10 percent less for clinical services than they would in locations without sharing agreements. As of June 2012, there are nearly 200 active sharing agreements in place between VA and DOD that range in complexity and scope from sharing a single service to agreements that govern the sharing of multiple services. Two of the six VAMCs we reviewed—located in Las Vegas and Biloxi— share resources with neighboring DOD health care facilities to provide lower-cost care to veterans. The Las Vegas VAMC has a sharing agreement with Nellis Air Force Base for some health care services, including cardiac and radiation oncology services. Similarly, the Biloxi VAMC has several sharing agreements with Air Force and Navy medical facilities along the Gulf Coast for some health care services. Officials from both these VAMCs reported that they refer veterans to these DOD facilities before sending them to fee basis providers in the community because the reimbursement rate for services provided in DOD medical facilities is lower than the Medicare rates used to reimburse fee basis providers. Officials from both the Las Vegas and Biloxi VAMCs explained that they first explore whether care is available through sharing agreements with nearby DOD medical facilities before referring the veterans to fee basis providers. Another critical factor affecting fee basis care spending and utilization is the timely transfer of veterans receiving inpatient care from fee basis providers back to VAMC-based care. This is particularly relevant because, as we discussed earlier, VA has spent almost $1.3 billion over the last 5 years on emergency inpatient services for veterans through the Millennium Act and about $4.6 billion on preauthorized inpatient fee basis care. (See app. II.) As a result, it is important that VAMC staff closely monitor the conditions of veterans receiving inpatient care from fee basis providers to ensure that they are transferred to VAMCs once their conditions stabilize. 
Officials from all six VAMCs we examined reported that transferring veterans being treated by fee basis providers back to VAMC-based care when appropriate can be a way of reducing the cost of inpatient fee basis care. According to officials at two VAMCs, this is because VAMC inpatient bed capacity limitations that required a veteran to be referred to an inpatient fee basis provider can change during the course of a veteran's hospital stay in non-VA facilities. While officials from all six VAMCs we reviewed noted that transferring veterans back to inpatient VAMC-based care can help reduce fee basis utilization and spending, we found some VAMCs have more robust monitoring methods than others for tracking veterans being treated by fee basis providers through their utilization management programs. Specifically, we found the following: Three of the VAMCs we reviewed had a more formal approach that was integrated with their utilization management programs to actively identify circumstances when veterans being treated as inpatients by fee basis providers could return to the VAMC to complete their inpatient care. For example, the Salisbury VAMC has a transfer coordinator program specifically designated to actively identify such circumstances. The Salisbury VAMC employs a nurse case manager who visits veterans during their inpatient stays with fee basis providers, identifies changes in veterans' conditions that will allow them to return to the VAMC, and coordinates veterans' transitions back into VAMC-based care. According to Salisbury VAMC officials, this program has allowed them to transfer veterans back to the VAMC to complete their care once the veterans' conditions have stabilized. In contrast, the other three VAMCs we reviewed had a more passive approach that was limited to tracking veterans' progress through information given to them by the veterans' inpatient fee basis providers. For example, the Alexandria VAMC's transfer coordinator monitors veterans' inpatient fee basis care; however, this information is rarely used to transfer veterans back to the VAMC to complete their treatment. Although ensuring that VAMCs are incorporating fee basis care into their utilization management programs would enable VA to more efficiently identify when opportunities exist for some veterans to be transferred back to lower-cost VAMC-based care, VA does not currently require all VAMCs to conduct such reviews. Specifically, VA's current utilization management policy does not require VAMCs to incorporate reviews of inpatient fee basis care into their VAMC-based utilization management programs. In addition, VA Central Office has not provided guidance to all VAMCs on how to most effectively track veterans receiving inpatient care from fee basis providers, which has allowed VAMC programs to take a variety of forms. Ultimately, without guidance and standardized procedures provided by VA Central Office, some VAMCs may not be monitoring veterans receiving inpatient care from fee basis providers in the community closely enough to prevent prolonged and unnecessary stays for veterans in inpatient fee basis care and may be missing other opportunities to reduce fee basis care spending. One of VHA CBO's three primary methods for monitoring fee basis care spending and utilization is its review of fee basis data.
According to VHA CBO officials, these reviews are primarily focused on examining fee basis care utilization and spending—including VISN fee basis care utilization and significant high-cost areas, such as dialysis treatment. Analysis of fee basis data is an important aspect of monitoring that allows VHA CBO staff to look for outliers in spending and utilization, mistakes in fee basis claims data, and potential lost opportunities to reduce spending and utilization, and to assess longer-term considerations—such as adjusting the level of fee basis care services or assessing potential areas for VAMC-based service expansion. However, the usefulness of this monitoring method as an oversight tool is significantly limited due to the way fee basis data are collected and reported to the VHA CBO. Currently, VA's data system collects claims data for each individual service provided by a fee basis provider—such as the physician's time, surgical procedures, hospital rooms, and laboratory tests—rather than the total cost of a veteran's office visit or inpatient stay. VA's current data system cannot group these individual services by episode of care—a combined total of all care provided to a veteran during a single office visit or inpatient stay. For example, during an office visit to an orthopedic surgeon for a joint replacement evaluation, an X-ray for the affected joint may be ordered, the veteran may be given a blood test, and the veteran may receive a physical evaluation from the orthopedic surgeon. The fee basis provider would submit a bill to VA for the office visit, and separate bills would be submitted by the radiologist that X-rayed the affected joint and the lab that performed the veteran's blood test. Each of these bills would include charges under different medical billing codes. The VISN or VAMC-based fee basis clerk processing this claim would record these charges in VA's fee basis claims processing software and request payment for these fee basis providers. However, the fee basis data system used by VHA CBO to review these payments would not be able to link the charges for these three treatments together as a single episode of care for this veteran's office visit with an orthopedic surgeon. Not being able to group charges from fee basis providers by episode of care has the following disadvantages in terms of monitoring fee basis care and potentially reducing costs: Monitoring challenges. From a monitoring perspective, not having data by episode of care prevents VA from efficiently identifying areas of utilization growth or unusually high spending. For example, VA-wide episode of care monitoring would allow VHA CBO to assess whether opportunities for strategic expansion of VAMC-based services—such as the Durham VAMC operating room expansion and the Biloxi VAMC addition of pulmonary function test equipment mentioned earlier—would be possible in more VAMCs. Episode of care monitoring would also allow VA to make more consistent strategic decisions about such service expansions. Cost analysis limitations. From a cost perspective, not having fee basis data on an episode of care basis prevents VA from efficiently assessing whether fee basis providers were reimbursed appropriately. Without the ability to monitor fee basis spending by episodes of care, VHA CBO cannot conduct retrospective reviews of VISN and VAMC claims to determine if the appropriate rate was applied for the care provided by fee basis providers.
For example, VHA CBO staff cannot verify that fee basis care that should be paid using Medicare "bundled" reimbursement rates was in fact paid using these bundled rates, because all individual charges from a veteran's episode of care cannot be reliably linked. Since VA uses Medicare rates to reimburse fee basis providers for most services, VAMCs and VISNs, like Medicare, use bundled reimbursement rates for some procedures, which provide a single payment for closely related services. Bundled rates are designed to give providers a financial incentive to furnish care more efficiently: providers retain the difference if the bundled payment is more than the cost of care and are accountable for the difference if a patient's treatment costs exceed the bundled rate. To effectively conduct these retrospective reviews, VHA CBO would need to change its claims processing methods and ensure that the VISN or VAMC fee basis clerks processing each provider's claims assign a claim number to each payment made to a fee basis provider for an episode of care. This claim number would serve as a linkage among the individual service line items in VA's fee basis data system and allow VHA CBO to group together all payments made for a single episode of care and assess the total cost of that episode of care. (A hypothetical illustration of such a roll-up appears in the sketch at the end of this discussion.) In September 2012, VA outlined both short- and long-term plans for improving the fee basis care program following problems highlighted by several OIG audits and a recent congressional hearing. The short-term corrective action plan is made up of a series of tasks to be completed in fiscal year 2013 across six key areas—(1) foundational activities, (2) achieving a sustainable decrease in fee basis care improper payments, (3) recovery and recapture of fee basis care overpayments, (4) building a culture of accountability within the fee basis care program, (5) enhancing internal controls and data integrity, and (6) training and educating VISNs and VAMCs. We found that VA has taken a number of steps to better ensure the completion of its short-term corrective action plan in fiscal year 2013. Specifically, VA has identified clear leadership for tasks, created teams to accomplish tasks that include representatives from across VA operations, sought the input of internal stakeholders—such as VISN and VAMC fee basis unit staff—and external stakeholders—including the VA OIG, set clear target dates for the completion of tasks, and identified methods for assessing whether or not tasks had the desired effect on the fee basis care program. While it is still too early to determine if the efforts included in the short-term corrective action plan will produce meaningful improvements in the fee basis care program, it represents an important first step in increasing accountability for the outcomes of the fee basis care program. (See table 1.) VA is also in the process of developing a long-term strategy for improving its fee basis care program. This long-term strategy includes efforts to develop and implement a new organizational structure for the fee basis care program, consolidate claims processing functions in fewer locations, develop comprehensive guidance for the fee basis care program, implement a new competency-based personnel model, and implement new claims processing software.
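The claim number linkage described above would allow line-item payments to be rolled up into episode totals and compared against expected bundled rates. The Python sketch below illustrates that roll-up; the field names, claim numbers, dollar amounts, and bundled rate are hypothetical, and VA's fee basis data do not currently contain such a linking identifier.

```python
# Hypothetical roll-up of fee basis line items into episodes of care using
# a claim number, then a check against an assumed bundled reimbursement rate.
from collections import defaultdict

line_items = [
    {"claim_no": "C001", "service": "office visit", "paid": 180.00},
    {"claim_no": "C001", "service": "joint X-ray",  "paid": 95.00},
    {"claim_no": "C001", "service": "blood test",   "paid": 42.00},
    {"claim_no": "C002", "service": "office visit", "paid": 160.00},
]

# Sum payments for each episode of care identified by its claim number.
episodes = defaultdict(float)
for item in line_items:
    episodes[item["claim_no"]] += item["paid"]

BUNDLED_RATE = 250.00  # illustrative rate, not an actual Medicare amount

for claim_no, total in episodes.items():
    flag = "exceeds bundled rate" if total > BUNDLED_RATE else "ok"
    print(f"Episode {claim_no}: total paid ${total:.2f} ({flag})")
```

With such an identifier in place, a retrospective review of whether bundled rates were applied correctly becomes a simple grouping and comparison exercise like the one shown here.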
To date, progress on the development of this long-term strategy has been limited to the development of new claims processing software and initial discussions of the new organizational structure. In February 2013, VA officials told us that the long-term strategy is still in development. VA's fee basis care program is a critical means for providing accessible health care to veterans. VA has acknowledged that fee basis care is a necessary tool for veterans when a VAMC does not have an available clinical specialist or when veterans face long travel distances to obtain care from VAMC-based providers. VA has also made concerted efforts in recent years to improve the fee basis care program by implementing a number of initiatives—including new software packages for VISN and VAMC fee basis claims processing units, a new care coordination program, and a program to better coordinate with fee basis providers. Moving forward, we believe it is critical that VA address four areas as potential ways to more effectively manage and monitor the fee basis care program. First, veterans' eligibility for travel reimbursement may affect whether they are referred to fee basis care or to another VAMC. Some veterans who qualify for travel reimbursement under VA's beneficiary travel program might elect to seek care at another VAMC without incurring personal travel expenses in lieu of being treated by a fee basis provider. In some cases, this could result in VA paying less for their care than it would if the veteran were treated through the fee basis care program. Should the Secretary of Veterans Affairs exercise his authority to revise the beneficiary travel eligibility requirements to allow for the use of beneficiary travel in cases where it is both more cost-effective for VA and in the veteran's best interest to receive care at another VAMC instead of from a fee basis provider, it could be possible to lower overall fee basis care utilization and spending. In addition, VA does not currently require VAMCs to assess the cost-effectiveness of reimbursing veterans for travel to another VAMC when determining whether to preauthorize fee basis care in veterans' local communities. Second, VA should better manage fee basis care wait times and costs. VA currently does not include fee basis care wait times in the measures it uses to assess VISN and VAMC directors' performance and does not track the amount of time veterans wait to see a fee basis provider. As a result, the VAMCs we reviewed are referring veterans to fee basis providers to ensure they meet the wait time performance goals for VAMC-based clinics. Having data on wait times for veterans referred to fee basis providers would help VA better determine whether veterans' access to fee basis providers is comparable to their access to VAMC-based providers. Third, VA may be missing an opportunity to reduce the cost of inpatient fee basis care by not requiring VAMC-based utilization management programs designed to regularly assess VAMC capacity to consider veterans being treated by non-VA inpatient fee basis providers. Incorporating veterans treated by non-VA inpatient fee basis providers into ongoing VAMC utilization management programs would allow VAMCs to identify situations when they no longer have capacity limitations and can complete a veteran's treatment in-house at a lower cost than the fee basis provider. Finally, VA can also improve its capability to more effectively monitor the fee basis care program.
VA Central Office's monitoring efforts are limited by the inability to analyze fee basis care data by episode of care. Because information that would allow VA to pull together all services associated with a single office visit or inpatient stay is not available, VA Central Office cannot effectively monitor the payments made by fee basis care units or ensure that fee basis providers are billing VA appropriately for care. To effectively manage fee basis care spending, we recommend that the Secretary of Veterans Affairs take the following action: Revise the beneficiary travel eligibility regulations to allow for the reimbursement of travel expenses for veterans to another VAMC to receive needed medical care when it is more cost-effective and appropriate for the veteran than seeking similar care from a fee basis provider. To effectively manage fee basis care wait times and spending, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following three actions: Require during the fee basis authorization process that VA providers and fee basis officials determine the cost-effectiveness of reimbursing medically stable veterans eligible for beneficiary travel for travel to another VAMC rather than referring them to a fee basis provider for similar care. Analyze the amount of time veterans wait to see fee basis providers and apply the same wait time goals to fee basis care that are used as VAMC-based wait time performance measures. Establish guidance for VAMCs that specifies how fee basis care should be incorporated with other VAMC utilization management efforts. To ensure that VA Central Office effectively monitors fee basis care, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following action: Ensure that fee basis data include a claim number that will allow VA Central Office to analyze the episode of care costs for fee basis care. VA provided written comments on a draft of this report, which we have reprinted in appendix III. In its comments, VA generally agreed with our conclusions, concurred with our five recommendations, and described the agency's plans to implement each of our recommendations. VA also provided technical comments, which we have incorporated as appropriate. In its plan, VA stated that to address our first recommendation, VHA CBO will consider including provisions related to veterans' travel reimbursement to another VAMC to receive needed medical care when it is more cost-effective and appropriate during a planned upcoming revision to the agency's beneficiary travel regulations. To address our second recommendation, VA noted that it is working to revise procedures for both its new fee basis care administration model, referred to as Non-VA Care Coordination, and the beneficiary travel program to ensure that the cost-effectiveness of a veteran's travel to another VAMC or to a non-VA care provider is reviewed as part of the authorization of fee basis care and is included in standard operating procedures and training. To address our third recommendation, VA noted that VHA CBO is completing requirements for a national consolidated monthly wait time indicator to measure performance for fee basis care referrals. However, VA did not acknowledge whether or not the wait time indicators used in this monthly indicator would be the same as those used for VAMC-based care, as we recommended.
We support VA’s decision to set wait time goals for fee basis care, but we believe the agency should ensure that wait time goals used for fee basis care are the same as those applied to VAMC-based care. To address our fourth recommendation, VA stated that the new fee basis care administration model, Non-VA Care Coordination, includes a template for managing information transfers from non-VA providers to VA staff that will support the utilization management practices of VAMCs. We support VA’s efforts to standardize this information exchange in its fee basis care administration practices, but encourage the agency to also clarify its utilization management policies to ensure that VAMC utilization management staff regularly coordinate with VAMC fee basis management staff to receive this information from non-VA providers. Finally, to address our fifth recommendation, VA noted that the agency agrees that analyzing episode of care costs is an important part of the agency’s fee basis monitoring activities. VA outlined its plan to analyze existing data systems and determine the most cost-effective method for monitoring episode of care costs. We are sending copies of this report to the Secretary of Veterans Affairs, the Under Secretary for Health, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This appendix provides additional results from our analysis of Department of Veterans Affairs (VA) fee basis data from fiscal years 2008 through 2012. Specifically, the table below provides additional information on how much VA spent on fee basis care and how many unique veterans received care from fee basis providers by fee basis care categories. This appendix provides additional results from our analysis of Department of Veterans Affairs (VA) fee basis data from fiscal years 2008 through 2012. Table 3 provides additional information on how much VA spent on outpatient fee basis care and how many unique veterans received care from outpatient fee basis providers by fee basis care categories. Table 4 provides additional information on how much VA spent on inpatient fee basis care and how many unique veterans received care from inpatient fee basis providers by fee basis care categories. In addition to the contact named above, Marcia A. Mann, Assistant Director; Kathleen Diamond; Krister Friday; Katherine Nicole Laubacher; Daniel K. Lee; Lisa Motley; Rebecca Rust Williamson; and Malissa G. Winograd made key contributions to this report. | While VA treats the majority of veterans in VA-operated facilities, in some instances it must obtain the services of non-VA providers to ensure that veterans are provided timely and accessible care. These non-VA providers are commonly reimbursed by VA using a fee-for-service arrangement known as fee basis care. VA's fee basis care program has grown rapidly in recent years--rising from about 8 percent of VA's total health care services budget in fiscal year 2005 to about 11 percent in fiscal year 2012. GAO was asked to review fee basis care program spending and utilization and factors that influence VAMC fee basis utilization. 
This report examines how fee basis care spending and utilization changed from fiscal year 2008 to fiscal year 2012, factors that contribute to the use of fee basis care, and VA's oversight of fee basis care program spending and utilization. GAO reviewed relevant laws and regulations, VA policies, and fee basis spending and utilization data from fiscal year 2008 through fiscal year 2012. In addition, GAO reviewed the fee basis care operations of six selected VAMCs that varied in size, services offered, and geographic location. The Department of Veterans Affairs' (VA) fee basis care spending increased from about $3.04 billion in fiscal year 2008 to about $4.48 billion in fiscal year 2012. The slight decrease in fiscal year 2012 spending from the fiscal year 2011 level was due to VA's adoption of Medicare rates as its primary payment method for fee basis providers. VA's fee basis care utilization also increased from about 821,000 veterans in fiscal year 2008 to about 976,000 veterans in fiscal year 2012. GAO found that several factors affect VA medical centers' (VAMC) utilization of fee basis care—including veteran travel distances to VAMCs and goals for the maximum amount of time veterans should wait for VAMC-based appointments. VAMCs that GAO reviewed reported that they often use fee basis care to provide veterans with treatment closer to their homes—particularly for veterans who are not eligible for travel reimbursement. In addition, VAMC officials reported that veterans are often referred to fee basis providers to ensure that VAMC-based clinics that would otherwise treat them can meet established VA wait time goals for how long veterans wait for an appointment. However, GAO found that VA has not established goals for and does not track how long veterans wait to be seen by fee basis providers. VA's monitoring of fee basis care spending is limited because fee basis data do not currently include a claim number or other identifier that allows all charges from a single office visit with a fee basis provider or an inpatient hospital stay to be analyzed together. GAO found that without the ability to analyze spending in this way, VA is limited in its ability to assess the cost of fee basis care and verify that fee basis providers were paid appropriately. GAO recommends that VA revise its beneficiary travel regulations to allow reimbursement for travel to another VAMC when doing so is more cost-effective and appropriate for the veteran than seeking similar care from a fee basis provider, apply the same wait time goals to fee basis care as to VAMC-based care, and ensure that fee basis data include a claim number. VA generally concurred with GAO's conclusions and five recommendations.
In October 1994, the Improving America's Schools Act, which reauthorized education programs under the Elementary and Secondary Education Act of 1965 (ESEA), revised and expanded drug education under the Safe and Drug-Free Schools and Communities Act of 1994, which is title IV of ESEA. The purpose of the Safe and Drug-Free Schools Act is to create a comprehensive program to support National Education Goal Seven, which is "by the year 2000, every school in the United States will be free of drugs, violence, and the unauthorized presence of firearms and alcohol and will offer a disciplined environment conducive to learning." School year 1995-96 was the first school year in which the program was in effect. Safe and Drug-Free Schools grants have some of the broad characteristics of block grants that we have identified in previous work. For example, the act authorizes federal aid for a wide range of activities within a broadly defined functional area; recipients have substantial discretion to identify problems, design programs, and allocate resources; federally imposed requirements are limited to those necessary to ensure that national goals are being accomplished; and federal aid is distributed on the basis of a statutory formula. For such grants, accountability plays a critical role in balancing the potentially conflicting objectives of increasing state and local flexibility while attaining certain national objectives. The Safe and Drug-Free Schools program is discussed as part of Education's strategic plan required by the Government Performance and Results Act of 1993 (the Results Act). The Results Act requires executive agencies, including Education, to develop a 5-year strategic plan that includes long-term strategic goals, establish annual performance goals, and report on progress toward those goals and objectives. Education's draft strategic plan for 1998-2002 includes an objective for safe, disciplined, and drug-free schools. Education's statement of core strategies for achieving this objective makes it clear that Safe and Drug-Free Schools will play a major role. In addition, the program is specifically cited in one of the six performance indicators that Education has chosen for assessing accomplishment of this objective. These indicators are to slow recently increasing rates of alcohol and drug use among school-aged students between now and 2002; achieve continuous decreases in criminal and violent incidents in schools between now and 2002; realize continuous improvement in the percentage of students reporting negative attitudes toward drug and alcohol use between now and 2002; improve prevention programs by having the majority of LEAs participating in the Safe and Drug-Free Schools program use prevention programs based on Education's principles of effectiveness by 1999; ensure, by 1999, that all states collect data statewide on alcohol and drug use among students and violence in schools; and increase significantly by 2000 the number of teachers who are appropriately trained to address discipline problems. The Safe and Drug-Free Schools program, like other Education programs, is subject to other federal laws and generally applicable regulations in the use of its funds and program operations. For example, the Education Department General Administrative Regulations apply to the Safe and Drug-Free Schools and Communities program as well as other grant programs.
These regulations establish uniform requirements for administering Education grants and principles to determine costs for activities assisted by the Department. In addition, the Single Audit Act requires each state to conduct annual independent audits of programs in the state that receive federal funds. Some aspects of the Safe and Drug-Free Schools program are also affected by the general provisions of the Improving America's Schools Act of 1994. In particular, the Improving America's Schools Act authorizes states to submit a single application for several federal education programs rather than separate program-specific applications. The new consolidated application process, which began with school year 1995-96 funds for Education programs, including the Safe and Drug-Free Schools program, is intended to enhance program integration and reduce SEAs' administrative burden. The Improving America's Schools Act also requires Education to establish procedures and criteria under which a SEA may submit a consolidated application or plan. Education's guidelines state that the consolidated plan should provide a framework for determining, within the context of a state's school reform plan and other reform initiatives, how the federal programs in the consolidated plan will be used to help all children reach the state's academic achievement goals. Education's guidance for the consolidated applications requires states to include some, but not all, of the information required in comprehensive state plans by the Safe and Drug-Free Schools and Communities Act. States must include in their consolidated application their criteria for selecting LEAs for supplemental high-need funding, their plans for spending the 5-percent set-aside for state-level program activities, and their process for approving local plans for funding. While the Safe and Drug-Free Schools program's explicit goal is to reduce drug use and violence in schools, other programs are also likely to influence progress toward this national goal. The Safe and Drug-Free Schools program is one of several substance abuse- and violence-prevention programs funded by the federal government. For example, in fiscal year 1995, 70 federal programs were authorized to provide either substance abuse-prevention or violence-prevention services or both to the youth they serve. Thirty-four of these programs could provide both types of prevention services. Education, which administers the Safe and Drug-Free Schools program, along with the Departments of Health and Human Services and Justice, administered most of these programs, 48 in all, but the rest of the programs were dispersed among 10 other federal agencies or entities. For these 70 programs, the fiscal year 1995 appropriations for services to youth totaled at least $2.4 billion. Multiple programs dispersed among several agencies create the potential for inefficient services and ineffective use of funds. Although we have not fully examined these multiple programs, the implications of having multiple, unintegrated substance abuse- and violence-prevention programs might be like those for employment training programs—an area we have examined. In fiscal year 1995, we identified 163 federal employment training programs located in 15 departments and agencies. We recently concluded that consolidating these programs could probably reduce the cost of providing job training services because of the efficiencies achieved by eliminating duplicative administrative activities.
Furthermore, consolidating similar programs could improve opportunities to increase service delivery and effectiveness. During the past several years, some members of the Congress, in response to constituents’ concerns, have questioned how some states and localities have used funding under both the Drug-Free Schools and Communities Act and Safe and Drug-Free Schools and Communities Act programs. Allegations about misuse of funds have spanned diverse areas of program operation, from curriculum content to administrative expenses. In particular, questions have been raised about the extent to which these funds can be used to support programs, such as comprehensive health education programs, of which drug prevention is just one part; the types of activities sponsored by schools, such as alcohol-free dances; and expenditures for materials, such as pencils and tee-shirts imprinted with drug- and violence-prevention messages (see app. I for the results of our examination of some allegations). The Safe and Drug-Free Schools and Communities Act establishes accountability mechanisms at the federal, state, and local levels. In combination, these mechanisms provide accountability for both spending funds (financial accountability) and reaching national, state, and locally defined goals (program accountability). The act specifies no mechanisms for direct federal oversight of local activities. Rather, the act’s mechanisms for federal oversight of the program focus on state-level programs and activities, while relying on state actions for local program oversight. The act establishes four types of accountability mechanisms: (1) an application process that requires approval of state and local plans; (2) state monitoring of LEAs’ programs; (3) reports on national, state, and local program effectiveness; and (4) LEAs’ use of advisory councils to develop program plans and assist program implementation. Education executes two of the four actions required by the act for ensuring accountability in the Safe and Drug-Free Schools program: approving state plan applications and reporting on national, state, and local program effectiveness. The act requires Education to review and ensure that state plans for Safe and Drug-Free Schools programs conform with federal requirements before providing funding to a state. The act also directs Education to use a peer review or similar process in reviewing state plans and provides detailed requirements for the contents of the state plan. For example, under the act, states must include in their plans (1) measurable goals and objectives for their drug- and violence- prevention programs, (2) a description of state-level program activities, (3) their plans for monitoring LEAs’ programs, and (4) the state’s criteria for identifying high-need districts that will receive supplemental funding for drug- and violence-prevention programs. The act also requires Education to gather data about school violence and drug abuse and to assess the effectiveness of drug- and violence- prevention activities under the Safe and Drug-Free Schools program and other recent federal initiatives. Education expects to report the results of these assessments, along with its recommendations, to the Congress by January 1998. The act also requires, indirectly, that Education collect data from states on the effectiveness and outcomes of state and local programs. 
That is, under the act, LEAs must provide the state with information about their programs’ effectiveness, which states must then use in their required reports to Education. Under the act, states must use application approval, program monitoring, and reporting as accountability mechanisms for ensuring that Safe and Drug-Free Schools programs conform with federal requirements. States must review applications from LEAs to determine if they are eligible for funding. Through the application process, states must ensure that each LEA receiving funds has (1) measurable goals for its drug- and violence- prevention program, (2) objectively assessed students’ current use of drugs and alcohol as well as violence and safety problems in its schools, and (3) developed plans for a comprehensive drug- and violence- prevention program. The comprehensive plan must describe how the LEA will use its funds; coordinate its efforts with communitywide efforts and other related federal, state, and local programs under this or other acts; and report progress toward the LEA’s drug- and violence-prevention goals. In addition, states may also require the submission of other necessary information and assurances. The act requires each state to monitor local program implementation and report to Education on its progress toward its drug- and violence-prevention goals. Although the act lists several general oversight responsibilities for states, it does not clearly specify actions states must take to meet these responsibilities. For example, although states must monitor local program implementation, the act leaves states to determine how to do this. In addition, it authorizes states to develop their own reporting requirements for LEAs and determine when LEAs must report on their programs. The act requires LEAs to consult with local or substate regional advisory councils in developing applications for state funds. These councils also regularly review program evaluations and other relevant material and make recommendations to LEAs for improving drug- and violence- prevention programs. In addition, these councils distribute information about drug- and violence-prevention programs, projects, and activities conducted by LEAs and advise LEAs on coordinating such agency activities with other related programs, projects, and activities as well as on the agencies administering such programs, projects, and activities. Education’s General Administrative Regulations require the state to oversee the LEA programs to ensure that such advisory councils are used as intended. Because the focus of our analysis was to describe and assess the accountability measures used at the federal and state levels, we did not assess how these advisory councils operate at the local level. The act, in addition to establishing actions federal, state, and local agencies must take to ensure accountability, has some requirements for program content and the types of activities permitted under the law. These requirements are broadly stated, permitting significant discretion at the state and local levels. The act also includes some prohibitions on how funds may be used and restricts Education’s activities regarding curriculum that may be used in state and local programs. Local drug- and violence-prevention programs under the act must be comprehensive. The act requires that comprehensive programs be designed for all students and employees. 
Programs for students must be designed to prevent use, possession, and distribution of tobacco, alcohol, and illegal drugs; prevent violence and promote school safety; and create a disciplined environment conducive to learning. For employees, the program must be designed to prevent the illegal use, possession, and distribution of tobacco, alcohol, and illegal drugs. The act also requires these comprehensive programs to include activities that promote the involvement of parents and coordination with community groups and agencies. The act identifies a wide range of programs and activities that a LEA may include in its comprehensive program, though the act does not limit LEAs to the examples it provides. For example, programs noted as permissible include comprehensive drug prevention; comprehensive health education, early intervention, student mentoring, and rehabilitation referral programs that promote individual responsibility and offer techniques for resisting peer pressure to use illegal drugs; and before- and after-school recreational, instructional, cultural, and artistic programs in supervised community settings. Activities allowed for these programs include the distribution of drug- prevention information; professional development of school personnel, parents, and law enforcement officials through activities such as workshops and conferences; implementation of strategies that integrate services to fight drug use such as family counseling, early intervention activities to prevent family dysfunction and enhance school performance; and activities designed to increase students’ sense of community such as community-service projects. Funds may also be used for metal detectors, safe-passage zones—crime- and drug-free routes students may take to and from school—and security personnel; such uses, however, are limited to no more than 20 percent of a LEA’s funds and are allowed only if a LEA has not received other federal funding for these activities. The law explicitly prohibits use of program funds for construction (except for minor remodeling), medical services, or drug treatment or rehabilitation. Materials used in Safe and Drug-Free Schools programs must convey a clear and consistent message that the illegal use of alcohol and other drugs is wrong and harmful. The Secretary of Education may not prescribe the use of any specific program curricula but may evaluate the effectiveness of the curricula and strategies used. Most of the funds for state and local drug- and violence-prevention programs must be distributed to LEAs. From the funds awarded to SEAs for state and LEA grant activities, SEAs may reserve no more than 5 percent for statewide activities and no more than 4 percent for program administration. The remaining funds (at least 91 percent) must go to LEAs; in school year 1995-96, this amounted to $313 million. Thirty percent of this amount, $94 million in school year 1995-96, must go to LEAs that the state has determined have the greatest need for additional funds to carry out drug- and violence-prevention programs. The act requires states to provide these supplemental funds to no more than 10 percent of the state’s LEAs, or five such LEAs, whichever is greater. Education uses several mechanisms to execute its responsibilities for ensuring program accountability. 
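Before turning to the mechanisms Education uses to ensure accountability, note that the distribution rules just described reduce to simple percentage arithmetic, consistent with the school year 1995-96 figures cited above. The Python sketch below works through that split; the state award amount is invented for illustration, and the percentages are the statutory caps described above.

```python
# Illustrative split of a state's Safe and Drug-Free Schools award under the
# act's caps: at most 5% for statewide activities, at most 4% for
# administration, at least 91% to LEAs, with 30% of the LEA share reserved
# for high-need districts. The award amount here is made up.

def allocate(state_award):
    statewide = 0.05 * state_award              # statutory maximum
    admin = 0.04 * state_award                  # statutory maximum
    to_leas = state_award - statewide - admin   # at least 91 percent
    high_need = 0.30 * to_leas                  # share for high-need LEAs
    return {"statewide activities": statewide, "administration": admin,
            "all LEAs": to_leas, "high-need LEAs": high_need}

for category, amount in allocate(10_000_000).items():
    print(f"{category:>20}: ${amount:,.0f}")
```

Applied to the national totals cited above, the same 30 percent rule accounts for the $94 million of the $313 million LEA share that had to go to high-need districts in school year 1995-96.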
Some of these mechanisms are required by the Safe and Drug-Free Schools and Communities Act; others are required or permitted under other generally applicable laws and regulations such as the Single Audit Act and the Education Department General Administrative Regulations. Some of these activities—such as the application review process—are intended to ensure that program activities and expenditures comply with federal requirements. Others seek to determine if programs are addressing national goals. State and local plans form the basis for Safe and Drug-Free Schools accountability. States cannot get Safe and Drug-Free Schools funds without submitting a plan consistent with the act and approved by Education. Education reviews states’ plans for compliance with the act and other federal requirements and for program quality. In addition, state plans provide Education with detailed information on what states want to accomplish with their funding and their program management strategy. Our review of Education’s files on 16 state plans for school years 1995-96 and 1996-97 showed that Education, as required by the act, reviewed state plans and required states to revise plans that did not conform with the law’s requirements before disbursing funding to the states. Education reviewed each application to ensure the completeness and sufficiency of the information provided. When reviewers identified missing or inadequate information, they asked the states to provide additional information, and Education notified states on time that they would receive their grant awards. For school year 1996-97, states submitted their plans on time, and Education again reviewed the plans for conformity with federal requirements. Although Education sometimes requested additional information from states before awarding Safe and Drug-Free Schools funding, the Department also approved some state plans conditionally. In these cases, Education specified in states’ grant award documents additional time—1 year—for them to revise their plans to conform with federal requirements. Education established procedures for its review of state plans and provided its staff with checklists and other forms on which to document the results of these reviews. These procedures varied little for the 2 years encompassing our review. Education documented the results of its review in departmental records, including at least a copy of each state’s plan, the reviewers’ comments, material from each state responding to Education’s request for supplemental information, and grant award documents. In both years, Education’s review included checks for compliance with the act. For example, Safe and Drug-Free Schools program staff initially reviewed plans, checking to make sure each state plan had all of the law’s required assurances, signatures, and plan components. Education asked states whose plans did not pass this review to supply the missing information. Program staff also read state plans, documenting any planned activities that failed to conform with or fully satisfy federal requirements. Program staff then shared the results of this review with state officials, requested additional information, or suggested plan revisions. Education also reviewed state plans for quality as part of its plan approval process for the 2 years we reviewed. 
For school year 1995-96 plans, Education's Safe and Drug-Free Schools program staff conducted this review and raised questions with state officials about a variety of program quality issues such as the planned program's ability to address assessed needs. For school year 1996-97 plans, Education used a peer review process, with program staff from various Office of Elementary and Secondary Education programs as reviewers along with external experts. Education's process for the quality review was essentially the same for 1996-97 plans as it had been for 1995-96 plans. Education also monitors states' activities. Monitoring activities include state and local visits, reviews of state audit findings, and investigations by Education's Inspector General (IG). Each monitoring visit involves an initial visit to a SEA; subsequent visits to local school districts may also be a part of the monitoring visit. Until September 1994, Education's on-site monitoring visits were program specific; that is, they were made only to review Drug-Free Schools' state and local program activities. In school year 1993-94, Education conducted program-specific monitoring reviews in three states; in school year 1994-95, Education conducted two such reviews. The Department used a variety of criteria to select states for on-site reviews, including complaints. In September 1994, however, Education changed the way it conducted on-site monitoring reviews. The Department's new process—called an integrated review process—uses an entire team of Education officials representing all the federal education programs in which a state participates to review a state's use of federal aid to reach its educational goals. Education piloted this integrated review process in school year 1994-95, visiting five states. In school year 1996-97, Education visited 20 states to conduct integrated reviews, which included reviews of Safe and Drug-Free Schools programs. In addition, the Department has in the past visited states to resolve allegations of impropriety related to the use of funds under the Drug-Free Schools and Communities Act. Education did so in West Virginia in 1992 and in resolving adverse audit findings in Michigan in 1994. In West Virginia, Education received a complaint letter from a parent and directed the state superintendent of education to investigate. Education officials twice visited West Virginia—first in 1992 and again in 1994—in response to complaints about the curriculum used in one LEA's Drug-Free Schools program. As part of their review, federal officials interviewed state and local education officials and reviewed relevant curriculum materials. In Michigan, state auditors questioned some LEA expenditures under the Drug-Free Schools and Communities Act. The findings were reviewed by Education's IG and the program staff. The Department sustained some findings but disallowed others. (See app. I.) Education also uses its reviews of state audit findings and on-site IG reviews to stay informed of state activities. Each year, states' federally funded programs must be independently audited as part of the federally required single state audit process. These audits—which may include the Safe and Drug-Free Schools program—identify specific findings, such as expenditures not allowable under the authorizing legislation. These findings are resolved by the Assistant Secretary, Office of Elementary and Secondary Education, who sustains or rejects the findings after considering information provided by the auditor and auditee.
The single state audits have uncovered improper and questionable expenditures in state and local programs. For example, state auditors in Michigan uncovered questionable state expenditures of federal Drug-Free Schools and Communities Act funding. Their findings triggered a state legislative review of the program. In the last 3 fiscal years, Education’s IG has conducted two studies of activities under Drug-Free Schools. A citizen’s complaint prompted a 1995 audit of certain financial matters in the administration of the West Virginia program. In response to the complaint, Education’s IG sought to determine if one of West Virginia’s regional education service agencies was administering its Drug-Free Schools program in compliance with applicable federal acts and regulations. More recently, in February 1996, the IG issued a report describing the programs offered in nine local Drug-Free Schools programs in eight states. Although the IG work plan for fiscal years 1996 and 1997 includes no audits of any Safe and Drug-Free Schools activities, the 1997-98 draft work plan includes two audits of Safe and Drug-Free Schools and Communities Act activities. The first audit would examine the use of Safe and Drug-Free Schools funds and the amount of such funding reaching the classroom. The second audit would review program performance indicators. In addition, Education issued an audit supplement in June 1996 providing further guidance that will be used, for example, when states audit Safe and Drug-Free Schools activities. The supplement, which pertains to several Education programs amended by the Improving America’s Schools Act, will be used immediately by the states to conduct audits of school year 1995-96 program grantee activities. Suggested audit procedures include reviews of funded activities, expenditures, and other related records to determine whether Safe and Drug-Free Schools funds were used for any prohibited activities. As required by the act, Education is gathering information about the Safe and Drug-Free Schools program. Overall, Education’s data collection and evaluation activities comprise a (1) national evaluation of drug- and violence-prevention activities, including those funded under the Safe and Drug-Free Schools program; (2) national data collection on violence in schools; (3) national survey to gather information about local program improvement activities; and (4) compilation of state-level reports on program effectiveness and progress toward state- and locally defined goals for drug and violence prevention. Education plans to provide information from these components, except the survey of LEAs, to the Congress in January 1998. No date has been established for reporting results of the local survey. Education, in collaboration with the National Institute of Justice, has begun to evaluate the impact of violence-prevention programs as required by the act. The evaluation is designed to describe the types of activities funded with federal violence-prevention moneys, including Safe and Drug-Free Schools funds, and to identify the most promising practices among these activities. To acquire this information, the evaluation will compare matched pairs of schools with similar characteristics, but dissimilar safety profiles, to determine why the schools differ on certain safety measures. 
The evaluation should provide information about the effectiveness of specific interventions, officials told us, such as peer mediation, as well as broader influences on program effectiveness, such as school order and organization and class size. It will not describe the effectiveness of specific Safe and Drug-Free Schools and Communities programs nationwide. In addition to evaluating violence-prevention programs, Education, through its National Center for Education Statistics, is gathering descriptive data on violence in the nation’s schools. The data were obtained by survey from a nationally representative sample of schools and, in conjunction with existing national databases, will provide detailed information on the extent and nature of violence in schools. Although not required by the act, Education officials told us they plan to survey a nationally representative sample of LEAs participating in the Safe and Drug-Free Schools program to examine program improvement at the local level. The survey, designed to gain information about LEAs’ assessment of program effectiveness and their use of such information in ongoing program implementation, will ask LEAs to report the goals and objectives established for their Safe and Drug-Free Schools programs and the measures they use to assess progress toward these goals. Though plans for the survey have not been completed, Education officials report that this survey should be the first of periodically administered surveys to obtain this information. The state-level reports on program effectiveness required by the act are likely to be the primary source of information about Safe and Drug-Free Schools programs’ effectiveness, both nationally and locally. Education—though not required to do so—has provided states with suggested program performance indicators that may be used to assess and report program effectiveness. However, it is uncertain to what extent data from these indicators will provide information about the effectiveness of Safe and Drug-Free Schools and Communities programs. First, states do not have to use Education’s indicators but may develop and use their own indicators. Second, though the indicators were made available to states in draft form in August 1996, states did not receive the completed data collection instrument until December 1996. As a result, variability in state data collection efforts may prevent some states from providing the desired information, and Education officials acknowledge this. Expecting difficulties in aggregating data from the state-level reports, the Department is working with a private contractor to categorize and summarize the data. Education officials expect state data to conform more closely with Education’s performance indicators, they said, as states become more familiar with the form and have a chance to adjust their own data collection systems. Although the act requires reports every 3 years, Education is providing states with a mechanism to furnish yearly information. Education has no information yet to estimate how many states, if any, will provide information more often than every 3 years. Nearly all states use approved local plans as the primary means for ensuring a local program’s compliance with the act’s requirements, along with a variety of other methods. States’ use of the plans to ensure compliance often begins when LEAs submit their plans for state approval, with states using the approval process to ensure that an LEA’s planned program conforms with the act’s requirements.
Once local plans have been approved, state officials monitor local programs, they said, using site visits, telephone contacts, and reviews of reports submitted by LEAs of their program activities and expenditures. A few states reported using a combination of these methods to oversee local programs. States must approve local plans before an LEA may receive its Safe and Drug-Free Schools grant. State approval, however, is not automatic. Ninety-six percent of the state officials responding to our survey said some LEAs had to revise their plans to obtain state approval. A plan could be judged unacceptable for relatively minor or for more substantial reasons, state officials told us. For example, a plan lacking all the appropriate signatures might require only minor revisions. Other plans, however, such as those lacking measurable goals and objectives or those with budgets that were incongruent with the planned program activities, might require more substantial revision. Most local plans, however, are eventually successfully revised and gain state approval. In school year 1995-96, only a small percentage of LEAs did not receive Safe and Drug-Free Schools funding because their plans were not approved, state officials told us. The act requires states to use a peer review or other method of ensuring the quality of applications. More than half the states use a peer review process. Officials in 29 states told us they use a peer review process; in 19 of those states, the peer reviewers’ decisions are binding. The composition of peer review panels varies by state. In some states, peer review panel members include representatives from the LEAs. Georgia and Virginia, for example, are among the states that reported using LEA representatives as peer reviewers. In other states, such as Colorado, Alabama, and Idaho, peer reviewers come from diverse groups such as the SEA’s staff, the Safe and Drug-Free Schools advisory group, and local drug-prevention experts. States that reported using no peer review panel told us that SEA officials review and approve local plans. In the states we visited, officials use their review of local plans to ensure that LEAs’ planned activities conform with the act’s requirements. West Virginia’s coordinator told us that she reviews each local plan for compliance. In Michigan, state officials must certify in writing that each approved local plan conforms with the act’s requirements. We heard similar anecdotal evidence when we spoke with our survey respondents. For example, officials in Arizona and Nebraska also reported reviewing local plans for compliance as part of the local plan approval process. States reported that they monitor local activities and expenditures, in accordance with the act, using a variety of mechanisms, such as site visits and document reviews. Most state Safe and Drug-Free Schools officials who use site visits to monitor said site visits are the most effective method for monitoring LEA activities. Documents reviewed by states include program and expenditure reports from LEAs. States use the local plan to monitor program compliance as well as to develop the framework for site visit observations. A few state officials also cited several barriers to monitoring local activities. The most prominent of these are resource shortages, that is, lack of staff and time. State officials oversee local programs by visiting LEAs, reviewing LEAs’ program and expenditure reports, and making phone contacts.
In school year 1995-96, state officials in 48 states and Puerto Rico reported making more than 1,900 site visits to local programs; 18 states, Puerto Rico, and the District of Columbia used site visits more frequently than any other oversight method. Although 22 states reported making regular site visits, 12 states selected the sites they visited randomly. Nineteen states reported visiting sites on the basis of LEA requests or complaints. States also selected sites to visit on the basis of other criteria such as the need for technical assistance, the amount of carryover funds, and whether the LEA had received additional funding because it was considered “high need.” When asked how often they expected to perform site visits to local programs, 16 states that performed site visits in school year 1995-96 said they expected to visit each local program once every 3 years. Only 3 states, the District of Columbia, and Puerto Rico expected yearly visits; 19 states said they expected to visit programs every 3 to 4 years. Site visits include a wide range of activities, from reviewing program records to on-site observations, state officials told us. Most of the states that conducted site visits in school year 1995-96, however, reported the following common activities: examining program and financial records; reviewing the local curriculum; and interviewing staff, students, and parents. In addition to site visits, state officials in 31 states and the District of Columbia said they oversee local programs by reviewing documents provided by LEAs. Nine states reported this as the most often used monitoring method. Only five states reported using phone calls or technical assistance contacts as the most often used method for monitoring local activities. (See table III.6 in app. III.) The states we visited use most of the mechanisms cited by our survey respondents to monitor LEAs’ program activities and expenditures. For example, Michigan and West Virginia use site visits and reviews of LEAs’ program and expenditure reports to ensure that programs are implemented in compliance with the act. West Virginia’s coordinator told us she also uses telephone contacts as a monitoring mechanism. Virginia’s coordinator, citing staff shortages as the reason the state could not visit sites in school year 1995-96, said the state relies on its review of LEA expenditure reports to monitor LEA programs. Although the three states’ local reporting requirements differ somewhat, each state requires LEAs to submit an annual progress report, including information on their programs’ activities and expenditures, as well as expenditure reports. State officials have established standard policies and procedures for site visits, our research revealed. Michigan’s Office of Drug Control Policy, for example, has developed a “Local Program Review Guide” that SEA staff must use when monitoring LEA sites. The guide has specific questions about the local program’s characteristics, such as curriculum content, parental involvement, and the local advisory council. The state reviewer must document findings for each characteristic. The guide also specifies the type of documentation to be used. West Virginia has also written policies and procedures to guide monitoring practices. In addition to reviewing program records, West Virginia’s Safe and Drug-Free Schools coordinator said she conducts interviews with local program administrators and actually observes program activities.
Beginning in the 1996-97 school year, she told us, she also plans to include a review of local vouchers in her site visit activities in response to a recommendation by the state auditor. As permitted under the act, all states we surveyed had established reporting requirements for LEAs receiving Safe and Drug-Free Schools funds. Generally, states most often rely on annual reporting, although a few states require semiannual or monthly reporting. For example, 36 states reported that they require LEAs to provide an annual progress report. Three states require more frequent reports. Twenty-eight states said they require an annual expenditure report; 17 states require LEAs to report on their expenditures more frequently. In addition, seven states reported that they require monitoring reports from LEAs when the LEAs visit program sites. In addition to these requirements, most states require LEAs to submit a report documenting their expenditures before the state releases funding to them. Twenty-six of the states distribute funds on a reimbursement basis, they said. LEAs use their own funds to pay program costs and are later reimbursed for their expenditures by the state. The timing and information requirements of these reports vary, with some states requiring a more detailed explanation of spending than others. For example, Michigan Safe and Drug-Free Schools officials require LEAs to report just the total amount of money spent as of the date the state requests reimbursement. In contrast, South Dakota requires LEAs to send in copies of their vouchers before being reimbursed for program funds, according to state officials. States must obtain information from LEAs for the required triennial reports to Education describing the implementation, outcomes, effectiveness, and progress of state-level and LEA-operated programs. At the time of our survey, however, many states had little information about the extent and nature of program evaluation activities at the local level. For example, of those state officials who reported local evaluation activities, many did not know the number of LEAs conducting evaluations or the objectives and activities of the LEA evaluations. In addition, we asked state officials what information they planned to include in their triennial reports. Many of the state officials who responded to this question told us either that they had not determined what information they would include in their report or that they would include whatever information Education required of them. As permitted under the act, SEAs and LEAs use Safe and Drug-Free Schools funds for a variety of activities. Although states often require LEAs to report on their expenditures, the reported data are seldom routinely aggregated to provide a statewide picture of Safe and Drug-Free Schools spending. State officials do not aggregate expenditure data, they told us, because no reporting requirement exists for them to do so. Although states use their program funds to provide a variety of services, in most states, training and technical assistance for LEA staff and others, including parents, is a frequent investment (see fig. 2 and table III.2 in app. III). Forty-five states and Puerto Rico said they use a portion of their state program funds in this way. Other categories of expenditures reported by many states include curriculum development and acquisition (32 states), violence prevention (27 states), and state-level program evaluation (22 states).
Other activities reported included demonstration projects (18 states) and activities to provide cost-effective programs to LEAs (20 states). LEAs provide a broad range of activities to students with Safe and Drug-Free Schools program funds, according to state officials (see fig. 3). These activities include drug-prevention instruction (provided by 91 percent of the LEAs) and violence-prevention instruction (provided by 68 percent of LEAs); staff training on new drug-prevention techniques and use of new curriculum materials; special one-time events, such as a guest speaker, or drug- and alcohol-free social activities, such as a dance or picnic; parent education/involvement; student support services, such as counseling and referral services; and curriculum development and acquisition. Ninety-one percent of LEAs provide drug-prevention instruction. Staff training is the next most offered activity, with 77 percent of districts reporting such training. The Safe and Drug-Free Schools program is one of several substance abuse- and violence-prevention programs funded by the federal government. The major purpose of the programs is to help the nation’s schools provide a disciplined environment conducive to learning by eliminating violence in and around schools and preventing illegal drug use. States and localities have wide discretion in designing and implementing programs funded under the act. They are held accountable for achieving the goals and objectives they set as well as for the federal dollars they spend. As permitted under the act, states and localities are delivering a wide range of activities and services. Likewise, accountability mechanisms have been established and appear to be operating in ways consistent with the act. The lack of uniform information on program activities and effectiveness may, however, create a problem for federal oversight. First, with no requirement that states use a consistent set of measures, the Department faces a difficult challenge in assembling the triennial reports so that a nationwide picture of the program’s effectiveness emerges. Second, although Education provides a mechanism for states to report information annually, under the act, nationwide information on effectiveness and program activities may only be available every 3 years, which may not be often enough for congressional oversight. The Department of Education provided written comments on a draft of this report, and we incorporated, where appropriate, technical clarifications it suggested. In addition, the Department expressed concern about our observations on the multiple programs designed to address youth violence and drug abuse. In the Department’s view, “the discussion of the numerous Federal programs designed to reduce or eliminate youth drug use or violence treats the topic too generally. While other Federal programs may address various aspects of these two very serious problems, we know of no other Federal program that provides widely available, sustained support to schools to prevent or reduce youth drug use or violence.
The draft fails to provide detailed information about these other, numerous Federal programs, and reaches a tentative conclusion about duplication and effectiveness that is not supported by this draft report.” We did not revise our reference to the multiple programs in response to this comment because (1) we state only that the potential for duplication exists among these multiple, nonintegrated programs and (2) we also state that we did not fully examine these programs to document the extent to which this may be true for drug and violence programs. In addition, this background information provides what we consider to be an important general context for considering the Safe and Drug-Free Schools program. The additional detail about the other programs has been reported in our other products cited in the footnotes. We are sending copies of this report to appropriate House and Senate committees and other interested parties. Please call me at (202) 512-7014 or Eleanor L. Johnson at (202) 512-7209 if you or your staff have any questions. Major contributors to this report are listed in appendix IV. The key issue in this allegation—that the state improperly used federal drug education funding to implement a comprehensive health curriculum—resulted from a state legislative review of the Michigan Department of Education’s implementation of a comprehensive school health curriculum. The state review, which had been prompted by parents’ concerns about the curriculum content, uncovered questionable expenditures of federal drug-prevention funding under the Drug-Free Schools and Communities Act for curriculum materials not related to drug education as well as questionable fiscal practices. In addition to the legislative review, Michigan’s Director of Drug Control Policy conducted his own investigation. His review and that of the state auditor concluded that many of the expenditures for the comprehensive school health curriculum violated federal requirements for federal drug-prevention funding. As a result of the state auditor’s adverse audit findings, the U.S. Department of Education became involved. Federal officials reviewed the audit findings and issued final rulings on whether the expenditures in question violated federal requirements. Although state auditors questioned the expenditures for the comprehensive health curriculum, upon obtaining further information from state officials, Education found these expenditures acceptable. Education, however, did find that the Michigan Department of Education had violated other federal requirements in managing federal drug-prevention funding. The Safe and Drug-Free Schools and Communities Act, passed in 1994, contained an administrative provision that authorized the use of Safe and Drug-Free Schools funding and, retroactively, the use of Drug-Free Schools and Communities Act funding for comprehensive health programs. Between 1992 and 1994, members of the state legislature and the director of Michigan’s Office of Drug Control Policy charged Michigan state education officials with improperly using federal Drug-Free Schools and Communities Act funding to implement a statewide comprehensive school health program. The program, called the Michigan Model for Comprehensive School Health Education, sought to educate students about maintaining health and included a drug education component. The program sparked controversy when parents statewide expressed opposition to their state representatives.
In response to these complaints, state legislators launched their own inquiry. During their investigations, legislators came to question the appropriateness and legality of using federal Drug-Free Schools funding to implement a comprehensive health education program. In addition, they uncovered questionable administrative practices and expenditures made with Drug-Free Schools funding. In 1994, the Family Law, Mental Health and Corrections Committee of the Michigan State Legislature released a report of its investigation into the Michigan Department of Education’s management of federal Drug-Free Schools funding. The Committee examined seven issues, concluding that the Michigan Department of Education (1) “diverted” federal Drug-Free Schools funds “to activity not related to drug prevention” and (2) illegally restricted local school districts’ discretion in using their drug education funds. The Committee also concluded that “a history of poor grant management and oversight by the department of education” had occurred and found that greater accountability was needed to ensure the proper uses of public funds. Among its recommendations, the Committee called for performance audits of Drug-Free Schools grantees and state-level agencies involved with Drug-Free Schools program expenditures. The Committee’s findings echoed the findings of earlier investigations by the state’s Office of Drug Control Policy. Calling the Michigan Model’s implementation the “Michigan Morass,” the Director of the Michigan Office of Drug Control Policy asserted that the problem rested in “how funds diverted to it were obtained and administered,” especially federal Drug-Free Schools funds. The many problems cited by the director included questionable bidding practices on competitive contracts, potential “double-dipping” by state employees who served as both program coordinators and paid consultants, and the purchase of curriculum materials not directly related to the drug education components of the Michigan Model. According to him, these purchases included giant toothbrushes, a human torso model, dog bone kits, and bicycle pumps. Because of the state audit findings, the issue of the use of Drug-Free Schools funds for delivering a drug education program through a comprehensive school health curriculum came before the U.S. Department of Education for resolution. Specifically, state auditors had found that (1) the Michigan Department of Education failed to “appropriately document to what extent Drug-Free Schools and Communities Act (Drug Free Schools) funds could be used to fund comprehensive health education programs in accordance with statutory and regulatory requirements,” and (2) “the level of funding provided by to support the Michigan Model exceeds the relative weight of drug abuse education and prevention criteria contained in the Michigan Model.” Federal education officials did not sustain these findings. Education’s rejection of these findings rested on its analysis of the federal law, provisions of nonregulatory guidance, and a 1991 ruling by Education’s Assistant Secretary for Elementary and Secondary Education on the issue. Citing federal nonregulatory guidance on this issue, Education pointed out that LEAs may include drug abuse education and prevention in a comprehensive health education program, but the expenditure of Drug-Free Schools funds is limited to the drug abuse education and prevention program components.
Education also noted that the guidance did not “specify particular methods to be used in determining the proportionate share of a comprehensive health education program to be funded by the Drug Free Schools Act.” Referring to its previous ruling, Education said the Michigan Department of Education had demonstrated through an analysis of the Michigan Model’s curriculum content that the level of Drug-Free Schools funding for the Model was consistent with the Model’s level of drug abuse education and prevention content. Though the state auditor challenged the Michigan Department of Education’s methodology for determining program content, Education ruled that “the auditors provided no evidence to demonstrate that the methods used by the subcommittee were in violation of any statutory or regulatory requirements.” Education concluded, “Consequently, there is insufficient information to establish that the [Michigan State Department of Education] has violated the requirements contained in the [Drug-Free Schools and Communities Act] and other applicable regulations related to the proportionate use of these funds for the Michigan Model.” Though Education officials rejected auditors’ findings on the uses of Drug-Free Schools funds for implementation of the Michigan Model, the Department sustained audit findings on several other points. In brief, Education sustained audit findings that the Michigan Department of Education failed to (1) respect the broad discretion granted local grantees in developing their drug education programs, (2) ensure that LEA grant application requirements were fulfilled, and (3) evaluate programs in accordance with federal requirements. The Department required the state to take appropriate corrective actions. An LEA’s use of Drug-Free Schools funding to provide out-of-town training for members of its school/community coalitions led to concerns that these expenditures did not meet federal criteria. Although the Drug-Free Schools Act permitted a wide range of activities, state and local education agencies were also required to adhere to the Education Department’s General Administrative Regulations. These regulations include a requirement that costs be “necessary and reasonable” and discuss the allowability of certain kinds of costs. The state learned of the allegation when a caller reported the alleged misuse of funds to the Governor’s Fraud Hotline. The complaint was forwarded to the Virginia Department of Education’s internal auditor for an investigation, which included interviews with local officials and a review of the county auditors’ report on the LEA’s expenditures. Ultimately, state officials concluded the expenditures were allowable under federal requirements but expressed concern about the appearance of fiscal impropriety. The entire matter was resolved without federal intervention. In 1995, the Governor’s Office, through its fraud hotline, received an allegation charging the Fairfax County Public Schools with the misuse of Drug-Free Schools funds. An anonymous caller to the hotline alleged that Fairfax County school district officials were using federal Drug-Free Schools and Communities Act (Drug-Free Schools) funds for staff training sessions at an expensive summer resort. The call was referred to the Virginia State Department of Education’s internal auditor for investigation. State officials learned that the Fairfax County Public Schools had sponsored a total of 11 training sessions—each for 2-1/2 days—between March 1994 and April 1995 in St. Michael’s, Maryland.
The sessions, designed to facilitate the formation of school-community coalitions to support and enhance school-based drug use prevention activities, trained community representatives, business owners, school board members, alternative school staff, and members of the Fairfax County Board of Supervisors. In all, the district trained 876 individuals at a total cost of $181,397.71, or $207 per participant, according to Fairfax County public school officials. In the course of their investigation, state officials also learned that the district’s fiscal year 1994 expenditures had been audited to determine if Fairfax County Public Schools’ Drug-Free Schools and Communities grant was being administered in compliance with federal and state requirements. The subsequent audit report discussed the expenditures for the district’s training sessions in St. Michael’s. Auditors concluded that federal statutes had not been violated but stated the training sessions could be seen as excessive, unnecessary, and social in nature and cited Education Department General Administrative Regulations requirements that expenditures be “necessary and reasonable for proper and efficient administration of the grant.” The auditors cited the Regulations’ requirements that the grant not authorize expenditures for entertainment or social activities, including “costs for amusements, social activities, meals, beverages, lodging, rentals, transportation and gratuities.” Although the auditors concluded that the training expenses had been reasonable—the room expenses were no more than an average hotel room in the Washington, D.C., metropolitan area, and meals had been reasonably priced—they questioned the need to hold the training sessions out of state. On the basis of the local auditor’s findings and information obtained from district officials, Virginia State Department of Education officials concluded the costs for the St. Michael’s training sessions were reasonable. Though commending the LEA’s “School/Community Action Team” concept, state officials cautioned the district to take special precautions in guaranteeing that the district’s activities and expenditures were viewed by the school as necessary, reasonable, and consistent with the purposes of the Drug-Free Schools grant. The state fully reimbursed the district for each training session after the audit findings were discussed, and the state made procedural changes to avoid a similar incident in the future. The key issue in this allegation—that the state failed to ensure that local programs deliver a clear “no use” message and that locals comply with federal requirements for expenditures and financial management—has been addressed by federal reviews of state and local activities under the Drug-Free Schools Act. Regarding the lack of a “no use” message, federal officials found that insufficient evidence existed to support this claim. As noted previously, federal officials did observe instances of noncompliance with financial management requirements. However, both the SEA and the LEA have taken steps to correct these problems. In March 1995, the Chief Counsel of the House Subcommittee on National Security, International Affairs, and Criminal Justice met with a West Virginia parent to discuss her concerns about drug education and prevention programs. In subsequent correspondence with the Chief Counsel, the parent reiterated her concerns, charging a lack of accountability on federal officials’ part in ensuring state and local compliance with the Drug-Free Schools Act. 
Local officials, she said, implemented a curriculum teaching “that only abuse of a drug is harmful, leading our youth to believe and implying that moderation and occasional use of cocaine, marijuana, or alcohol might be an acceptable choice for themselves.” The parent also said she had withdrawn her children from her district’s drug-education program but expressed concern for children still enrolled in the program. The parent’s letter to the Chief Counsel was not the first expression of her concern about West Virginia’s implementation of the Drug-Free Schools Act. For example, she asked federal officials in the U.S. Department of Education in 1991 to conduct a formal investigation of the QUEST curriculum used by her West Virginia school district, Jefferson County. Characterizing the curriculum as “non-directive,” she said she objected to the curriculum’s lessons in self-esteem and values clarification. The concerns she raised ultimately resulted in a program review by Education’s Drug-Free Schools officials and a limited-scope audit by Education’s Inspector General (IG). In addition, the Office of National Drug Control Policy, at this same parent’s request, reviewed the QUEST curriculum to assess its compliance with federal statutes. Both entities concluded that the curriculum violated no federal statutes. Federal officials performed two site reviews of Drug-Free Schools programs in West Virginia. The first, conducted in 1992, was performed in response to allegations that the county violated federal requirements when it failed to adopt and implement a program to prevent students’ use of illicit drugs and alcohol. As part of their review, federal officials interviewed appropriate state and local educational agency personnel and examined relevant texts and other materials. As a result of this review, a Department official concluded in September 1992 “that there is sufficient evidence to indicate that Jefferson County does offer a drug prevention program for students in all grades.” Education officials conducted another review of West Virginia’s Drug-Free Schools program, focusing on SEA activities, in 1994. The review uncovered several problems with administrative practices, including the following: the West Virginia Department of Education incorrectly calculated LEA awards in fiscal years 1993 and 1994; LEA applications failed to require all the information and assurances specified by the federal statute; LEA applications did not, but should, include information that allowed the SEA to assess the use of Drug-Free Schools funds at the local level; the West Virginia Department of Education failed to separately account for program activities and expenditures versus administrative activities and expenditures; and the West Virginia Department of Education may wish to require receipts or other evidence from LEAs before reimbursing funds for program activities. The report also noted significant improvements in the state’s monitoring of and technical assistance to LEAs. In addition, federal officials commended the West Virginia Department of Education on its peer review process. In 1995, Education’s IG performed a limited-scope audit of selected aspects of Regional Education Service Agency VIII’s (RESA VIII) administration of the federal Drug-Free Schools and Communities Act programs to determine if the agency was administering the federal Drug-Free Schools program in compliance with applicable statutes and regulations.
Overall, the IG found that, for the Drug-Free Schools program, the agency’s internal controls were sufficient to provide management with reasonable assurance that assets are safeguarded against loss from unauthorized use or disposition and that transactions are executed in accordance with management’s authorization and recorded properly to permit correct financial reporting. The IG cited two cases of material noncompliance with federal laws and regulations, however. First, RESA VIII had failed to fulfill requirements of the federal Single Audit Act of 1984 by not conducting annual audits. Second, RESA VIII used an inappropriate indirect cost rate during fiscal years 1992, 1993, and 1994 when it based its indirect cost on that of its fiscal agent, Berkeley County. The IG’s recommendations included instructions to both the RESA and the state. Recommendations to RESA VIII included (1) that the agency develop appropriate, reasonable indirect cost rates for fiscal years 1992 through 1994 and (2) that it obtain audits for all years required in accordance with the federal Single Audit Act and applicable regulations. The IG also recommended that the West Virginia Department of Education (1) cease requiring grantees of federal funds to use inappropriate indirect cost rates, (2) require RESA VIII to develop and submit to the West Virginia Department of Education its own indirect cost rate in accordance with federal requirements, and (3) require RESA VIII and all other RESAs to report to the Department their indirect cost rate audit results. To address your concerns about Safe and Drug-Free Schools’ accountability provisions and their implementation, we asked four questions: (1) What accountability measures are required under the act at the federal, state, and local levels? (2) What activities are used by Education for overseeing state and local programs? (3) How do SEAs ensure local programs’ compliance with the act? and (4) What specific uses are made of Safe and Drug-Free Schools funding at the state and local levels? To determine what is required under the act, we reviewed relevant documents, such as the act and its legislative history, relevant sections of the Code of Federal Regulations, and other related legislation. To assess what actions Education is taking, we followed up on allegations of impropriety in three states (Michigan, Virginia, and West Virginia), reviewing documentation and interviewing state and local officials involved in the original incident and in the investigation and resolution (see app. I for a description of each of these site visits). We also reviewed documents at Education’s headquarters in Washington, D.C., and interviewed Department officials. In addition, we reviewed Department of Education state files for 16 states: Connecticut, Delaware, Illinois, Indiana, Iowa, Massachusetts, Michigan, Missouri, Nebraska, Nevada, New York, Rhode Island, Tennessee, Texas, West Virginia, and Wyoming. These state files included documentation, such as a copy of the state’s plan, the reviewers’ comments, materials from the state responding to Education’s request for supplemental information, and grant award documents. States were selected using a stratified, random sample. To select states for site visits, we used two main techniques to help identify allegations. First, we followed up on leads provided by correspondence to a member of the Congress. For example, a set of seven letters given to us alleged improper use of funds.
We reviewed these letters and called all seven authors to clarify their complaints. On the basis of the letters and phone calls, we eliminated six of these allegations from our investigation because they concerned curriculum issues. Because the Safe and Drug-Free Schools Act makes curriculum a state and local issue—the Secretary of Education is specifically prohibited from prescribing or proscribing specific materials or approaches—curriculum could not be used as a basis for inappropriate use of federal funds. We did visit the site of the seventh allegation—West Virginia. We also chose the West Virginia program because it had been audited by the Inspector General (IG) of the Department of Education and was the subject of other Department of Education reviews, providing us with much information that could be reviewed in a relatively short time. Second, in reviewing the legislative history, we found that a floor debate in the House had mentioned a number of other allegations. One, the alleged misuse of funds in Virginia for training retreats held in a resort location in Maryland, had been the subject of investigations, giving us ample data to review. Therefore, we chose Virginia for a site visit. Finally, the use of Drug-Free Schools and Communities Act program funds in Michigan for a comprehensive health program had already prompted a large state-level investigation. We chose Michigan for a site visit because of the importance of this investigation. To determine what oversight was required and assess accountability activities at the state and local level, we surveyed the 50 states, the District of Columbia, and Puerto Rico about their activities, receiving information from all 50 states, Puerto Rico, and the District of Columbia. Although we did not verify the data the states supplied us, we did review supporting documentation they provided and used our site visits to Michigan, Virginia, and West Virginia to collect examples of how the law was being implemented and to observe accountability practices at the state and local level. Most information about state accountability collected through the questionnaire and follow-up phone calls, however, was reported by SEAs. Our work was conducted from February 1996 to May 1997 in accordance with generally accepted government auditing standards. The tables in this appendix provide information, by state, on selected aspects of states’ Safe and Drug-Free Schools and Communities programs. Table III.1 provides the amount of each SEA’s school year 1995-96 allocation; tables III.2, III.3, III.4, and III.5 provide information on the activities funded by Safe and Drug-Free Schools grants. Information about state accountability mechanisms, such as methods used for monitoring and distributing funds, appears in tables III.6 and III.8. Table III.7 provides information on private school participation in the Safe and Drug-Free Schools program. Table III.9 provides information on how states selected their neediest districts. Table III.3: Percent of LEAs in Each State Providing Selected Services, School Year 1995-96—Teacher/Staff Training, Drug-Prevention Instruction, Violence-Prevention Instruction, Curriculum Development/Acquisitions, and Student Support Services.
In addition to those named above, the following individuals made important contributions to this report: John Carney helped design the questionnaire, gather survey information from states, reviewed Department of Education files, wrote summaries of those reviews, and drafted sections of the report; D. Catherine Baltzell, Dianne Murphy Blank, and Deborah Edwards helped design the study and advised on methodology; Edward Tuchman helped gather survey data from states and, with Wayne Dow, provided survey analyses; and Linda Stokes and Sheila Nicholson helped gather survey information from states. In addition, Robert Crystal and Julian Klazkin performed the legal analysis and provided ongoing legal advice. Pursuant to a congressional request, GAO reviewed: (1) the accountability measures the Safe and Drug-Free Schools and Communities Act requires at the federal, state, and local levels; (2) the activities that the Department of Education uses for overseeing state and local programs; (3) how state education agencies (SEA) ensure local programs' compliance with the act; and (4) how Safe and Drug-Free Schools funding is specifically used at the state and local levels.
GAO noted that: (1) the Safe and Drug-Free Schools program is one of several substance abuse- and violence-prevention programs funded by the federal government; (2) the act that authorizes the program requires four major types of actions to ensure accountability on the federal, state, and local levels: (a) an application process requiring approval of state and local program plans; (b) monitoring activities by state agencies; (c) periodic reports and evaluations; and (d) the use of local or substate regional advisory councils; (3) Education oversees state programs directly and local programs indirectly through required state actions; (4) working along with states, Education reviews, helps states to revise, and approves state plans; (5) Education has issued no program-specific regulations on the act; (6) Education does require states to conform to general and administrative regulations and advises states on program matters, such as allowable expenditures, through nonbinding guidance; (7) the Department may get involved in resolving allegations of impropriety in the use of funds; (8) no overall evaluations of the Safe and Drug-Free Schools program have been completed; (9) Education conducts evaluation activities designed to provide both descriptive and evaluative information about the programs; (10) Education's evaluative activities focus on broader aspects of program implementation; (11) Education is indirectly gathering information about the effectiveness of specific state and local programs through reports states must submit to Education every 3 years; (12) the lack of uniformity in what states report may create a problem for federal oversight; (13) nearly all states use the approved local plans to ensure local programs' compliance with the act's requirements; (14) states use local compliance with the approved plans as a way of ensuring that funds are spent on activities permitted under the act; (15) most states use both on-site visits and local self-reports to oversee local program activities; (16) local education agencies (LEAs) are also required to evaluate the effectiveness of their programs; (17) SEAs and LEAs use Safe and Drug-Free Schools funds for a variety of activities; (18) states mostly use their 5-percent set-aside for activities such as training and technical assistance; (19) ninety-one percent of LEAs provide drug-prevention instruction; and (20) staff training is the next most offered activity. |
The Marine Corps’ HMX-1 squadron uses a fleet of 19 VH-3D and VH-60N helicopters to transport the President in the national capital region, as well as when the President is traveling in the continental U.S. and overseas locations. These aircraft have been in service for decades. The events following the September 11, 2001, terrorist attacks on the United States highlighted the need for improved transportation, communication, and security capabilities for the presidential helicopter fleet. As a result, a program (subsequently designated the VH-71 program) was initiated in April 2002 to develop aircraft to replace the helicopters currently in service. Initial plans to field the VH-71 by 2011 were accelerated in response to a November 2002 White House memorandum directing that a replacement helicopter be fielded by the end of 2008. By 2009, significant cost growth, schedule delays, and performance issues had resulted in the decision to terminate the VH-71 program. At the time of termination, in June 2009, the estimated VH-71 program cost had doubled from about $6.5 billion at development start in 2005 to $13 billion. Because there remained a need to replace the current in-service presidential helicopters, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD(AT&L)) directed the Navy by late June 2009 to present a plan to develop options for a new program to acquire replacement aircraft, now designated VXX. The Navy’s VXX efforts began immediately with the initiation of an AOA to assess options on how to proceed toward developing and fielding the replacement presidential helicopter. It was focused, at least in part, on one of the primary lessons learned from the VH-71 program experience—the need to establish and maintain a sound business case. A sound business case is one in which a balance is established between requirements, costs, and schedule that reflects an executable program with acceptable risk. According to program officials, the program would be aligned to pursue a best practices knowledge-based acquisition approach with the intent of establishing and maintaining an affordable business case. Last year, we reported that the VXX program’s entry into development had been delayed as the program worked to provide a sound initial business case for development, which is a best practice that was not followed by the terminated VH-71 program. The Navy had produced an initial AOA report under June 2010 study guidance from DOD’s Office of Cost Assessment and Program Evaluation (CAPE). This initial work presumed an acquisition strategy under which the program would start in the technology development phase of DOD’s acquisition process. The Navy released this initial AOA report to DOD. While CAPE found this initial work sufficient, OUSD(AT&L) did not accept its results. Rather, it identified the need for a 2012 update to address using a streamlined acquisition strategy focused on mitigating cost drivers identified in the 2010 AOA study. Additional guidance was provided by the Office of the Secretary of Defense (OSD) in December 2011. That guidance reflected insights on requirements gained in the 2010 study and expectations of using a streamlined acquisition approach proposed by the Navy. The proposed approach would leverage mature technologies being developed outside of the program before including them on aircraft selected for the program, with their adoption being facilitated by open systems architectures.
This would allow the program to start with Milestone B approval for engineering and manufacturing development (EMD) and then select an existing in-production commercial or military platform and work to integrate communications and mission systems provided by the government, which are expected to be mature by that time. Figure 1 depicts the program’s entry into DOD’s acquisition process as currently anticipated. DOD is investing in the current fleet of presidential helicopters to increase their service life and address capability gaps while working to field VXX aircraft. The current inventory of 19 aircraft is sometimes stressed to meet operational demands—demands that have been growing—making it difficult to take them out of service for lengthy upgrades. A larger VXX inventory of 21 aircraft is expected to help address this. The Navy has made progress in the past year toward establishing a sound business case for development that reflects a rational balance between requirements, costs, and schedule. The Navy completed the AOA, which was deemed sufficient by CAPE to inform future acquisition decisions, and OSD has approved the program to proceed to a Milestone B decision. The CAPE did note, however, some areas of caution, for example, that some air vehicles would require aggressive efforts to manage the weight of the VXX while other air vehicles would be more challenged in other respects. We reviewed that AOA and found that it included elements of a sound AOA. The Navy, building on its initial 2010 study, completed its AOA on April 4, 2012, and concluded that the currently proposed acquisition approach of using mature technologies from outside the program on an in-production commercial or military helicopter was acceptable. The initial 2010 study, which considered nine alternative aircraft, revealed that technology development and recertification of aircraft for airworthiness were primary cost drivers of the total projected program cost under the approach it presumed. The 2012 updated study (focused on the most promising aircraft) assessed that mature, certified, and capable in-production commercial and military aircraft exist that can be modified for presidential requirements and be procured under the proposed strategy using a “Customized” rather than a “Min Mod” approach. It determined that there are candidate aircraft with performance characteristics that can meet, to varying degrees, the February 2012 draft Capability Development Document (CDD) requirements used to conduct the study and found that the Navy’s proposed streamlined acquisition strategy is feasible and would reduce the program’s expected schedule, cost, and risk. Specifically, the 2012 study estimates suggest that using the proposed approach of having the program enter the acquisition process in the engineering and manufacturing development phase rather than the technology development phase, as was anticipated in the 2010 study, would reduce investment cost by approximately $1.5 billion (19.7 percent) and shorten the development schedule by about 18 percent. The 2012 study also revealed, however, that the alternatives offered varying degrees of individual system performance, with no single alternative meeting all of the VXX requirements. For example, while one alternative met most of the requirements, it would require strict weight and requirements management throughout its life to avoid a more costly Min Mod approach.
While other alternatives provided differing capacities for weight growth, they would be challenged in meeting other requirements, such as range, transportability, landing zone suitability, or material supportability. Further, for all of the alternatives in the 2012 study, it was found that if they were required to meet the military’s airworthiness standards (as opposed to the certifying authority standard they currently meet), the weight growth associated with meeting some of these standards would likely trigger a more costly Min Mod approach. The 2012 VXX AOA study team made a number of recommendations, including that (1) to mitigate aircraft certification risk, the airworthiness certifying authority actively participate in all government development activities for the cockpit, communications, and mission systems and be involved in the source selection process for the aircraft; (2) to reduce the risk of having to resort to a Min Mod approach, an active and aggressive life cycle weight management effort be put in place if the selected platform does not provide a large enough margin to accommodate future weight growth; and (3) the release of a request for proposals be contingent upon achieving acceptable technical maturity of critical government developments, such as communications and mission systems. The Director of CAPE in a May 30, 2012, memorandum concluded that the 2012 study achieved a logical outcome and was sufficient to inform future acquisition decisions. The CAPE found that the AOA demonstrates that each of the alternatives examined can be provided in a manner consistent with the streamlined acquisition approach, though with assessed limitations as described in the report. It concluded that the study also shows that an approach that avoids recertification is feasible for some of the options considered and, if adopted, offers potential for reduced cost and schedule. There are some areas of caution, however, in the CAPE’s assessment. The 2012 study identified mission limitations for the alternatives similar to those seen in the 2010 study. In particular, it was noted that some air vehicles studied exhibited weight sensitivities that would require aggressive weight management for the program’s lifecycle. Other air vehicles exhibited better performance in some aspects, including allowing for weight growth, but were more challenged in others, for example, landing zone suitability and transportability. The AOA did not examine the integration risk of the government-developed communications package and mission systems—key system components that under the Navy’s acquisition strategy are being developed outside of the program but must then be successfully integrated into the selected aircraft. The CAPE’s assessment also stated that the validity of the study results was contingent on the reduced requirements in the draft CDD—reduced from the requirements for the VH-71 acquisition—becoming finalized as documented and that a return to the previous requirements would require additional analysis. The Joint Requirements Oversight Council (JROC) subsequently approved the CDD on January 3, 2013. While program officials informed us that there were changes to requirements reflected in the approved CDD, they stated that none would affect the validity of the AOA or require the additional analysis mentioned in CAPE’s memo. A CAPE official subsequently informed us that they had reviewed the CDD and did not believe that any of the changes affect the AOA’s validity.
In addition, the official stated that CAPE had reviewed the requirements changes and was satisfied that they made sense. In an August 28, 2012, Acquisition Decision Memorandum, the USD(AT&L) approved the VXX to proceed to Milestone B—approval to enter engineering and manufacturing development—as the program's initial acquisition milestone. The Under Secretary decided that:

• Milestone B, scheduled for the third quarter of fiscal year 2014, will be the first formal acquisition system milestone for VXX; however, a Pre-EMD Review will occur prior to release of the Request for Proposals for development, integration, and production;

• Prior to the Pre-EMD Review, the Navy and the JROC are to approve the CDD, and the Director, Cost Assessment and Program Evaluation, is to develop an Independent Cost Estimate based on the approved CDD; and

• The Under Secretary will establish affordability targets for the VXX program concurrent with CDD approval by the JROC.

The Under Secretary concluded that, based on utilization of mature technologies and a proven, mature, existing aircraft, this approach would not require a technology development phase. In addition, the Under Secretary expects to waive a requirement in the Weapon Systems Acquisition Reform Act of 2009, as amended, for competitive prototyping because its anticipated cost outweighs the expected risk reduction and life cycle benefit it would provide. In our prior two reports on this acquisition, we stated that when the AOA was issued we would assess it for its robustness—the range of alternatives it considered, its depth of analysis, and its consideration of trade-offs. Based on our review of the AOA report, supporting material, and interviews of program and other defense officials, we found the AOA to be sufficient for this stage of the acquisition. It included elements that GAO has reported should be part of a robust AOA. We also found it used a cost estimating process that was substantially compliant with GAO-identified best practices. An AOA compares the operational effectiveness, suitability, and life-cycle cost estimates of alternatives that appear to satisfy established capability needs. Cost estimating and analysis are significant components of an AOA. We have previously reported on the importance of a robust AOA as a key element in ensuring a program has a sound, executable business case prior to program initiation. Our work has found that programs that conduct a limited AOA (failing to consider a broad range of alternatives or assess technical and other risks associated with each alternative) tend to experience poorer outcomes, including cost growth. We found that the AOA study team considered a broad range of alternatives. The initial 2010 study effort evaluated 9 platforms and 19 possible alternatives to satisfy the mission, and the 2012 update studied the most promising platforms in the 2010 study to document the impact the Navy's proposed streamlined acquisition strategy would have on the merits of each of those alternatives. The study team assessed effectiveness, suitability, technical, schedule, and operational risks associated with the alternatives, though, according to CAPE, it did not assess the risks of integrating government-furnished communications and mission control systems into those alternatives. The study director noted that this integration risk could not be assessed by the study team, given the maturity of these subsystems at the time. 
Rather, the AOA identified the need to consider this issue at a future engineering review when sufficient maturity existed and an accurate assessment could be made. The AOA process reflected and influenced performance trade-offs. The initial 2010 analysis was based on performance requirements that were lower in a number of areas than for the VH-71 program. The 2012 AOA study reflected additional trade-offs made with regard to cost, schedule, risk, and performance. The performance trade-offs enabled the Navy's revised strategy and are expected to result in reduced costs and a shorter schedule. Following the 2012 study, the performance requirements were further refined, as reflected in the final CDD. Table 1 illustrates some of the performance trade-offs made by comparing the minimum requirements for VXX aircraft as captured in the final CDD to the minimum VH-71 requirements captured in the equivalent Operational Requirements Document for that program. We also assessed the cost estimating procedure for the AOA using GAO's criteria for cost estimating and assessment and found that it was substantially compliant with those criteria. For the purposes of this review, we collapsed the best practices identified in the GAO Cost Estimating and Assessment Guide into four general characteristics: well documented, comprehensive, accurate, and credible. The cost estimating best practices associated with each of those characteristics used in judging the AOA are provided in appendix I. We found the AOA cost estimate to be comprehensive and well documented. We also found that it was substantially accurate and partially met our criteria for being credible. The AOA cost estimate was properly adjusted for inflation, relied on historical analogous aircraft data, contained no significant calculation errors, and had been recently updated from the estimate contained in the 2010 study. While the documentation stated that the estimate reflected most likely costs, it did not specifically identify potential contingency costs, and no cost risk analysis was performed to determine a level of confidence for the cost estimate. As a result, we were unable to determine whether the costs were indeed most likely. In addition, the AOA cost estimate was deemed to have partially met the best practices criteria for being credible because there was evidence that a robust sensitivity analysis had been performed but not an independent cost estimate or a cost risk analysis. Although there was not a cost risk analysis, a detailed technical risk assessment process was followed for identifying technical risks, their likelihood of occurring, and the consequences if they occurred. The technical risks were mitigated by incorporating costs into the cost estimate through derivation of realistic and reasonable staffing levels and sufficient schedule for design, development, and testing of each alternative. In addition, while an independent cost estimate had not been conducted, the USD(AT&L)'s August 28, 2012, acquisition decision memorandum directs CAPE's completion of one prior to the Pre-EMD Review. This is to occur in the second quarter of fiscal year 2013. As a result, the program could have confirmation of the AOA cost estimating results at that point. The program will then continue to work on its cost estimate, resulting in a more refined estimate supporting the Milestone B decision in the third quarter of fiscal year 2014. 
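To illustrate what a cost risk analysis adds beyond a point estimate, the following sketch shows, in general terms, how a simple Monte Carlo simulation can assign a confidence level to an estimate by treating major cost elements as uncertain quantities. The cost elements, dollar values, and distributions below are purely hypothetical illustrations of the technique described in the GAO Cost Estimating and Assessment Guide; they are not VXX program data.

# Illustrative sketch only: a simplified Monte Carlo cost risk analysis of the kind
# described in GAO's Cost Estimating and Assessment Guide. The cost elements,
# dollar ranges, and distributions below are hypothetical, not VXX program data.
import random
import statistics

TRIALS = 100_000

def one_trial():
    # Treat each major cost element as uncertain, drawing from triangular
    # distributions; random.triangular takes (low, high, mode), in $ millions.
    airframe_mods = random.triangular(1500, 2600, 1900)
    systems_integration = random.triangular(800, 1800, 1100)
    test_and_evaluation = random.triangular(400, 900, 550)
    return airframe_mods + systems_integration + test_and_evaluation

results = sorted(one_trial() for _ in range(TRIALS))

point_estimate = 1900 + 1100 + 550  # sum of the most likely values
confidence = sum(cost <= point_estimate for cost in results) / TRIALS

def percentile(values, pct):
    # values must be sorted; returns the simulated cost at the given percentile.
    return values[int(pct / 100 * (len(values) - 1))]

print(f"Point estimate: ${point_estimate:,}M")
print(f"Simulation mean: ${statistics.mean(results):,.0f}M")
print(f"50th / 80th percentile: ${percentile(results, 50):,.0f}M / ${percentile(results, 80):,.0f}M")
print(f"Confidence level of the point estimate: {confidence:.0%}")

In an analysis of this kind, a point estimate that falls at a low percentile of the simulated distribution would suggest the estimate is optimistic and that additional contingency may be warranted.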
The Navy’s currently proposed acquisition approach relies on the government’s providing, as government furnished equipment, mature technologies for integration into aircraft. Those technologies either already exist or are in development, some as legacy fleet aircraft upgrades. Their provision will be an important factor in the Navy’s achieving the reduced cost and schedule it seeks through its proposed acquisition approach. The program assesses their risks for integration into VXX aircraft as low—supporting the Navy’s goal of providing initial operational capability in fiscal year 2020. While the program reports the key technologies to be provided by the government for integration are on track, there are risks that they will not work out as planned. For example, the Navy had originally anticipated that the cockpit technologies leveraged into the VXX acquisition would include a “glass cockpit” display system installed as an upgrade on VH- 60N aircraft. It dropped its planned use of this system. Adopting its use would likely necessitate an airworthiness recertification of the platform selected to be the VXX aircraft, a costly and time-consuming endeavor. As a result, the Navy now plans on the prime contractor using the display systems already in use in its certified aircraft—systems that the program manager noted are as capable if not more capable than the VH-60N’s. Even if individual technologies work out as anticipated, they will still have to be successfully integrated in the aircraft. The program depends on a number of government-defined sub-systems and technologies being hosted in a new airframe. Environmental issues such as size, weight, thermal profile, and stability will have to be ascertained, not separately but in totality as a dynamic system. Only then will it be known whether key performance parameters are met, how closely, and what, if any, refinements need to be applied. In the past, we found integration issues can be significant. For example, in fiscal year 2004 DOD rebaselined the Joint Strike Fighter program extending its development by 18 months and adding resources to address problems discovered during systems integration and the preliminary design review. To mitigate integration risk, though, the VXX program is making use of a systems integration laboratory and also plans to install the communications and mission systems into a test aircraft and do demonstration testing before integration efforts begin on the VXX platform. Table 2 provides more information on the technologies to be provided by the government for integration in VXX aircraft. The program has made progress toward establishing a sound business case for development, one that rationally balances requirements, costs, and schedule. The program still faces challenges that will need to be actively managed to provide greater assurance that a sound business case is maintained throughout development as the program moves forward. These challenges include: Maintaining the VXX requirements without significant deviation throughout the acquisition process. Subsequent requirement changes will need to be considered carefully in the context of their implications for cost, schedule, risk, and performance and the program will need to effectively manage technology maturation and integration to achieve success. Managing weight growth of the platform during development so as to not trigger the need for modifications that could then require a flight recertification of the VXX platform. 
Also, having a weight margin once fielded will place the program in a better position to enhance the platform more readily over its anticipated 40-year service life.

• Ensuring that the technologies being developed for integration into the selected VXX platform mature as needed and that integration risk mitigation efforts are adequately planned, resourced, and executed.

Failing to address these challenges could impair the program's ability to stay on track and delay replacement of the in-service helicopter fleet, which is currently stressed at times to meet demand. Additionally, in our prior reports we described both VH-71 lessons learned and acquisition best practices that, if heeded, should help the program remain on track. DOD provided written comments on a draft of this report; the comments are reprinted in appendix II. In its comments, DOD stated that it would ensure that mitigations are in place to address potential risk areas. It believes its efforts are aligned with GAO's best practices and the recommendations in GAO's 2011 report on the program and plans to continue to monitor program progress in view of these standards. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; and the Secretary of the Navy. This report also is available at no charge on GAO's website at http://www.gao.gov. Should you or your staff have any questions on the matters covered in this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

The cost estimating best practices associated with each of the four characteristics we used in judging the AOA cost estimate are as follows.

Comprehensive:
• The cost estimate includes all life cycle costs.
• The cost estimate completely defines the program, reflects the current schedule, and is technically reasonable.
• The cost estimate work breakdown structure (WBS) is product-oriented, traceable to the statement of work/objective, and at an appropriate level of detail to ensure that cost elements are neither omitted nor double-counted.
• The estimate documents all cost-influencing ground rules and assumptions.

Well documented:
• The documentation captures the source data used, the reliability of the data, and how the data were normalized.
• The documentation describes in sufficient detail the calculations performed and the estimating methodology used to derive each element's cost.
• The documentation describes step by step how the estimate was developed so that a cost analyst unfamiliar with the program could understand what was done and replicate it.
• The documentation discusses the technical baseline description, and the data in the baseline are consistent with the estimate.
• The documentation provides evidence that the cost estimate was reviewed and accepted by management.

Accurate:
• The cost estimate results are unbiased, not overly conservative or optimistic, and based on an assessment of most likely costs.
• The estimate has been adjusted properly for inflation.
• The estimate contains few, if any, minor mistakes.
• The cost estimate is regularly updated to reflect significant changes in the program so that it always reflects current status.
• Variances between planned and actual costs are documented, explained, and reviewed.
• The estimate is based on a historical record of cost estimating and actual experiences from other comparable programs. 
Credible:
• The cost estimate includes a sensitivity analysis that identifies a range of possible costs based on varying major assumptions, parameters, and data inputs.
• A risk and uncertainty analysis was conducted that quantified the imperfectly understood risks and identified the effects of changing key cost driver assumptions and factors.
• Major cost elements were cross-checked to see whether results were similar.
• An independent cost estimate was conducted by a group outside the acquiring organization to determine whether other estimating methods produce similar results.

Key contributors to this report were Bruce H. Thomas, Assistant Director; Jerry W. Clark, Analyst-in-Charge; Bonita J.P. Oden; Karen A. Richey; Jennifer K. Echard; Tisha D. Derricotte; Marie P. Ahearn; Hai V. Tran; and Robert S. Swierczek.

The VXX is a Navy program to develop a replacement for the current fleet of presidential helicopters. The Ike Skelton National Defense Authorization Act for Fiscal Year 2011 directed GAO to review and report annually to the congressional defense committees on the program. GAO has reported on the program twice previously. The first report identified major lessons learned from a prior terminated program that should be applied in the follow-on program. The second covered the program's progress, upgrades to the existing helicopters, and plans for moving the program forward. This is the last of the required reports. It discusses (1) the program's progress over the past year, particularly regarding evaluation of alternatives, and (2) DOD's efforts to develop key technologies for the VXX aircraft. GAO examined program documents; interviewed officials; and compared the AOA with the elements GAO has previously reported are needed for a robust AOA and with cost estimating and analysis standards. GAO also assessed the Navy's approach to developing key technologies and the progress made. The Navy made progress in the past year in establishing a sound VXX business case that reflects a rational balance among requirements, costs, and schedule. In 2012, the Navy completed an updated Analysis of Alternatives (AOA) based on refined requirements and an acquisition approach that would leverage mature technologies from outside the program onto an in-production commercial or military airframe—allowing the program to begin in the engineering and manufacturing development phase of the Department of Defense's (DOD) acquisition process. The 2012 AOA reflected additional trade-offs made among cost, schedule, risk, and performance. Some key performance requirements changed from the terminated VH-71 program to the VXX. According to the AOA, using this approach would reduce investment cost by approximately $1.5 billion (19.7 percent) and shorten the schedule by about 18 percent from the approach anticipated in 2010, which included more time and cost to develop technology within the program. DOD's Director of Cost Assessment and Program Evaluation deemed the AOA sufficient to inform future acquisition decisions, and the Under Secretary of Defense for Acquisition, Technology and Logistics approved the program to move forward toward a decision to begin engineering and manufacturing development. GAO's review of the AOA found it to be sufficient for this phase of the acquisition. DOD's efforts to ensure key technologies are ready for integration into VXX aircraft are also making progress. The Navy's acquisition approach relies on the government providing mature technologies for integration into an in-production aircraft selected for the VXX program. 
These technologies either exist or are in development. Their use will be an important factor in achieving the reduced cost and schedule the Navy seeks. While the program reports that these efforts are on track and assesses the risks of integration as low, it is possible that key technologies may not be realized as planned or be as easy to integrate as anticipated. To mitigate integration risk, the Navy is making use of an integration laboratory and plans to demonstrate key technologies in a test aircraft. Building on these decisions, the program will have to manage challenges in a number of areas, including holding the line on VXX requirements, controlling helicopter weight growth, and ensuring that efforts to mitigate integration risks are adequately planned, resourced, and executed. GAO is not making recommendations in this report. DOD stated that it would ensure that mitigations are in place to address potential risk areas. It believes its efforts are aligned with GAO's best practices and the recommendations in GAO's 2011 report on the program and plans to continue to monitor program progress in view of these standards. |
At the federal level, the cleanup of hazardous waste sites is primarily addressed under the Superfund and RCRA corrective action programs. The Superfund program is directed primarily at addressing contamination resulting from past activities at inactive or abandoned sites or from spills that require emergency action. The RCRA corrective action program primarily addresses contamination at operating industrial facilities. In addition to these cleanup response programs, another RCRA program—the closure/post-closure program—is designed to prevent environmental contamination by ensuring that hazardous waste facilities are closed in a safe manner and monitored after closure to the extent necessary to protect human health and the environment. CERCLA created the Superfund program, under which EPA may compel parties statutorily responsible for contaminated sites to clean them up or to reimburse EPA for its cleanup costs. In many cases, liable parties have met their cleanup responsibilities under Superfund. For example, EPA has reported that, as a result of its enforcement activities, liable parties participate in cleanup work at about 70 percent of the sites on the Superfund National Priorities List (NPL), EPA’s list of seriously contaminated sites. However, in some cases, parties responsible for the contamination cannot be identified (for example, at long-abandoned landfills where many parties may have dumped hazardous substances) or the parties do not have sufficient financial resources to perform or pay for the entire cleanup. In the latter case, EPA often settles environmental claims with businesses for less than the cleanup costs if paying for the cleanup would present “undue financial hardship,” such as depriving a business of ordinary and necessary assets or resulting in an inability to pay for ordinary and necessary business expenses. (EPA said it also often settles environmental claims for less than the total cleanup costs if the agency believes making the business pay the full cost would be inequitable.) Further, when parties file for bankruptcy protection, EPA’s recovery of cleanup costs may be reduced or eliminated, particularly when there are few other parties with cleanup liabilities at the Superfund site. To help EPA pay for cleanups and related program activities, the Superfund law established a trust fund. Among other things, the trust fund can be used to pay for cleaning up sites on the NPL. Cleaning up NPL sites has often been a very lengthy process—in many cases, it has taken 10 to 20 years. The cleanup process begins when EPA either conducts cleanup studies for the sites or negotiates with liable parties to conduct such studies. These studies identify the types and quantities of contamination at sites and consider alternative cleanup remedies. EPA then chooses the cleanup remedies it considers most appropriate and performs the cleanups itself or negotiates settlements with liable parties for them to finance and perform cleanups. Historically, a tax on crude oil and certain chemicals and an environmental tax on corporations were the primary sources of revenues for the Superfund trust fund; however, the authority for these taxes expired in 1995. The trust fund continues to receive revenues in the form of recoveries of Superfund-related costs from liable parties, interest on the fund balance, fines and penalties, and general revenue fund appropriations that supplement the trust fund balance. 
Since fiscal year 2000, the Superfund program has increasingly relied on revenue from general revenue fund appropriations. For fiscal year 2004, for example, EPA's Superfund appropriation of $1.2 billion was from general revenue only. In contrast, through the 1990s, Superfund trust fund revenues other than general fund appropriations provided more than $1 billion a year in program funding. Further, appropriations for the Superfund program (from both general revenue and trust fund revenues) have decreased from $1.9 billion to $1.2 billion, in constant 2003 dollars, from fiscal year 1993 to fiscal year 2004. Although funding for the Superfund program has decreased, sites continue to be added to the NPL to address serious risks to health and the environment. As of September 30, 2004, there were 1,236 NPL sites. According to a recent study, a majority of these sites will cost less than $50 million each to clean up, with an average cost of about $12 million. However, there are 142 Superfund megasites—NPL sites whose cleanup is estimated to cost more than $50 million each—for which the average cost is expected to be $140 million. According to EPA estimates, the vast majority of costs for most NPL sites will be incurred in getting to the construction completion stage. EPA officials said that 933 NPL sites had reached the construction complete stage as of July 2005. Despite EPA's significant progress, a backlog of NPL sites is ready to proceed to construction of a long-term cleanup remedy—which is typically the most expensive stage of a cleanup. The decrease in Superfund funding in recent years and this backlog of sites ready for additional funding may make the already lengthy NPL cleanup process even lengthier. According to EPA, many sites in this backlog are large, complex, and costly. Further complicating the funding situation, as we reported in 2003, the number of sites that do not have an identifiable nonfederal source to fund their cleanup is growing, and several factors indicate the potential for additional growth in the future. For example, officials in 8 of the 10 EPA regions noted that they expected more liable parties to declare bankruptcy in the future. Thus, the number of taxpayer-funded cleanups could increase, especially at sites where there are no (or few) other liable parties. In contrast to the Superfund program, the corrective action program under the Resource Conservation and Recovery Act of 1976 (RCRA), as amended, primarily addresses contamination at operating industrial facilities. Among other things, RCRA regulates the management of hazardous waste from "cradle to grave"—that is, from the time hazardous waste is created and throughout its lifetime, even after it enters a landfill or is incinerated. While EPA has overall responsibility for implementing the act, and retains enforcement authority, it has authorized most states to administer all or part of RCRA's hazardous waste program. RCRA requires owners and operators of hazardous waste facilities—those used to treat, store, or dispose of hazardous waste and often called "TSDFs"—to obtain operating permits specifying how hazardous waste will be safely managed at the facilities. Owners and operators of hazardous waste facilities are also required to prepare closure plans and cost estimates for removing or securing wastes, decontaminating equipment, and other activities required when they eventually cease operations—such as capping a landfill when it is full. 
In addition, under the RCRA corrective action program, these owners or operators must clean up contamination occurring at their facilities. This is consistent with one of RCRA's primary purposes, which is to ensure the proper management of hazardous waste so as to minimize present and future health and environmental threats. A 2002 EPA study on the implementation of RCRA's corrective action program reported that nearly 900 facilities had undertaken cleanup measures and/or had selected a cleanup remedy by 1997. EPA reported that spills were a major source of contamination at over half of the facilities. The study suggests that the industries with a high risk for contamination requiring cleanup under the corrective action program include chemical manufacturing, wood preserving, petroleum refining or other manufacturing industries, and the service sector that includes dry cleaning. In addition, EPA reported that required cleanups under the RCRA corrective action program could be as costly as cleanups at many Superfund sites—EPA estimated that between 2 and 16 percent of the nearly 900 RCRA facilities would have total cleanup costs in excess of $50 million. RCRA's closure/post-closure and corrective action programs regulate facilities that treat, store, or dispose of hazardous wastes. Importantly, however, RCRA does not regulate some facilities that make or use hazardous substances that are not considered listed or characteristic hazardous wastes under RCRA but that nevertheless may, in some circumstances, present a high risk for environmental contamination. Businesses may generally store waste on site in compliance with specified requirements for up to 90 days without needing a permit or being subject to the regulations governing hazardous waste storage facilities. Thus, for example, chemical companies that manufacture and sell highly hazardous substances, such as chlorine products, may not be required to obtain a RCRA permit if they do not store their hazardous waste—even though the products themselves may pose environmental risk. RCRA authorizes EPA to issue regulations for the operation of hazardous waste treatment, storage, and disposal facilities, including such additional qualifications as to financial responsibility as may be necessary or desirable. EPA has issued regulations under the closure/post-closure program requiring that owners and operators of certain hazardous waste facilities provide evidence to EPA, or a state regulator, that they have sufficient financial resources to clean up as required for proper closure and, if necessary, for post-closure care. EPA regulations also require a facility seeking a permit to provide financial assurances to cover any corrective action responsibilities identified in the permit. The principal purpose of financial assurance requirements is to ensure that the parties responsible for environmental contamination assume the costs of cleanup rather than forcing the general public to pay for or otherwise bear the consequences of businesses' environmental liabilities. That is, financial assurances can help ensure that resources are available to fulfill the businesses' cleanup obligations as they arise. The fact that the parties responsible for the contamination are also responsible for cleaning it up encourages businesses to adopt responsible environmental practices. 
Under the RCRA closure and post-closure and other EPA programs, financial assurances can include, among other things, bank letters of credit that guarantee payment by the financial institutions that issue them and, under certain conditions, guarantees that businesses or their parent corporations have the financial wherewithal to meet their obligations. While EPA has not issued financial assurance regulations under the RCRA corrective action program, EPA typically requires that owners and operators provide financial assurances for cleanups of spills or other contamination at hazardous waste facilities in administrative orders the agency issues under this program. Also, as noted above, EPA regulations require a facility seeking a permit to provide financial assurances to cover any corrective action responsibilities identified in the permit. Since, as discussed above, generators of hazardous waste generally are not subject to the RCRA corrective action and closure and post-closure requirements, they are not required to provide financial assurances for any RCRA cleanups that may be needed as a result of their operations. EPA also has not issued financial assurance regulations for the Superfund program, but in some cases does require liable businesses to obtain financial assurances demonstrating their ability to pay cleanup costs for existing contamination at Superfund sites. Specifically, when EPA reaches settlement agreements with parties regarding site cleanups, the agency generally requires the businesses to provide financial assurance demonstrating their ability to pay for the agreed-upon cleanup activities. In this regard, EPA has included financial assurance requirements in its "model agreements" for staff to use in negotiating Superfund settlements. However, if EPA and a liable party do not reach a settlement, there is no regulatory requirement under Superfund that the party provide financial assurance that it will be able to pay its cleanup liabilities. There is, however, a statutory mandate under the Superfund law, which EPA has not implemented, requiring the agency to issue financial assurance regulations for facilities that handle hazardous substances. As discussed further in this report, these regulations could cover a number of facilities not currently covered by financial assurances under RCRA. Businesses that may incur environmental liabilities under Superfund or RCRA run the gamut in terms of organization type and size—they include large U.S. and international corporations as well as small businesses, such as sole proprietorships. These entities may be publicly held—that is, their stock is traded on public stock exchanges—or they may be closely (privately) held. The different forms of organization—such as corporations and partnerships—have different legal and tax attributes. A corporation is a legal entity that exists independently of its owners or investors, called shareholders. A key attribute of corporations is that they limit the liability of their owners, the shareholders. That is, corporations are liable for the debts and obligations of their businesses, while the shareholders are liable only for what they have invested. In contrast to shareholders, the owners of unincorporated businesses, such as partnerships and sole proprietorships, are generally liable for all debts and liabilities incurred by their businesses but also have tax advantages that corporation owners do not. 
However, another unincorporated organizational form that is relatively new but is becoming more popular for businesses of all sizes—the limited liability company—provides owners limited liability similar to that of a corporation as well as tax treatment similar to that of partnerships and sole proprietorships. Like many corporations, these "hybrids" can have any number of investors (owners), and the investors may include partnerships, corporations, individuals, and others. In general, more financial and ownership information is available about publicly held corporations, which must comply with more federal reporting requirements, such as those of the U.S. Securities and Exchange Commission (SEC), than about privately held corporations. Information about limited liability companies, including those in offshore locations such as the Bahamas, may be limited or unavailable. Information may also be limited or unavailable about special purpose entities—legal entities created to carry out a specified purpose or activity, such as to consummate a specific transaction or a series of transactions with a narrowly defined purpose. Some large corporations, such as Enron, allegedly have used special purpose entities to hide the true financial condition of the companies. Following the bankruptcy of Enron and other corporate failings, the Congress passed the Sarbanes-Oxley Act of 2002 to protect investors by improving the accuracy and reliability of corporate disclosures. Among other things, the law includes requirements governing financial disclosures and audits for publicly held corporations. In addition, in 2003 the Financial Accounting Standards Board, the organization that establishes financial accounting and reporting standards for the private sector, issued revised guidance on accounting for special purpose entities and is currently working on further accounting guidance for them. While some financially distressed businesses simply cease operations, others file for bankruptcy protection. The bankruptcy code is a uniform body of federal law that governs all bankruptcy cases and gives debtors—individuals or businesses—a fresh start or some measure of relief from burdensome debts. Filing a bankruptcy petition gives the petitioner some immediate relief in the form of an automatic stay, which generally bars creditors from commencing or continuing any debt collection actions against the entity while it is in bankruptcy. In bankruptcy, debt can be placed in one of three broad categories: secured, priority unsecured, and general unsecured, which are generally satisfied in that order when a debtor's assets are distributed in a bankruptcy proceeding. The actual, necessary costs and expenses of preserving the bankruptcy estate are administrative expenses, which must be paid in full before any other class of claims is paid. By definition, administrative expenses must be incurred post-petition because the bankruptcy estate is created by the filing of the bankruptcy petition. Response costs incurred by EPA under the Superfund law post-petition with respect to property of the estate may be entitled to administrative priority. However, environmental response costs at property the debtor does not own are typically considered general unsecured debts, and often are paid at pennies on the dollar—if at all—in a bankruptcy proceeding. The two types of bankruptcy cases most relevant to EPA are chapter 7 business liquidations and chapter 11 corporate reorganizations. 
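To make the priority scheme just described concrete, the following sketch walks through a simplified distribution of a hypothetical debtor's assets, paying secured, priority unsecured, and general unsecured claims in that order and prorating within a class when the estate runs short. All figures are invented for illustration; actual distributions also involve administrative expenses, subclasses of priority claims, and court rulings.

# Hypothetical illustration of the claim-priority "waterfall" described above.
# Dollar amounts are invented; real distributions also involve administrative
# expenses, subcategories of priority claims, and court rulings.

def distribute(estate, claim_classes):
    """Pay claim classes in priority order, prorating within a class if the
    remaining estate cannot cover the class in full."""
    payouts = {}
    for _class_name, claims in claim_classes:
        total_claimed = sum(claims.values())
        if estate >= total_claimed:
            ratio = 1.0
        elif total_claimed > 0:
            ratio = estate / total_claimed
        else:
            ratio = 0.0
        for creditor, amount in claims.items():
            payouts[creditor] = amount * ratio
        estate = max(estate - total_claimed, 0.0)
    return payouts

claim_classes = [
    ("secured", {"Bank (secured loan)": 6_000_000}),
    ("priority unsecured", {"Employee wages": 500_000, "Taxes": 1_500_000}),
    ("general unsecured", {"EPA cleanup claim": 10_000_000,
                           "Trade creditors": 4_000_000}),
]

for creditor, paid in distribute(8_500_000, claim_classes).items():
    print(f"{creditor:22} paid ${paid:,.0f}")
# With an $8.5 million estate, secured and priority claims are paid in full;
# the remaining $0.5 million is shared pro rata among $14 million of general
# unsecured claims, so EPA recovers roughly 3.6 cents on the dollar.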
Businesses file for bankruptcy under chapter 7 when they are ceasing operations. While some financially distressed businesses cease operations without the formality of bankruptcy proceedings, those that file under chapter 7 use a court-supervised procedure in which a trustee collects the assets of the business (the bankruptcy estate), reduces them to cash, and makes distributions to creditors. In many chapter 7 cases, however, few or no assets are available for distribution. Alternatively, businesses facing financial difficulties may want to continue to operate. These businesses can use the chapter 11 bankruptcy process to restructure unmanageable debt burdens. Most of the bankruptcy cases in which EPA pursues claims in court are chapter 11 reorganizations. EPA's goals in participating in chapter 11 cases include collecting environmental costs owed to the government, ensuring that the debtor complies with applicable environmental laws and regulations, and ensuring that cleanup obligations are satisfied. The chapter 11 debtor generally has 120 days during which it has the exclusive right to file a plan of reorganization. However, the bankruptcy court can extend or reduce this period. The debtor must provide creditors with a disclosure statement containing information adequate to enable creditors to evaluate the plan, including how the existing debts will be paid. The court ultimately approves (confirms) or disapproves the plan of reorganization. Confirmation of the plan generally discharges eligible debts that were incurred prior to the plan's confirmation. Certain cleanup obligations, however, such as future cleanup liabilities under RCRA, are not dischargeable under bankruptcy. The debtor normally goes through a period of consolidation and emerges with a reduced debt load and a reorganized business. However, many chapter 11 reorganizations ultimately fail, with the reorganized businesses subsequently going through liquidation. Bankruptcy cases are heard by U.S. bankruptcy judges in 90 federal bankruptcy courts, which are under 12 regional federal appellate circuit courts. In many instances, applicable law on key questions is unsettled and interpretations may vary among the circuits. For example, interpretations may vary concerning the extent to which post-petition response costs incurred by EPA under CERCLA with respect to property of the bankruptcy estate may be entitled to administrative priority. Businesses may generally file for bankruptcy protection in a bankruptcy court in a state in which either (a) their facilities are located or (b) they are incorporated. In fact, many businesses file for bankruptcy protection in the second and third circuits, which include Delaware and the Southern District of New York. EPA has established a bankruptcy work group composed of several EPA headquarters staff members, along with one or two staff members from each of the 10 regions, many of whom are Superfund enforcement attorneys who handle bankruptcy matters as a collateral duty. The work group helps identify bankruptcy cases in which EPA may have a claim and assists in resolving other issues that involve contaminated property or otherwise affect EPA's interests in bankruptcies, among other things. In addition, several Justice Department attorneys participate in the work group. Information on the number of bankruptcies involving environmental liabilities is very limited. 
For example, while the bankruptcy courts collect data on the number of businesses that file for bankruptcy each year and the Administrative Office of the U.S. Courts maintains these data in a national database, neither the courts nor EPA nor private providers of business data collect information on how many of these businesses have environmental liabilities. Thus, although national bankruptcy data show that 231,630 businesses operating in the United States filed for bankruptcy in fiscal years 1998 through 2003—an average of about 38,600 businesses a year—how many of these had environmental liabilities is not known. Currently, information on bankrupt businesses with federal environmental liabilities is limited to data on the bankruptcy cases that the Justice Department has pursued in court on behalf of EPA and other agencies, such as the Department of the Interior. In fiscal years 1998 through 2003, the Justice Department filed 136 such claims, 112 of which related to hazardous waste liabilities under Superfund and RCRA. The gap in data between businesses that file for bankruptcy and those with environmental liabilities that the Justice Department has pursued in court is large: what is not known is how many of the other 231,494 businesses that filed for bankruptcy during this time period had environmental liabilities. EPA may learn of bankruptcy filings that involve environmental liabilities in various ways—for example, from the businesses themselves or from other federal or state agencies. However, the most systematic notification is from the bankruptcy courts. These courts mail notices of filings to EPA when the agency is listed as a creditor in the bankruptcy filing. Although EPA reviews information about the businesses identified in the bankruptcy notices to determine whether it should request the Justice Department to pursue an environmental claim in the bankruptcy proceedings, the agency does not keep records on the bankruptcy filings it has researched, its basis for deciding whether to pursue a claim related to environmental liabilities, or the characteristics of the businesses involved, such as industry type. Among the factors EPA considers in deciding whether to pursue a claim in bankruptcy court is whether the debtor has any assets remaining to be divided among creditors. In many cases, particularly when the company is ceasing operations under chapter 7, EPA decides not to pursue a claim in bankruptcy court because it concludes that the business involved has few, if any, remaining assets. Similarly, EPA may choose not to pursue a claim when the claim is small relative to the resources needed for the government to pursue it. According to EPA officials, the agency does not routinely collect or maintain information on the bankruptcy cases it reviews but decides not to pursue in bankruptcy because of the volume of bankruptcy notices it receives—including many that do not involve EPA liabilities—and the limited resources available to track such information. While EPA would incur a cost to routinely collect and maintain information about bankruptcies involving environmental liabilities—including those that EPA decides not to pursue—such information would be useful as a management tool, for example, in identifying (1) the types of businesses that have avoided or limited their environmental liabilities by filing for bankruptcy protection and (2) individual business owners who have a history of filing for such bankruptcy protection. 
The 112 companies with hazardous waste liabilities that the Justice Department pursued in bankruptcy court between 1998 and 2003 represent a variety of industries, including some that could be expected to have significant environmental liabilities, such as chemical companies, metal finishers, hazardous waste recyclers, and paper mills. Other companies, such as Fruit of the Loom and Kmart Corporation, represent industries not immediately associated with a great likelihood of creating environmental liabilities. Most of the companies for which the Justice Department filed a bankruptcy claim on behalf of EPA were undergoing reorganization in bankruptcy rather than liquidating and going out of business. Further, 100 of the cases involved liabilities under the Superfund program, and 12 involved liabilities under RCRA. As of February 2005, 35 of the 112 bankruptcy cases the Justice Department pursued had essentially been completed, and more than half—59—were still ongoing. For example, W. R. Grace and Company and many of its subsidiaries filed for bankruptcy under chapter 11 in April 2001, and this bankruptcy case was still under way as of July 2005. The remaining 18 cases were dismissed by the bankruptcy court for various reasons. In such cases, EPA and other creditors are no longer barred from pursuing claims against these businesses directly. However, EPA may have little success in recovering costs or ensuring compliance with environmental responsibilities if these businesses are, in fact, financially distressed. Over time, the current information gap between businesses filing for bankruptcy and the subset of those for which the Justice Department files an environmental claim in bankruptcy court may be reduced because of new filing requirements that became effective recently. Since 2003, bankruptcy petitions and the accompanying Statement of Financial Affairs have required companies filing for bankruptcy to provide information identifying sites they own or possess that have actual or potential environmental problems, including any sites that pose or allegedly pose an imminent threat to public health and safety. However, this additional environmental information is not yet readily available electronically from the 90 bankruptcy courts in the United States. That is, the systems cannot be queried to identify filings with information on sites with environmental liabilities. To help address this, EPA has sought assistance from the U.S. trustees who participate in all bankruptcy cases except those filed in Alabama and North Carolina. In August 2004, the Acting General Counsel, Executive Office for U.S. Trustees, sent a memorandum to all U.S. trustees instructing them to coordinate with EPA in bankruptcy cases involving contaminated property. The trustees are to alert the appropriate EPA contact by email when they become aware of an affirmative response to the questions asking petitioners to identify sites with actual or potential environmental liabilities, and to attach the bankruptcy petition and appropriate schedules. EPA officials told us that they have received some notifications from U.S. trustees since this August 2004 memorandum. Because these environmental disclosure requirements are relatively new, little is known about the thoroughness and accuracy of the data on environmental liabilities that companies in bankruptcy have submitted to the courts. 
We note that the information businesses provide about their environmental liabilities would likely be subject to the same data quality issues as other self-reported data. For example, studies on other bankruptcy filing information from debtor companies, such as information on assets and liabilities, have found that such self-reported data tend to be flawed. Consequently, it is too soon to know the extent to which this additional information provided to bankruptcy courts will help fill the existing data gap relating to bankrupt companies with environmental liabilities. In its efforts to hold businesses responsible for their cleanup obligations, particularly when they are in bankruptcy or other financial distress, EPA faces significant challenges, often stemming from the differing goals of environmental laws that hold polluting businesses liable for cleanup costs and other laws that, in some cases, allow businesses to limit or avoid responsibility for such liabilities. Further, the complexities of the federal bankruptcy code and its associated procedures, along with the complexities of the environmental cleanup process and EPA's many information needs when dealing with bankruptcies, present challenges to EPA's ability to hold businesses responsible for their environmental cleanup obligations. A key legal attribute of corporations is that the liability of their owners—the shareholders—is limited. That is, corporations are liable for the debts and obligations of their businesses, while the shareholders are liable only for what they have invested. Aimed at encouraging shareholder investment to generate capital, the limited liability principle enables corporations to engage in enterprises that might not attract sufficient funding if shareholders were not protected in this way. Shareholders generally include individuals, corporations, and unincorporated business forms, such as partnerships. Many businesses take advantage of this limited liability principle to protect their assets by using a parent and subsidiary corporate structure in which the subsidiary is largely or wholly owned by the parent corporation—in other words, the parent is the subsidiary's shareholder. For example, using this structure, a subsidiary that is engaged in a business that is at risk of incurring substantial liability, such as mining or chemical manufacturing, can protect its assets by transferring the most valuable ones—such as equipment and patents—to a related entity, such as the parent or another subsidiary engaged in less risky endeavors. The high-risk subsidiary can continue to use the transferred assets, as appropriate, by leasing or renting them. It has become common practice for experts in asset protection to recommend that corporations protect their assets in this way. A goal is to continually draw down on the subsidiary's remaining assets, such as cash from the sale of equipment, to pay operating expenses, including rental and lease payments and salaries. If a liability arises, under the limited liability principle, the high-risk subsidiary's remaining assets may be reached—but generally not those of the parent corporation or other subsidiaries to which assets were transferred. And if the subsidiary incurs an environmental liability and does not have sufficient resources to fund the cleanup, the burden for the cleanup may be shifted to taxpayers. 
For example, the subsidiary could plead financial hardship, and under its ability-to-pay process, EPA may reduce the amount of funding the subsidiary has to provide, with the balance coming from the Superfund trust fund in the absence of other liable parties. Alternatively, the subsidiary could seek reorganization under the bankruptcy act, which could result in the discharge of the liability. While these asset protection strategies are generally legal depending on the circumstances, it is generally unlawful to transfer assets with the intent to hinder or defraud creditors. Under federal bankruptcy law, a transfer may be invalidated if it occurred within 1 year prior to the bankruptcy filing and if the transfer (1) occurred with the intent to defraud creditors or (2) in certain circumstances yielded less than reasonably equivalent value for the debtor. In addition, most states have enacted the Uniform Fraudulent Transfer Act, which contains prohibitions on fraudulent transfers analogous to the bankruptcy provision. Creditors generally must seek to invalidate such transfers within 4 years of their occurrence. Perhaps for these reasons, publications by financial and legal advisors have suggested that asset transfers be implemented in stages over time to avoid calling attention to them. The goal is to make them indistinguishable from ordinary business decisions and transactions and to implement them as early as possible, preferably well in advance of claims. From an asset protection standpoint, this approach makes sense because it helps protect transfers from legal challenges by the mere passage of time. However, the use of such strategies by parties liable for environmental cleanups presents a significant challenge to EPA in obtaining cleanup costs because it is hard for the agency to know about such transfers, much less obtain sufficient information to successfully challenge them within the time permitted by law or to challenge businesses’ claims that paying the cleanup costs represents an undue economic hardship. Further, because businesses typically are aware of Superfund liabilities for many years before they actually have to fund the cleanups, they have ample time to reorganize and structure themselves in ways that can limit the expenditures they may be required to make in the future. For example, it is not unusual for it to take 10 or more years in total for sites to be placed on the National Priorities List, for cleanup remedies to be selected, and for the cleanups to be conducted. In addition, to protect assets even further, businesses may be structured with multiple organizational layers—beyond the two-tier parent/subsidiary construct—as well as with different types of corporate entities, such as limited liability companies. As outlined in a recent book on asset protection, dispersing assets among as many different types of entities and jurisdictions as possible is also a useful way to protect them from creditors. The goal of this approach is to create complex structures that, in effect, provide multiple protective trenches around assets, making it challenging and burdensome for creditors to pursue their claims. Because it is easier and less costly to set up and maintain limited liability companies than corporations, this relatively new hybrid form of business organization facilitates the establishment of complex, multi-layered businesses using corporations and limited liability companies. 
Creditors may go to court to obtain the assets of a corporation’s shareholders (including, for example, a parent corporation) to satisfy the corporation’s debts. This is called “piercing the corporate veil,” and it is difficult to achieve. EPA occasionally attempts to secure cleanup costs from a parent corporation under a veil-piercing theory. However, these cases are extremely complex and resource intensive, according to EPA officials. The strategy recommended to businesses to use multiple organizational layers to protect assets recognizes this challenge and seeks to make any challenge as difficult and costly as possible. Along these lines, an EPA enforcement official—who said that EPA is seeing more and more cases in which companies are restructuring using various layers and thereby shielding corporate assets—noted that the “transaction cost” for EPA to try to follow such cases to ensure that these companies satisfy their environmental liabilities can be prohibitively high. Finally, some EPA officials stated that a 1998 Supreme Court case has further complicated efforts to obtain cleanup costs from parent corporations. Under the Superfund law, past and present owners and operators are among the parties generally liable for cleanup costs at a contaminated site. The Supreme Court decision in United States v. Bestfoods held that a corporate parent could be liable (1) indirectly (as an owner) if the corporate veil could be pierced; and (2) directly (as an operator) if the corporate parent actively participated in, and exercised control over, the operations of the contaminated facility itself. The Bestfoods decision confirmed that the government could hold a parent corporation directly liable under the Superfund law for a subsidiary’s cleanup costs under certain circumstances. However, EPA officials noted that prior to the Bestfoods decision, some courts had found a parent corporation liable where it exercised control over the subsidiary even if the parent did not control the contaminated facility. In addition, while the Bestfoods case recognized that the government could hold a parent corporation directly liable under the Superfund law, these officials stated that the case also helped establish a road map for observing corporate formalities that companies could follow to insulate themselves from this liability. An obvious challenge that EPA faces when it attempts to ensure that businesses in bankruptcy carry out their environmental cleanup obligations is that the businesses may have little or no financial resources to pay EPA or any other creditors. However, EPA faces further challenges when companies file for bankruptcy, stemming from the differing goals of the bankruptcy code and federal environmental laws, the complexities of bankruptcy procedures and environmental cleanup programs, and EPA’s many information needs when dealing with bankruptcies. Federal bankruptcy and environmental laws seek to address vastly different problems using solutions that frequently come into conflict. Specifically, while environmental laws generally impose cleanup costs on the parties responsible for pollution, one purpose of bankruptcy law is to give the debtor a fresh start by discharging existing claims against the debtor, including environmental claims in some cases. 
For example, when businesses with liability under the Superfund law file for bankruptcy protection, payment of cleanup costs may be nonexistent or substantially reduced in some cases, depending in part on the type of financial assurance the businesses agreed to provide under settlement agreements to meet the obligations. As a result, cleanup costs may be shifted to the general public, especially when the site has no other liable parties. The inherent conflict between the goals of environmental cleanup laws and the bankruptcy code represents only the first of several key challenges EPA faces in attempting to hold businesses in bankruptcy responsible for their environmental cleanup obligations. For example, conflicts relating to the timing of events can have a significant impact on EPA's ability to recover costs in bankruptcy proceedings. One timing issue relates to the interpretations by various bankruptcy courts of when an environmental liability arises as a claim subject to discharge in bankruptcy. For example, bankruptcy courts in the Second Circuit—where many chapter 11 bankruptcies are filed—generally hold that a claim arises when a release of a hazardous substance into the environment (such as a spill) occurs. In many bankruptcy cases involving responsible parties under Superfund, the relevant releases took place prior to the filing of the bankruptcy petition, making all claims for such releases subject to discharge even if EPA has not yet incurred cleanup or other response costs. Another challenge EPA faces is the need to provide timely estimates of cleanup costs that will form the basis for claims. Bankruptcy courts aim to resolve cases expeditiously and set specific time frames for proceedings, but it can be difficult for EPA to estimate the dollar amount of cleanup work needed at sites within the court's time frames. In particular, Superfund sites often require long-term investigations both to identify the nature and extent of contamination and to develop cleanup requirements and cost estimates. For many Superfund NPL sites, these processes may take a number of years. Depending upon where EPA is in these processes, it may be challenging to provide an estimate of future cleanup costs. For example, the extent of contamination may still be unknown or the cleanup remedy may not yet have been determined. Nonetheless, the Justice Department must submit a "proof of claim" in the bankruptcy court in order for EPA to have a chance for any cost recovery. With incomplete information regarding future cleanup costs, EPA may underestimate these costs in its claims to bankruptcy courts. Further, if EPA provides a cost estimate that the court rejects because it considers the estimate to be speculative, or if EPA does not have the time or resources to develop an estimate to support its bankruptcy claim, the government can lose any opportunity to recover at least some of the cleanup costs for such sites. Provided that EPA is able to meet these challenges and develops a supportable claim for the Justice Department to file in the bankruptcy case, provisions of the bankruptcy code may result in the claim being assigned a low status in the distribution of the debtor's assets. Many of EPA's claims may be considered general unsecured claims—the last to be paid after claims for creditors holding secured and priority unsecured claims have been paid. 
Further, although EPA may submit a claim for environmental penalties and/or fines, under chapter 7, these claims may rank even lower than most other unsecured claims. In some cases, a bankruptcy judge may deem certain EPA claims to be entitled to priority as administrative expenses—for example, if the expenses were incurred to address conditions endangering public health and the environment. Often, however, insufficient funds are available from the bankruptcy estate to pay cleanup and/or closure costs, or the estate provides only "pennies on the dollar" of the claim amounts when a debtor's assets are distributed. In these cases, the responsibility for cleaning up a Superfund site or closing and monitoring an RCRA hazardous waste facility may fall to EPA or a state agency unless, for example, other liable parties pay the cleanup costs or sufficient financial assurances are in place to cover these costs. Another important challenge facing EPA in bankruptcy cases results from the automatic stay provision, which preserves the status quo during bankruptcy proceedings, both giving debtors a "breathing spell" from their creditors and preventing the piecemeal distribution of a debtor's remaining assets in ways that could be preferential to some creditors and detrimental to others. However, the bankruptcy code expressly allows an exemption from the automatic stay for a governmental unit to begin or continue a proceeding to enforce its police or regulatory power, or to carry out a court judgment (other than a money judgment) to enforce its police or regulatory power. If EPA can successfully argue that the environmental proceedings fall within this exception to the stay, it can take action in federal district court while the bankruptcy proceedings continue. If EPA is unsuccessful in avoiding the automatic stay, it must pursue the claim in the bankruptcy court, along with other creditors. The key to whether a court will permit an environmental action to avoid application of the automatic stay is how the court defines the phrase "money judgment." As we reported in 1986, the stay can interfere with efforts of federal and state agencies to ensure that owners carry out their environmental responsibilities, such as cleaning up and properly closing hazardous waste facilities according to RCRA requirements. For example, although companies undergoing liquidation under chapter 7 are required to comply with federal and state environmental laws to the same extent as any other party, they may argue that the automatic stay allows them to avoid expending funds to carry out compliance actions. Companies reorganizing under chapter 11 are also obliged to comply with environmental laws while they are in bankruptcy proceedings, even if doing so requires the debtor to incur additional expenses. Moreover, EPA enforcement officials noted that, during a company's period of reorganization under chapter 11, EPA can pursue administrative expense penalties if the company continues to operate in violation of environmental laws, and the agency has in some cases been successful in this regard. However, an EPA enforcement official also noted that the agency has limited leverage to ensure that such companies continue facility closures, site cleanups, and other environmental responsibilities during the bankruptcy proceedings—which can take years to complete—unless EPA can convince a bankruptcy judge that a company must carry out these activities to address an imminent threat to human health or the environment.
The automatic stay also prevents creditors, such as federal and state agencies, from immediately collecting on certain court judgments. Thus, while courts may order businesses to pay environmental fines and/or cleanup costs to EPA, the government's ability to collect these payments may be reduced or negated by bankruptcy filings. For example, in August 2003, W.R. Grace and Company, the primary liable party at the Libby Asbestos Superfund site in Libby, Montana, was ordered by a U.S. district court to reimburse EPA $54.5 million for costs the agency had incurred in investigating and conducting certain emergency cleanup actions at the site. (Total long-term cleanup costs at this site are expected to rise to at least $179 million.) However, because W.R. Grace filed for bankruptcy protection in 2001 and is protected by the automatic stay, the company does not have to pay this judgment until the reorganized company emerges from bankruptcy. Moreover, EPA officials noted that because any reimbursement of the $54.5 million will be subject to the repayment terms agreed to in the company's reorganization plan, it has not yet been determined how much the federal government will be reimbursed for these cleanup costs. However, according to the lead EPA attorney working on this case, it is likely that creditors, including EPA, will receive a substantial return in this bankruptcy case once the company's reorganization plan has been confirmed by the court. In the meantime, according to EPA, the agency continues to pay for and oversee the cleanup work to address the most hazardous conditions at the site, at an estimated cost to taxpayers of $18 million per year over the past several years. In evaluating bankruptcy filings to determine whether EPA should request that the Justice Department pursue cases in bankruptcy court, EPA faces further challenges because it does not consistently have accurate and readily available information on which to base these evaluations. As a result, EPA cannot be assured that it is aware of all relevant bankruptcy filings. EPA officials have acknowledged that the agency could miss identifying some relevant bankruptcy cases. According to the chair of EPA's bankruptcy work group, one of the more common reasons EPA is likely to miss identifying some relevant bankruptcies is that the debtor fails to include EPA on its list of creditors in bankruptcy filings, which means that EPA will not receive the notices that bankruptcy courts routinely send to creditors to inform them of the filings. In addition, EPA could also miss relevant bankruptcy cases for other reasons, including the following:

• Because businesses may change their names over time for various reasons—including reorganizations and mergers—and because a business filing for bankruptcy may be affiliated with a number of different company names, EPA staff may not recognize the business name or names cited in bankruptcy filings. In addition, owners of businesses sometimes file for bankruptcy in their own names, rather than in the business names, which EPA may be more likely to recognize.

• Data quality problems in EPA's Superfund database limit the usefulness of automated searches to match the businesses associated with the bankruptcy notices sent to EPA with businesses with environmental liabilities nationwide.
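A minimal sketch of the name-matching problem follows. The company names and the normalization logic are hypothetical illustrations, not a description of EPA's systems; the point is simply that an exact-match search fails when a debtor's current name differs from the name in a liability database, while even a crude similarity score can flag candidates for human review.

```python
# Illustrative only: why exact-name lookups miss renamed or affiliated
# debtors, and how a simple normalized similarity score can surface
# likely matches for manual review. All names are hypothetical.
from difflib import SequenceMatcher

liability_db = ["Olde Works Chemical Corporation", "Acme Metal Plating Co."]
bankruptcy_filer = "OW Chemical Holdings LLC (f/k/a Olde Works Chemical)"

def normalize(name: str) -> str:
    # Crude normalization: lowercase, strip punctuation, and drop common
    # corporate suffixes so superficial differences do not dominate.
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in name.lower())
    suffixes = {"inc", "llc", "lp", "co", "corp", "corporation", "company", "f", "k", "a"}
    return " ".join(w for w in cleaned.split() if w not in suffixes)

for candidate in liability_db:
    score = SequenceMatcher(None, normalize(bankruptcy_filer),
                            normalize(candidate)).ratio()
    print(f"{candidate}: similarity {score:.2f}")
# An exact match on the filer's current name would return nothing here;
# the similarity score at least surfaces the former name for a reviewer.
```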
Further, even if EPA staff search program and enforcement databases to identify contaminated sites associated with a company, the searches may not be reliable because the current name or names associated with the bankruptcy filing may not be reflected in EPA's databases. For this reason, some EPA staff do not routinely search these databases for such matches because the information is likely to be incomplete or outdated. However, EPA's most recent bankruptcy guidance, discussed later, recommends that staff search the Superfund and other relevant databases to help them determine whether an environmental claim or issue of interest is involved. EPA officials said that the agency has some difficulty identifying from its program and enforcement databases which companies have large liabilities, particularly when those liabilities are dispersed across states in several regions. As a result, certain companies in bankruptcy may not capture EPA's attention as being worthwhile cases for the government to pursue. Overall, EPA's current system of identifying bankruptcies of concern to the agency relies heavily on the availability of staff with knowledge of the companies and their related environmental liabilities to identify cases that the agency should pursue in bankruptcy court in time to meet the court's deadlines. Although the chair of EPA's bankruptcy work group believes that the agency's current approach to timely identification of relevant bankruptcies has worked well under these limitations, she acknowledged that EPA has no assurance that it has not missed some relevant bankruptcies. As discussed above, EPA does not maintain records on all bankruptcy cases that the agency has identified and researched, or on the reasons that cases were or were not pursued in bankruptcy court. Consequently, information to evaluate EPA's efforts in identifying and researching relevant bankruptcies is not available. Further, because the bankruptcies of small and medium-sized businesses are not as widely reported in the business press, EPA is more at risk of not identifying relevant bankruptcies of such companies. Some members of EPA's bankruptcy work group noted that, in their view, developing a fail-safe system for identifying relevant bankruptcies could require significant additional resources and might not be a cost-effective endeavor. For example, in many bankruptcy cases there may be few, if any, assets available for distribution to creditors. Nonetheless, on May 10, 2005, EPA issued an interim protocol for coordination of bankruptcy matters under the Superfund program that, among other things, (1) recommends actions to better ensure that EPA receives relevant bankruptcy notices and (2) identifies additional actions that may be relevant in bankruptcy cases other than filing claims, such as opposing abandonment of contaminated properties and objecting to terms of plans of reorganization or sales of property. Further, available technologies, such as an EPA Intranet site, could provide an efficient and effective tool for the agency to track bankruptcy cases it identifies and reviews. For example, such a site could contain an EPA data sheet on each bankruptcy case identified, as well as key court documents as appropriate and available, that would be readily accessible to EPA staff across the agency to review and update. Even when EPA identifies relevant bankruptcy filings to assess, the agency is hampered by other information limitations.
For example, as previously discussed, in many cases, EPA does not yet have adequate information on the extent of contamination at relevant sites and has difficulty in developing supportable cleanup cost estimates for the claim in the bankruptcy case. In other cases, the bankruptcy filings include lengthy lists of sites, some of which EPA may have no information about, including whether there is any liability under federal environmental law. Lack of information about sites can present challenges to EPA in negotiating bankruptcy settlement agreements with large companies, such as Exide Technologies and Kaiser Aluminum, which cover numerous contaminated sites. An EPA attorney who worked on the Kaiser Aluminum case said that the tight time frames under which they had to obtain information about the relevant contaminated sites and the significantly larger resources the company had to support its negotiations made this effort challenging. Another challenge EPA faces is that companies may send EPA notice of their bankruptcy filings identifying sites with no related enforcement actions. According to an EPA official, if a company provides EPA with notice of its bankruptcy filing and EPA does not submit a proof of claim in the bankruptcy court—likely in this situation since EPA would not be aware of any environmental hazard—the claim could be discharged in the bankruptcy process. Consequently, reviews of the environmental disclosures in Exhibit C of the debtor’s bankruptcy petition and the Statement of Financial Affairs are important to identify those sites for which EPA may file a claim as well as those sites about which the agency has no knowledge and can potentially challenge discharge requests to the bankruptcy court. We note that EPA’s May 10, 2005, interim bankruptcy protocol recommends that the agency’s bankruptcy coordinators review these documents in determining whether an environmental claim or issue of interest is involved. Finally, it is a challenge for EPA to have timely and accurate information to identify those instances in which fraudulent transfers of assets may have occurred and which a bankruptcy court would nullify if such transfers were brought to its attention. Generally, EPA has limited, if any, information on the complex organizational structures businesses may be using and on any transfers among entities that may have taken place. Similarly, information is not readily available about privately held corporations or limited liability companies—an organizational form being used by many businesses. For instance, limited liability companies registered in Nevada do not have to provide information about all of the owners, making it difficult for EPA or others to identify transactions among related companies that may be illegal. Because the liable parties often are aware of environmental liabilities for years before they must pay for the cleanups, they have time to reduce their net worth by making business decisions that result in the redistribution of assets—and thus make these resources unavailable for payment of environmental liabilities. According to an EPA enforcement official, it is extremely difficult for the agency to look back on the business decisions a company has made over three or more years to determine whether its actions may have been fraudulent. EPA has authorities and enforcement tools that it could use more fully to obtain cleanup costs from liable businesses, especially those in bankruptcy or other financial distress. 
Specifically, EPA has not implemented a 1980 statutory mandate under the Superfund law to require that businesses handling hazardous substances maintain financial assurances that would provide evidence of their ability to pay to clean up potential spills or other environmental contamination that could result from their operations. As a result of EPA's inaction, the federal treasury continues to be exposed to potentially enormous cleanup costs associated with businesses not currently required to provide financial assurances. Also, although EPA requires financial assurances from businesses entering into settlement agreements and orders under Superfund and, as a matter of policy, includes them in settlement agreements and orders under RCRA, the agency has done little to ensure compliance with these requirements. EPA has on occasion used other enforcement authorities, including (1) obtaining offsets, which allow the government to redirect payments or tax refunds it owes businesses to federal agencies with claims against these businesses, and (2) filing liens on property for which the government has incurred expenses under Superfund. Greater use of these authorities could produce additional payments for cleanups from liable businesses, even in bankruptcies. Despite a requirement to do so in the 1980 statute creating the Superfund program, EPA has not issued regulations requiring certain businesses that handle hazardous substances to demonstrate their ability to pay for environmental cleanup costs. Specifically, the statute required EPA to issue requirements "that classes of facilities establish and maintain evidence of financial responsibility consistent with the degree and duration of the risk associated with the production, transportation, treatment, storage or disposal of hazardous substances." Such regulations could help to fill several significant gaps in EPA's environmental financial assurance coverage, thereby reducing the risk that the general public (i.e., taxpayers) will eventually have to assume financial responsibility for cleanup costs. One gap involves types of waste that are excluded from RCRA coverage. Some wastes associated with mining activities can result in substantial cleanup costs but are excluded from the definition of hazardous wastes and, therefore, are not regulated under RCRA's hazardous waste provisions. A second gap in EPA's financial assurance coverage is that hazardous waste generators (such as metal-plating facilities and dry cleaners) are generally not required to maintain any financial assurances. Specifically, businesses may generally store waste in compliance with specified requirements for up to 90 days without needing a permit or being subject to the regulations governing hazardous waste storage facilities. Finally, a third gap is that none of EPA's current financial assurance regulations require companies or industries that pose a significant risk of environmental contamination to provide assurance that they could meet cleanup obligations associated with future accidents or spills of hazardous substances or wastes. These gaps have become more significant since the authority for an environmental tax on corporations, crude oil, and certain chemicals, which had largely funded the Superfund program, expired in 1995. As a result, general revenues are increasingly funding the cleanups paid for by the Superfund trust fund when responsible parties do not.
For example, for fiscal year 2004, EPA's appropriation for the Superfund program came from general revenues only. The financial assurance requirements in the Superfund statute could help to address these gaps: the statute requires EPA to develop financial assurance regulations for businesses handling hazardous substances. As previously noted, EPA was to use a risk-based approach for both (1) identifying the entities that would be covered and (2) specifying the financial assurance coverage they would be required to have. The law requires EPA to give priority in developing these requirements to those classes of facilities, owners, and operators that the agency determined present the highest level of risk of injury. Once identified, the different classes of facilities that handle hazardous substances—which could, for example, include all businesses in a given industry or all those handling a specific hazardous substance—would be required to maintain evidence of financial ability to cover actual and potential cleanup costs consistent with the degree and duration of risk associated with the production, transportation, treatment, storage, or disposal of hazardous substances. Implementation of this requirement could help to close the financial assurance gaps discussed above because under the Superfund law EPA could require financial assurances for cleaning up existing and future contamination at facilities that handle hazardous substances but are not subject to RCRA's closure/post-closure or corrective action programs, including many mining sites and facilities that generate, but do not treat, store, or dispose of, hazardous waste. EPA may also wish to give priority in developing these requirements to facility owners whose prior actions indicate they may pose a high risk of default on their environmental obligations. Factors EPA may wish to consider in evaluating owner risk include compliance history—such as a history of noncompliance with environmental laws, including cleanup obligations—and the magnitude of past, current, and potential environmental liabilities. In applying the Superfund law's risk-based criterion for developing financial assurance requirements, EPA may want to consider hardrock mining—for example, gold, copper, and iron ore mining—a high priority because it presents taxpayers with an especially serious risk of having to pay cleanup costs associated with wastes from thousands of abandoned, inactive, and operating mines on private lands in the United States. Using a statutory provision that allows solid waste from certain mining activities to be excluded from regulation as hazardous waste under RCRA, EPA has excluded several types of mining wastes from the definition of hazardous waste under RCRA, characterizing them as "low toxicity, high volume wastes." This exclusion has resulted in a significant gap in financial assurance, as discussed above. In addition, mining activities on private lands are not covered by the financial assurance requirements the Department of the Interior's Bureau of Land Management (BLM) requires for mines on federal land it manages. However, some mining facilities handle hazardous substances as defined under the Superfund law, and therefore financial assurance regulations issued under the Superfund law could apply to these facilities. According to the EPA Inspector General, mining sites can cause significant environmental problems, and these sites are typically large, complex, and costly to clean up.
A March 2004 report by EPA's Office of Inspector General identified 63 hardrock mining sites on the Superfund program's National Priorities List (NPL) and another 93 sites with the potential of being added to the list. At least 19 of the 63 existing NPL mining sites had estimated cleanup costs of $50 million or more. In total, the 63 sites were estimated to cost up to $7.8 billion to clean up, $2.4 billion of which is expected to be borne by taxpayers rather than the parties responsible for the contamination. The EPA Inspector General reported that at least one "clearly viable" party has been identified for 70 percent of the 63 NPL mining sites (including 11 percent where the viable party was a federal agency, such as the Department of the Interior). However, the report also emphasized that EPA should be concerned about the viability of these parties over time because of the long-term nature of the cleanup liabilities at mines. For example, the report states that the projected operation and maintenance period for the cleanup remedy ranges from 40 years to "in perpetuity." The costs to taxpayers would increase if the liable parties expected to pay for the cleanup remedies proved unable to do so. Some mine owners have defaulted on environmental liabilities associated with their mines on multiple occasions, and the cleanup costs for these sites are being or are expected to be borne largely by taxpayers. These owners may reasonably be viewed as at high risk for defaulting on environmental obligations associated with mines or businesses that they currently own. For example, one individual is associated with several businesses that have filed for bankruptcy protection. Like other mine owners with serial bankruptcies involving contaminated mining sites, this owner continues to operate businesses with significant contamination that needs to be cleaned up, potentially through the Superfund program. If EPA developed and implemented the financial assurance regulations that the Superfund law requires, EPA could require such owners to provide financial assurances now for existing and future cleanups, thereby reducing the amount that taxpayers would otherwise likely be required to pay. A Superfund site in Delaware provides an example of the exposure of the federal treasury to enormous cleanup costs associated with industries not currently required to provide EPA with financial assurances because, as generators of hazardous waste, they were not covered by RCRA's financial assurance requirements. In the 1980s, when this facility was owned by Standard Chlorine Corporation, it experienced two major chemical releases—including a 569,000-gallon release of hazardous chemicals that contaminated soil, sediment, a groundwater aquifer, and nearby surface water. Because the facility did not treat or dispose of hazardous waste, and did not store waste for more than 90 days, however, Standard Chlorine did not have to provide financial assurance under RCRA for the cleanups. In 1987, EPA added the site to the Superfund NPL because of the extensive contamination. Subsequently, a limited liability business, Charter Oak Capital Partners LP, established a subsidiary corporation called Metachem Products, which acquired substantially all of Standard Chlorine's assets, including the facility, in 1998, and Metachem accordingly became liable for the Superfund cleanup. However, in May 2002, Metachem declared bankruptcy and abandoned the chlorinated benzene manufacturing facility.
EPA estimates that it has incurred about $28 million in cleanup costs to date at this site and that the total cleanup cost will eventually rise to $100 million. Despite the clear benefits that EPA could derive from implementing financial assurance requirements under the Superfund statute, over the past 25 years, EPA has made only sporadic efforts to do so. For example, EPA took some steps early on to identify high-priority classes of facilities but did not complete this effort, although the statute included a December 1983 deadline for this task (see app. II for more detail). In 1983, the Director of EPA’s Office of Solid Waste stated that resources were insufficient to develop and implement the Superfund financial assurance requirements. But EPA never asked the Congress to provide additional funds for this purpose. In 1987, we recommended that EPA set milestones leading to the timely implementation of Superfund financial assurance regulations, but EPA did not implement this recommendation. More recently, an April 2004 internal review of EPA’s Superfund program recommended that the Office of Solid Waste and Emergency Response study whether promulgating new regulations under the broad financial assurance authorities contained in the Superfund law could reduce future Superfund liabilities with respect to facilities not covered under RCRA financial assurance requirements. In response to this recommendation, EPA created a work group that is collecting and evaluating information on the industries and types of facilities that have been listed on the Superfund program’s National Priorities List (NPL). While this study should provide useful and relevant information to EPA—in particular on gaps in the coverage of RCRA’s corrective action program— we believe that the issue for implementing the financial assurance requirement under the Superfund law is broader than the question of which industries have sites that have been listed on the NPL. That is, the key issue is identifying industries at high risk for environmental contamination. EPA and the states have a wealth of information from both existing studies and from the knowledge base of EPA’s and states’ enforcement staff across the country. For example, EPA’s 2002 study on the almost 900 RCRA facilities undergoing cleanup measures under the corrective action program provides relevant information on industries at risk for environmental contamination and on the costs of those cleanups. In addition to not establishing the financial assurance requirements called for in the Superfund law, EPA is not ensuring that the benefits that could be derived from its existing financial assurance requirements for Superfund and RCRA corrective action cleanups are realized. Specifically, in negotiating compliance orders and settlements for these cleanups, EPA generally accedes to the financial assurance mechanism the liable party suggests without routinely determining the risk of the proposed mechanism in light of such factors as the strengths and limitations of the various mechanisms, the financial histories of liable parties, any existing agreements that have reduced the amounts businesses are required to pay for cleanups on the basis of ability-to-pay analyses, and the estimated total environmental liability of individual parties. 
In addition, EPA has increased the financial risk to the government by not providing adequate oversight and enforcement to ensure that the parties responsible for Superfund and RCRA cleanups obtain and maintain the required financial assurances. EPA has acknowledged that its enforcement of financial assurances has been inadequate and has initiated some actions to address this problem. EPA has generally given companies significant flexibility to choose the type of financial assurance mechanism they will use to demonstrate their ability to meet their obligations under the RCRA corrective action and Superfund programs. While the closure/post-closure program has regulations governing financial assurances, the corrective action and Superfund programs do not. EPA generally accepts the same financial assurance mechanisms in the Superfund and RCRA corrective action programs as are outlined in the RCRA closure/post-closure regulations. Under the closure/post-closure regulations EPA must generally accept the financial assurance mechanism chosen by the party, so long as the party meets the relevant regulatory requirements for that mechanism. The financial assurance mechanisms EPA generally accepts in all three programs are outlined in table 1. Financial assurance mechanisms vary in the financial risks they pose to the government—and thus to taxpayers who may ultimately have to pay for environmental cleanups if the responsible parties default on their obligations; the oversight and enforcement challenges they pose to the regulators, such as EPA, who are responsible for enforcing them; and the costs companies may incur to obtain them. For example, as shown in table 2, while the costs to companies of the corporate financial test and the corporate guarantee mechanisms are low compared with other forms of financial assurance, the relative financial risk to the government and the amount of oversight needed are relatively high. In contrast, letters of credit present comparatively low financial risk to the government and need less oversight but impose relatively high costs on companies. In essence, as the table shows, those financial assurance mechanisms that impose the lowest costs on the companies using them also typically pose the highest financial risks to the government entity accepting them. We note that EPA continues to allow financial assurances that are simply promises to pay—the corporate financial test and the corporate guarantee—even though its 2003 guidance on financial assurance for the RCRA corrective action program underscores the importance of having resources set aside “in the event a company hits a financial decline.” The mechanisms that pose the greatest financial risk to the government— the corporate financial test, the corporate guarantee, and some insurance products—also require specialized expertise to oversee. Concerns have been raised, both within EPA and by others, that the corporate financial test and the corporate guarantee offer EPA minimal long-term assurance that the company with environmental liability will be able to fulfill its financial obligations. In 2000, the Department of the Interior’s Bureau of Land Management (BLM) identified similar concerns when it decided to prohibit new corporate guarantees for future reclamation work to restore lands when mining operations cease. 
In making this decision, BLM cited both the agency's lack of expertise to perform the periodic reviews of companies' assets, liabilities, and net worth that would be necessary to oversee guarantees, and the fact that even with annual reviews by skilled staff, a default risk would remain. Further, some concerns about the financial test, such as the following, stem from limitations inherent in relying on financial indicators rather than secured guarantees:

• The corporate financial test rests on the assumption that a company's recent financial performance is a reasonable predictor of its financial future. However, the financial test cannot anticipate sudden changes in market conditions or other factors that can dramatically change a company's financial picture—and a company's ability to meet its environmental obligations.

• Once a company's financial condition declines to the point that the company can no longer pass the financial test, it can be very difficult for the company to meet the requirements, or pay the costs, of obtaining an alternative form of financial assurance from a third-party provider.

• The financial test is only as sound as the data used to calculate the financial ratios underpinning the test—if a company's accounting of its net assets or liabilities is questionable, or the quality of its assets is weak, one or more of the ratios may not represent the company's true financial condition. EPA officials noted that the passage of the Sarbanes-Oxley Act of 2002, with its requirements aimed at improving the accuracy and reliability of corporate disclosures, may have reduced some of these data-related concerns about the financial test, at least for publicly held companies.

In addition to these limitations, weaknesses in the financial test itself are actively under discussion. For example, EPA's Environmental Financial Advisory Board, a federal advisory committee that provides advice and recommendations to EPA on environmental finance issues, has been charged by EPA with reviewing financial assurance mechanisms. In March 2005, the project work group leading this review submitted to the full board for consideration the first draft of a proposed letter to the EPA Administrator commenting on the financial test. In this draft letter, the work group stated that the current test is "an inadequate mechanism for determining financial capacity." The draft letter also stated that while the EPA financial test is transparent and objective, the test is not sufficiently comprehensive in what it assesses, does not examine and incorporate historical trends, and is not sufficiently rigorous to protect against manipulation. The full board is reviewing the draft letter and has received substantive comments on it from outside parties. The work group is reviewing these comments and expects to develop a revised draft for full board review and approval, outlining the board's findings and recommendations concerning the financial test. Another concern about the financial test relates to the threshold a company must meet to qualify for the test—a company must have at least $10 million in tangible net worth. EPA has not adjusted this standard since 1982, when the RCRA financial assurance regulations were implemented. The Environmental Financial Advisory Board subcommittee noted that the $10 million threshold may be inadequate and should either be recalibrated or be subject to standards of proportionality.
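As a rough illustration of how far a fixed 1982 threshold has eroded, the calculation below applies an assumed average annual inflation rate of 3 percent, a hypothetical figure used only for illustration; a precise adjustment would require actual Consumer Price Index data.

```python
# Rough illustration (hypothetical 3 percent average rate, not actual CPI data)
# of how a $10 million threshold set in 1982 erodes in real terms by 2005.
THRESHOLD_1982 = 10_000_000      # tangible net worth threshold, set in 1982
ASSUMED_INFLATION = 0.03         # illustrative average annual rate
YEARS = 2005 - 1982              # 23 years

equivalent_2005 = THRESHOLD_1982 * (1 + ASSUMED_INFLATION) ** YEARS
print(f"1982 threshold expressed in 2005 dollars: ${equivalent_2005:,.0f}")
# Under this assumption, roughly $19.7 million; that is, the unadjusted
# $10 million test is now only about half as stringent, in real terms,
# as when it was adopted.
```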
We believe that the $10 million standard is likely no longer appropriate given, for example, the rate of inflation since 1982. In addition, the financial test requires that EPA and state regulators have the financial skills to assess whether a company's representation of its financial condition is reasonable. An EPA regional enforcement official said that the assessment of whether a company meets the financial test can be particularly difficult given that companies have an incentive to pass the test—therefore, companies may try to paint their financial position as "rosier" than it actually is to avoid having to pay for higher-cost financial assurance. (As recent court cases, such as those involving Enron and WorldCom, have shown, serious misstatements aimed at portraying a strong financial position may occur for a number of other reasons as well—for example, to protect or improve the value of the corporate stock.) Because EPA and state staff who oversee the implementation of these mechanisms may not have sufficient expertise to provide the desired level of financial analysis, the Environmental Financial Advisory Board's March 2005 draft letter to the EPA Administrator noted that the financial test may be better served if companies retained credit services to provide independent financial analysis. Moreover, in a March 2001 report, EPA's Inspector General identified other factors that complicate overseeing the financial test. In this report, officials cited difficulties in predicting companies' long-term financial viability. For example, in reviewing the financial assurances of a sample of hazardous waste facilities required to have financial assurances, the Inspector General found that some facilities that had established financial assurance through the corporate financial test no longer met the requirements of the test a year later. Other difficulties officials cited in overseeing the financial test included evaluating data from companies that have hazardous waste facilities in many states and factoring in the impact of mergers and acquisitions, among other things. In a 2003 paper summarizing its review of RCRA financial assurances, the Association of State and Territorial Solid Waste Management Officials reported that waste and remediation managers from various states believe that EPA should reconsider the financial test and corporate guarantee as financial assurance mechanisms due to the financial meltdown of Enron and many other publicized financial scandals of Fortune 500 companies with audited financial statements. The paper states that EPA's position is that eliminating these financial assurances could add substantially to the cost of the financial assurance regulations. As table 2 shows, the corporate financial test and the corporate guarantee are the least costly financial assurances for companies to use, so eliminating them would increase compliance costs. At the same time, these two financial assurance mechanisms are the most costly for the government because of the high oversight costs associated with them, as discussed above, and because the government, rather than the companies, is carrying the default risk. In addition to the risks posed by the use of the corporate financial test and the corporate guarantee, the use of insurance policies as financial assurance has typically presented higher financial risk to the government than letters of credit, surety bonds, and trust funds.
For example, concerns have been raised about the increased use of policies written by “captive” insurance companies—that is, by wholly-owned subsidiaries controlled by parent companies and established to insure the parent companies or other subsidiaries. In 2001, for example, EPA’s Office of Inspector General found that financial assurance provided by a “captive” company did not provide adequate assurance of funding for closure and post-closure activities at hazardous waste facilities. EPA acknowledges that the financial health and solvency of a captive insurance company may be closely connected to the financial condition of the company with environmental liabilities, and therefore, if the company faces financial difficulties, the insurer may also be in financial distress and not be able to cover claims made on its policies. The Congress has also raised questions about the use of insurance as financial assurance at solid waste landfills, which have a separate set of financial assurance regulations. A June 2000 House committee report directed EPA to conduct a study of financial assurance agreements at solid waste landfills to determine if sufficient safeguards have been properly maintained and future liabilities minimized. According to the EPA official responsible for preparing this report, the concerns that led to this mandate dealt largely with captive insurance. EPA’s draft report in response to the mandate was being reviewed within the agency as of June 2005; no expected issuance date has been announced yet. Because the report is still in draft form, EPA officials were not willing to discuss its findings or potential recommendations. Moreover, independent of issues associated with captive insurance policies, insurance policies covering corrective action or Superfund cleanups can require significant oversight on the part of regulators. For example, since insurance policies may contain exclusions that limit their coverage, the regulator must carefully review a policy being used as financial assurance to verify that it fully covers the anticipated environmental claims. Also, the regulator must remain aware of the insurer’s status—under current EPA requirements, the insurer is not required to inform the regulator if its license to operate is revoked or it becomes insolvent. In addition, EPA officials noted that insurers will sometimes include language in the policy that conflicts with EPA’s regulatory requirements, which may delay recovery on the policy. The Association of State and Territorial Solid Waste Management Officials has voiced concerns about the level of oversight required of insurance as financial assurance, and in 2003, recommended that EPA update its guidance on financial assurances, particularly its guidance on insurance issues, such as how to make claims on policies. In addition to the financial risks to the government resulting from the use of certain financial assurance mechanisms, as discussed above, several other financial risk factors affecting liable parties’ ability to fulfill their cleanup obligations make it all the more important that EPA or state regulators, if applicable, ensure that liable parties provide solid financial assurances that will be available when needed. These risk factors include (1) the financial histories of liable parties, (2) any existing agreements that have reduced the amounts businesses are required to pay on the basis of ability-to-pay analyses, and (3) the estimated total environmental liability of individual parties. 
When EPA or a state regulator agrees to a liable party's use of a financial assurance mechanism, it would be prudent for the agency to consider these factors as well as the risk to the government associated with the mechanism itself. In some cases, EPA or state regulators have encountered individuals or companies with track records that indicate that they are unlikely to have the financial resources or the willingness to carry out their environmental cleanup responsibilities. The histories of these parties may indicate that they are at high risk of failing to comply with future requirements, such as cleanup requirements under the corrective action program. Parties that present such high risks to EPA and state regulators could be required to obtain strong financial assurances to ensure that their environmental responsibilities are fulfilled. Also, large liabilities—which may stem from one or more megasites under Superfund and/or RCRA or from a series of smaller sites—expose EPA and taxpayers to significant financial risk, especially if only one or a few parties are liable for the cleanups. In such cases, choosing financial assurance mechanisms that provide relatively low financial risk to the government—that is, that provide at least some actual funding—is particularly important. However, EPA and state staff overseeing financial assurances generally do not have information readily available about a company's total environmental liabilities across the United States, nor would they typically have access to information about (1) environmental obligations a company may have in other countries or (2) the extent to which the company may be using the same financial assurance mechanism to back up numerous environmental obligations. As a result, these regulators may, for example, approve the financial test for financial assurance at a RCRA site or sites without considering a company's liability for a large Superfund site in another state. Finally, for RCRA sites, typically a single owner or operator is responsible for the cleanup. Similarly, at some Superfund sites, there may be only a few liable parties, or even just one. Along these lines, EPA enforcement officials said that strong financial assurances are particularly critical when a site's cleanup costs are large but the number of liable parties is small. At such sites, strong financial assurances are likely to be the only way to avoid having taxpayers pay for these cleanups should the liable party experience financial reverses, file for bankruptcy, or restructure in a way that leaves the party with insufficient assets to pay for the cleanup. EPA has conducted limited enforcement of its existing financial assurance requirements. As a result, the agency has not ensured that the parties responsible for Superfund and RCRA corrective action cleanups obtain and maintain the financial assurances they are required to provide to demonstrate their ability to meet these environmental obligations. In fact, the agency lacks basic information about its portfolio of financial assurances. That is, EPA does not have data on the financial assurances that businesses are required to have in place for Superfund and RCRA cleanups, such as the type of assurance required, the amount of financial assurance provided, and whether the financial assurance is still authorized or is in force.
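A minimal sketch of what tracking these missing data elements could involve follows; the record layout and field names are hypothetical illustrations, not a description of any existing or planned EPA system.

```python
# Purely illustrative sketch (hypothetical fields) of the basic data elements
# the report identifies as missing: the mechanism type, the amount of
# assurance, and whether the instrument is still in force.
from dataclasses import dataclass
from datetime import date

@dataclass
class FinancialAssuranceRecord:
    site_id: str                # Superfund or RCRA site identifier
    responsible_party: str      # company obligated under the settlement or order
    mechanism: str              # e.g., "letter of credit", "corporate guarantee"
    amount_required: float      # dollars of assurance required
    amount_provided: float      # dollars actually documented as in place
    in_force: bool              # whether the instrument is currently valid
    last_verified: date         # when a regulator last confirmed the status

record = FinancialAssuranceRecord(
    site_id="EXAMPLE-001",
    responsible_party="Hypothetical Chemical Co.",
    mechanism="letter of credit",
    amount_required=2_500_000.0,
    amount_provided=2_500_000.0,
    in_force=True,
    last_verified=date(2005, 6, 30),
)
print(record)
```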
Further, in late 2003, one EPA regional office conducted an assessment of financial assurances for Superfund cleanup settlements negotiated in that region and found significant noncompliance with financial assurance requirements. Specifically, EPA officials found that only 30 percent of the liable parties subject to financial assurance requirements in Superfund settlements, consent decrees, and EPA cleanup orders were in compliance with these requirements. The responsible parties at another 48 percent of these sites appeared to be out of compliance with relevant financial assurance requirements. The regional staff reported that the remaining 22 percent of the cases needed additional follow-up and review because, among other things, EPA could not locate the financial assurance documents and thus could not determine whether the liable parties were in compliance with the financial assurance requirements. (In some cases, EPA had the responsibility for maintaining the financial assurance documents, and in others that responsibility had been delegated to state regulators.) The staff member leading the assessment reported that locating the original financial assurance documents within the region's records was "painfully slow." Moreover, EPA's key databases for Superfund and RCRA do not contain data elements related to financial assurances. In addition, although EPA's regional offices are responsible for ensuring compliance with Superfund settlement agreements, including financial assurance requirements, the regional offices have generally not tracked information on their portfolios of financial assurances supporting settlements for cleanups in their regions. For example, we asked several EPA regional offices to provide information on the Superfund settlements negotiated in their offices, such as (1) the number of settlements backed by financial assurances and (2) the number, if any, not in compliance with this requirement. Regional EPA officials told us that this information was not readily available and that obtaining it would entail going back to each individual settlement agreement to identify the financial assurance mechanism, if any, and then determining the current status of the financial assurance. The situation with financial assurances under the RCRA corrective action program is more complex. While EPA has overall responsibility for implementing the act, and retains enforcement authority, it has authorized most states to administer the corrective action program. As a result, to obtain information on these financial assurances, EPA would have to request that the states gather this information and provide it to EPA. Lacking data on the financial assurances that are required, EPA cannot be assured that all appropriate financial assurances are in place and available, as needed. In addition, the data limitations preclude EPA and state officials from conducting other analyses and enforcement-related tasks, such as determining whether the financial assurances that a company provides will be adequate given the company's cleanup liability across the nation and analyzing the effectiveness of the various types of financial assurance in providing funding for cleanups. Enforcement officials at both EPA headquarters and several regional offices acknowledged that the agency has often paid scant attention to oversight and enforcement of financial assurance requirements in cleanup settlements and cleanup orders.
According to EPA officials, the agency’s focus in the Superfund program has been on the environmental issues associated with cleanups, such as ensuring that appropriate cleanup remedies are chosen and that the liable parties begin the agreed-upon cleanup work. Consequently, when EPA negotiates and enforces cleanup settlements, enforcing financial assurance requirements, including reviewing complex financial data about responsible parties, typically takes a back seat to environmental concerns. According to one regional attorney, there are a number of important issues to resolve in negotiating settlements, and ensuring that a strong financial assurance mechanism is in place often becomes a “B list” issue during negotiations. Moreover, one official noted that EPA tracks whether its regional enforcement officials reach a settlement with liable parties as a key measure of enforcement activity—but there is no such results-oriented measure concerning enforcement of financial assurances. In addition, the existing model language for Superfund settlements does not require that the financial assurance be obtained by the time the settlement is signed. Rather, the party agreeing to the settlement has 30 days after signing it to obtain financial assurance and notify EPA. This arrangement has precluded an assessment of the assurance before the settlement is signed. Once a Superfund settlement has been signed, enforcement of financial assurances—to ensure that they were actually obtained, are sufficient to cover anticipated cleanup costs, and remain in force—is likely to remain a low priority, according to some EPA enforcement officials. An EPA official explained that this enforcement responsibility typically falls to the remedial project manager, who has overall responsibility for the site cleanup. This remedial project manager’s expertise is typically in engineering and environmental cleanup issues, not financial matters such as determining whether a liable party’s corporate guarantee provides adequate protection against default on the party’s cleanup obligations. Moreover, if EPA discovers at some point that the liable party’s financial assurance is no longer adequate, EPA is often reluctant to insist that the company incur the additional cost of obtaining further financial assurance as long as the company is carrying out at least some of the cleanup work, according to some enforcement officials. In fact, EPA and Justice Department officials have noted that at times they are faced with this dilemma: whether to require companies to use some of their limited resources to obtain secure financial assurances versus applying those funds directly to the cleanups. EPA has begun to recognize that its limited enforcement of its financial assurance requirements for Superfund and RCRA cleanups, as well as these requirements for closure and post-closure activities at hazardous waste facilities, is exposing taxpayers to significant risk of having to pay cleanup costs at many current and future Superfund sites. As a result, EPA’s enforcement office has begun several initiatives concerning financial assurances: EPA has added financial assurances to its national enforcement priorities beginning in fiscal year 2006. EPA has taken steps to evaluate the addition of data elements, such as the type of financial assurance provided and the name of the company providing it, to its key databases for Superfund and RCRA programs. 
EPA estimates that the Superfund database’s revisions will be in place by the end of fiscal year 2005. The data elements are expected to be added prospectively, that is, EPA would add information about financial assurances in new Superfund settlements and consent decrees to the database as they are reached, but information about existing financial assurances would not likely be added. Because the RCRA database additions involve coordinating with states and tribes authorized to implement RCRA, they are expected to take longer, and no estimate of implementation date has been made. EPA has begun efforts to increase the expertise of officials who enforce its financial assurance requirements. For example, the agency has developed a course on financial assurance mechanisms for officials who enforce RCRA financial assurance requirements. In late 2004, EPA made available three cost-estimating tools to help regulators estimate the appropriate level of financial assurances needed in the RCRA corrective action program. EPA has also begun to fund training in the use of cost-estimating software for its staff and state agency personnel. In response to a recommendation in EPA’s April 2004 internal Superfund review, as discussed earlier, EPA has begun a study that, among other things, will assess the extent to which facilities that had been required to have financial assurances under RCRA’s hazardous waste program have become taxpayer-funded Superfund cleanups. Also, EPA’s Office of Inspector General initiated a review in late 2004 on the effectiveness of RCRA’s financial assurance requirements. In addition to financial assurances, EPA has other enforcement authorities available under certain circumstances to help obtain payments for cleanups. For example, EPA may in appropriate circumstances (1) seek, in cooperation with the appropriate federal agency, tax refund or other administrative offsets, which allow the federal government to redirect payments or tax refunds it owes businesses to federal agencies with claims against these businesses and (2) file liens on property for which the federal government has incurred expenses under the Superfund law. These authorities may be used regardless of whether a liable party is in bankruptcy. Under the bankruptcy code, offsets and these liens may be considered secured claims—that is, those the debtor must pay first—which can greatly increase the likelihood that EPA will recover at least some of its cleanup costs in bankruptcies. An administrative offset is a procedure allowing a federal agency to obtain monies owed to it by a party from payments that the federal government owes the same party, such as tax refunds or payments under government contracts. EPA officials noted an important advantage of offsets as opposed to claims in bankruptcy court: to the extent that the offsetting amount will cover the dollar amount of EPA’s claim, the claim will be paid in “full dollars.” In contrast, claims in bankruptcy court, as previously discussed, may result in a payment of only pennies on the dollar amount of the claim. According to EPA and Justice Department enforcement officials, the agency has obtained tax refund offsets in several bankruptcy cases and other administrative offsets in two cases in the past few years. 
EPA officials noted one such example: in July 2004, after United Airlines filed for bankruptcy protection, EPA reached a settlement with the company on its environmental liabilities that included a provision to recover $550,000 through an offset of a federal tax refund. EPA officials also described an instance in which they had not been successful in obtaining an offset. Officials in EPA’s Philadelphia office told us of their failed attempt to obtain an offset from Exide Technologies when it filed for bankruptcy reorganization in 2002. One of these officials estimated that the company had an environmental liability of about $80 million from more than 100 contaminated sites. EPA officials believed the company had significant government contracts and tried to identify those contracts and the amount the government owed the company at that time. However, these officials said they were unable to obtain this information in time—that is, before the government paid Exide. (Under the Prompt Payment Act, an agency acquiring property or services from a business concern must make payments by the required payment dates or pay an interest penalty to the business on the amount due, and thus information on pending government payments must be gathered quickly.) To gain the benefit of administrative offsets to help recover some cleanup costs, EPA would need to quickly identify government payments owed to bankrupt or financially distressed companies with environmental liabilities and process its offset claim before the government paid the contractor or vendor. To date, EPA has provided little guidance to its enforcement staff on how to use its offset authority in recovering cleanup costs. For example, EPA’s guidance for participating in bankruptcy cases mentions offsets but does not provide any instruction on the necessary steps in obtaining an offset, such as coordination that may be needed with the Internal Revenue Service for a tax refund offset. Similarly, in training sessions on bankruptcy issues for EPA attorneys that we observed in 2004, EPA and Justice Department bankruptcy experts encouraged the use of offsets, but did not include any specific information on how to obtain offsets or refer participants to any guidance on doing so. Particularly given the time-critical nature of any attempt to obtain offsets, procedures and guidance to staff to facilitate the use of offsets would both encourage staff to use these tools, when appropriate, and support their efforts to do so. For example, guidance to EPA staff on how to quickly obtain information on government contracts or grants may have helped them identify potential offsets for some environmental liabilities associated with bankruptcies. In addition, an agencywide process for identifying tax payments due to businesses would enable the agency to routinely identify whether businesses filing for bankruptcy that have environmental liabilities are owed any tax refunds. Under the Superfund law, EPA has a lien, or legal claim, on property if the government has incurred costs associated with cleanup at the property. According to a relevant House committee report, one purpose of the lien was to prevent the unjust enrichment of the responsible party, who might otherwise benefit from the rise in property value resulting from the property’s cleanup. 
According to EPA, liens can provide the agency with leverage in obtaining cleanup costs generally, and can also assist the agency in obtaining cleanup funds under bankruptcy proceedings because liens are classified as secured claims—the highest priority category for receiving payments from a debtor in a bankruptcy. Thus, a lien can greatly increase the likelihood that EPA will recover at least some of its cleanup costs in bankruptcy cases. However, to establish the priority of a property lien under the Superfund program among other secured parties and creditors, EPA must file notice of the lien (sometimes called “perfecting a lien”) in the appropriate governmental office in the state where the property is located. Importantly, the automatic stay provision under bankruptcy law generally prohibits filing or enforcing a lien after a debtor has filed for bankruptcy. In addition, the priority of property liens is typically based on their filing dates. Thus, it is to EPA’s advantage to file Superfund liens as soon as possible, both to secure EPA’s financial interest and to receive as high a priority for that interest as possible. An example of the benefit liens can provide is a bankruptcy case cited by EPA in which the agency recovered $10 million in satisfaction of its property lien. (The property was sold for $24 million at an auction conducted by the bankruptcy court.) If, however, EPA does not routinely consider and analyze the use of liens at Superfund sites to protect the government’s financial interest where cost reimbursement may otherwise be difficult or impossible, the agency can miss opportunities to have status as a secured creditor in bankruptcy cases. In addition, having Superfund liens can also help EPA negotiate settlements with liable parties at Superfund sites, according to EPA. For example, according to EPA, the liens cover the entire property for which Superfund-related costs have been incurred, not just contaminated areas—and owners of some properties may wish to sell “clean” portions of their properties. Such owners would have an incentive to have the lien released, which would happen only if they conducted the cleanup or reimbursed EPA for cleanup costs. In fact, EPA has identified instances in which even the threat of filing a lien has produced agreements for payments with uncooperative parties. With filed liens, the agency may also become aware of assets that businesses wish to sell to affiliated parties, because such transactions would need to be approved by the agency; EPA could then challenge improper transfers under fraudulent transfer laws. Since the lien provision was added to the Superfund law in 1986, EPA has issued guidance to its staff on filing liens and has encouraged staff to do so. For example, in 2002, EPA’s Director of the Office of Site Remediation Enforcement issued a memorandum encouraging the filing of liens to secure response costs in Superfund cases. Also, in training sessions on bankruptcy issues for EPA enforcement attorneys, such as those we observed in 2004, EPA and Justice Department experts in bankruptcy encouraged these attorneys to file Superfund liens whenever possible. However, we found that EPA headquarters does not require its regions to report information to it on liens they have filed, and that overall the agency has little centralized information on such liens.
For example, although the principal database used to manage the Superfund program contains data fields for such liens, an EPA official with expertise in this database said that the agency has little confidence in the completeness or accuracy of these fields. Also, the lien-related fields were added in the late 1990s, so liens filed before that time are not likely to be included in the national database. Thus, it is not clear whether EPA has made good use of its authority to file Superfund liens. In addition, it is not clear that the agency is consistently aware, in a timely manner, of EPA property liens that it should pursue in bankruptcy cases. For example, EPA officials indicated that the agency generally relies on its enforcement attorneys to have knowledge of its Superfund liens at sites for which the attorneys have enforcement responsibility. However, the reliability of this informal system is questionable in light of such things as the often voluminous Superfund files we have observed—a wall of floor-to-ceiling shelves can be filled with files from just one case—staff changes over time, and the need for the relevant staff to be available when the notice of bankruptcy is circulated via email. In addition, agency guidance on bankruptcy cases does not specifically require staff to routinely determine, when reviewing notices of bankruptcy filings, whether EPA has filed a lien that could become a secured claim in the bankruptcy proceedings. Finally, we note that EPA officials highlighted the fact that lien filings are not included in the agency’s performance measures, and that greater attention can be expected to be given to those activities that are counted, such as reaching Superfund settlements. The need for EPA to fully use its existing authorities to execute the “polluter pays” principle underlying the Superfund and RCRA laws is even more compelling today than it was during the 1980s and 1990s, when corporate taxes—largely assessed on businesses at risk for environmental pollution—provided about $1 billion a year for Superfund cleanups. Now, without revenue from Superfund taxes, the cleanup burden has increasingly shifted to the general public—and at a time when large federal deficits are likely to constrain EPA’s ability to obtain such funding for these cleanups. In addition, over time, businesses have become more sophisticated in using the limited liability principle to protect their assets by separating them from their liabilities. They use the traditional corporate parent/subsidiary structure as well as relatively new business forms—limited liability companies and partnerships—often in complex, multilayered organizational structures. The result is that businesses of all sizes can easily limit the amounts they may be required to pay for environmental cleanups under Superfund and RCRA. Compounding the problem, from EPA’s perspective, is the long-term nature of many of the cleanups, which provides businesses with ample time to implement complex asset protection plans. Finally, it has become more common and acceptable for businesses to use the bankruptcy courts as a reorganization tool that enables them to emerge with discharged or reduced environmental liabilities. Collectively, these factors present serious challenges to EPA in attempting to enforce environmental laws and to ensure that polluters pay for cleanups.
For example, the ease with which companies can protect their assets can actually encourage businesses to take more risks in their operations, thereby increasing the risks of environmental contamination. Importantly, this situation also presents a significant management challenge for EPA in determining whether businesses have resources available to meet their environmental obligations. These challenges can seriously hamper EPA’s ability to achieve its primary mission of protecting human health and the environment because they present formidable obstacles to obtaining the funding needed for cleanups. That is, it is increasingly difficult for EPA to obtain funding to clean up not only existing Superfund sites but also those still in the Superfund pipeline. Thus, we believe it is imperative for EPA to increase its focus on financial management and to fully use its existing authorities to better ensure that those businesses that cause pollution also pay to have their contaminated sites cleaned up. In this regard, EPA has not used its authority under the Superfund law to require businesses that handle hazardous substances to provide financial assurances covering existing and potential cleanups. This statutory mandate recognizes that businesses likely to cause environmental contamination and endanger public health can reasonably be expected to incur a business cost in order to ensure that they will have the financial wherewithal to pay for spills and other contamination, whenever they may occur, consistent with the degree of risk their operations pose to public health and the environment. Under this statutory mandate, EPA is to require, as appropriate, financial assurances from businesses to protect public health and the environment prospectively. This requirement may be viewed as akin to mortgage companies’ requirements that borrowers provide homeowners insurance to protect the value of the assets against possible damage, except that this requirement is not directed at all businesses—it is directed at those at risk for contaminating the environment. Importantly, using this authority would help to close gaps in EPA’s existing financial assurance requirements: it would require some businesses not subject to RCRA’s financial assurance coverage, such as producers of certain mining wastes that have caused enormous environmental harm, to obtain financial assurance because of the environmental problems their operations are likely to continue to cause. It would also close the gap that exists under RCRA’s financial assurance requirements, which generally extend to businesses that treat, store, or dispose of hazardous waste, but not to businesses that generate hazardous waste, even though they may be at high risk for environmental problems, such as chemical spills. In 1980, when the Superfund financial assurance requirement was enacted by the Congress, it required EPA to first identify the classes of facilities with the highest risk of harm. This task is much easier today because EPA and the states now have 25 years of experience with Superfund and 29 years with RCRA. We believe EPA can expeditiously implement the requirement to identify those industries with the highest risk of environmental harm and establish appropriate risk-based financial assurance requirements for them. 
For example, EPA should be able to gather relevant information from Superfund and RCRA program data, studies, and the many officials involved with these programs over the years, among other sources, to identify those industries that pose high levels of environmental risk. Further, to ensure that financial assurances the agency requires under the Superfund and RCRA corrective action programs actually provide funding for cleanups in the event the liable parties default on their environmental obligations, it is critically important that EPA effectively oversee and enforce the financial assurances that businesses provide to the agency. The fact that EPA currently cannot even readily identify the financial assurances that should be in force is a clear indication of inadequate oversight and enforcement. As a result, there is an increased risk that taxpayers, rather than the parties responsible for the contamination, will ultimately have to pay for the cleanups of contaminated sites under Superfund and RCRA. Although EPA has begun some efforts to increase its oversight and enforcement of financial assurances, the agency will need to sustain and increase such efforts if financial assurances are to achieve their intended goal of ensuring that responsible parties, not U.S. taxpayers, pay to clean up hazardous waste sites. Also, we believe that EPA should evaluate the degree of financial risk and the oversight costs it is appropriate for the agency to bear. Fundamentally, it is a question of whether the industries that pose environmental risk or the government charged with protecting the environment should carry the financial risk for the contamination that the industries may cause. Considering the often very long-term nature of the cleanups—during which time it would be reasonable to expect businesses to set aside increased resources—as well as the resources and skills necessary to oversee the unsecured financial assurances, continuing to, in effect, subsidize businesses by accepting unsecured assurances may be a luxury the government can no longer afford. More specifically, in its evaluation, EPA should consider the different financial risks that the various financial assurances pose to the government. This is especially important in light of the problems that we, the EPA Inspector General, state regulators, and others have identified, particularly with respect to the corporate financial test, corporate guarantee, and captive insurance. For example, to effectively oversee some of the financial assurances, EPA staff—and state staff handling RCRA financial assurances for EPA—must have a high level of expertise in financial management and insurance, among other fields. However, EPA has not taken into account the variations in the number of staff or the levels of expertise needed to oversee and enforce the various financial assurance mechanisms. Doing so could provide EPA with the opportunity both to minimize the costs the government needs to bear to effectively oversee and enforce its financial assurance portfolio and to reduce the government’s financial risk for environmental cleanups. For example, when faced with the trade-off between allocating staffing resources to oversee unsecured financial assurances and meeting other agency responsibilities, BLM decided to no longer accept corporate guarantees, in part because of the oversight challenges they present.
In so doing, BLM shifted more of the financial risk to the businesses it regulates, which must purchase financial assurances from independent third parties, such as banks. In addition to financial assurances, greater use of other enforcement authorities, such as offsets and Superfund liens, could help EPA recover more costs from parties liable for environmental cleanups in some cases. Although offset authorities are limited to situations in which the government owes the company a tax refund or some other payment, a greater willingness by EPA to use these authorities—and to establish procedures and provide direction to staff on how to use them—could help the government better ensure that parties responsible for pollution pay the associated cleanup costs to the maximum extent practicable. For example, when liable parties are unwilling to fulfill their financial obligations for cleanups, EPA officials should routinely explore whether tax offsets may be available. Staff should be provided with policies and procedures detailing the steps that need to be taken to use these enforcement tools effectively. Finally, companies with environmental liabilities that file for bankruptcy present another set of management challenges to EPA. Under its current process for identifying and reviewing bankruptcies, the agency cannot be confident that companies with environmental liabilities are held responsible for their cleanup obligations to the maximum extent practicable because the agency cannot ensure that it has identified (1) those bankruptcies for which it should request the Justice Department to file claims with the bankruptcy courts for cleanup funds and (2) any existing rights the agency has that can give its bankruptcy claims a priority status, such as liens on Superfund properties, which significantly improves the agency’s chances of recovering funds under bankruptcy proceedings. Importantly, EPA also needs to review the specific sites identified in bankruptcy proceedings for purposes other than filing claims. One such purpose is to help ensure that discharges for businesses reorganizing under bankruptcy proceedings are not approved for contaminated sites of which EPA has not previously been aware. To its credit, EPA has established a bankruptcy work group that seeks to identify relevant bankruptcy filings to pursue and bankruptcy actions to monitor, such as notices to abandon property. However, the process the agency uses to identify relevant bankruptcy cases and actions is informal and essentially undocumented. As a result, it is not clear whether EPA is devoting sufficient time and resources to maximize the cleanup funds it can obtain under bankruptcy proceedings and to ensure that businesses are not receiving discharges of environmental liabilities inappropriately. We believe that EPA should build on the existing informal processes the agency is using and formalize and document its process for identifying relevant bankruptcy proceedings. In addition, we believe that EPA guidance on bankruptcy cases should be revised to emphasize some important actions that are not sufficiently addressed in existing guidance, such as routinely identifying contaminated sites listed in bankruptcy filings with which EPA is not familiar, so that the agency can take appropriate steps to ensure that courts do not inappropriately discharge such environmental liabilities.
To close gaps in financial assurance coverage that expose the government to significant financial risk for costly environmental cleanups, the EPA Administrator should expeditiously implement the statutory mandate under Superfund to develop financial assurance regulations for businesses handling hazardous substances, first addressing those businesses EPA believes pose the highest level of risk of environmental contamination, as the statute requires. In addition, to better ensure that the financial assurances EPA does require under the Superfund and RCRA corrective action programs provide sufficient funds for cleanups in the event liable parties do not fulfill their environmental obligations, EPA should enhance its efforts to manage and enforce the financial assurance requirements for Superfund and RCRA corrective action cleanups by taking the following actions:
• Evaluate the financial assurances the agency accepts in light of such factors as the financial risks EPA faces if liable parties do not meet their cleanup obligations; the varying financial risks posed by the individual financial assurance mechanisms; the agency’s capacity to effectively oversee the various financial assurance mechanisms—in particular, the expertise of staff (federal and state) and the number of staff; the information gaps the agency faces in overseeing the various financial assurances; and the concerns about certain financial assurances, such as the corporate financial tests, corporate guarantees, and captive insurance, that have been brought to the agency’s attention by state regulators, the EPA Inspector General, and others. If EPA continues to accept the corporate financial tests and corporate guarantees as financial assurance in these programs, it should revise and update its financial tests to address the deficiencies identified by the EPA Inspector General and others.
• Implement changes to Superfund and RCRA databases to support the efficient identification of EPA’s portfolio of financial assurances, and populate these databases with information on all financial assurances that liable parties should have in force, developing quality controls to ensure data reliability.
• Develop a strategy to effectively oversee the agency and state portfolios of financial assurances to ensure that all required financial assurances are in place and sufficient in the event the related businesses encounter financial difficulties, including bankruptcy. Such a strategy should include ensuring that adequate staffing resources with relevant expertise are available.
• Require that financial assurances be in place before EPA and liable parties finalize Superfund settlement agreements.
In addition, to better ensure that EPA holds liable parties responsible for their cleanup obligations to the maximum extent practicable, the agency should seek opportunities to more fully use its enforcement tools, particularly tax and other offsets, and provide specific guidance to its staff on how and when to use these tools. For example, EPA should routinely take advantage of tax offsets when liable parties are not meeting their obligations—not just when parties file for bankruptcy. To better ensure that EPA identifies relevant bankruptcy filings to pursue and bankruptcy actions to monitor, EPA should develop a formal process for monitoring bankruptcy proceedings and maintain data on bankruptcy filings reviewed, for example, by using an EPA intranet site that would be readily available to all relevant staff.
Finally, we recommend that EPA revise and update its guidance on participation in bankruptcy cases to more clearly identify some actions needed to better protect the government’s interest, such as steps to take to better ensure that the courts do not inappropriately discharge environmental liabilities and to specify that staff evaluating new bankruptcy filings should routinely determine whether EPA has any existing liens related to the filings. We provided EPA with a draft of this report for review and comment. In commenting on the draft, EPA generally agreed with many of the recommendations and said the agency will further evaluate its response to others. Appendix III contains the full text of the agency’s comments and our responses. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the Administrator, EPA; the Attorney General, Department of Justice; the Director, Office of Management and Budget; appropriate congressional committees; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. GAO was asked to (1) determine how many businesses with liability under federal law for environmental cleanups have declared bankruptcy and how many such cases the Justice Department has pursued in bankruptcy court; (2) identify key challenges that EPA faces in holding bankrupt and other financially distressed businesses responsible for their cleanup obligations; and (3) identify actions EPA could take, if any, to better ensure that bankrupt and other financially distressed businesses pay the costs of cleaning up their contaminated sites to the maximum extent practicable. To determine how many businesses with liability under federal law for hazardous waste cleanup costs have declared bankruptcy, we obtained bankruptcy case filing information from the Administrative Office of the U.S. Courts, which compiles data on the number of bankruptcy filings. Specifically, we obtained bankruptcy case filing information on the number of business bankruptcy filings under Chapters 7, 11, and 13 of the bankruptcy code for fiscal years 1998 through 2003. While the bankruptcy courts collect data on the number of businesses that file for bankruptcy each year and the Administrative Office of the U.S. Courts maintains these data in a national database, neither the courts, EPA, nor private providers of business data collect information on how many of these businesses have environmental liabilities. As a result, we were not able to report on the number of business bankruptcies with hazardous waste liabilities. To determine how many bankruptcy cases with liability under federal law the Justice Department has pursued in bankruptcy court on behalf of EPA, we spoke with officials from the Justice Department about the cases it received from EPA to determine which cases the department had pursued. We obtained data on the cases the Justice Department pursued on behalf of EPA where a proof of claim was filed for fiscal years 1998 through 2003. 
To identify key challenges that EPA faces in holding bankrupt and other financially distressed businesses responsible for their cleanup obligations and to identify actions EPA could take to better ensure that bankrupt and other financially distressed businesses pay the costs of cleaning up their hazardous waste sites to the maximum extent practicable, we reviewed federal statutes and policies associated with hazardous waste management and cleanup, the federal bankruptcy code and procedures, and academic and professional literature addressing the intersection of environmental and bankruptcy law, corporate limited liability, forms of business organization, and asset management. We also interviewed enforcement officials from EPA headquarters and its 10 regional offices about how the agency identifies, pursues, and recovers federal environmental liabilities from financially distressed or bankrupt businesses; the challenges EPA faces in these tasks; and the extent to which the agency has used available enforcement tools in this effort. Finally, we attended EPA-sponsored training sessions on RCRA closure and post-closure financial assurances and on bankruptcy-related issues for EPA attorneys in order to learn more about these challenges as well as the financial assurances and other enforcement tools and procedures available to EPA to address these challenges. We performed our work between September 2003 and July 2005 in accordance with generally accepted government auditing standards. Congress enacted the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA, or the Superfund law), calling for, among other things, EPA to develop financial assurance requirements for businesses handling hazardous substances to demonstrate their ability to pay for environmental cleanup costs (CERCLA Section 108(b)(1)). EPA and its contractors produced issue papers on such topics as gaps in existing financial assurance requirements, the definition of "facility," and data sources for classifying facilities. In May, EPA published a Federal Register notice announcing the beginning of a process of identifying facility classes and seeking public comment on several issues related to identifying risk-based classes of industries and facilities handling hazardous substances. In November, the Director of EPA's Office of Solid Waste informed the Assistant Administrator for the Office of Solid Waste and Emergency Response that work on the facility classification effort was being halted because of a lack of contract funding and staff availability. In December, the statutory deadline passed for EPA to identify classes of facilities for which regulations would first be developed. EPA revisited the Superfund financial assurance requirements as part of a broader review of the Superfund program spurred by the 1986 amendments to the Superfund law. According to EPA officials, the agency developed recommendations to the Assistant Administrator for the Office of Solid Waste and Emergency Response for developing the regulations. However, EPA never acted upon these recommendations. An EPA internal review of the Superfund program recommended that the Office of Solid Waste and Emergency Response study whether promulgating financial assurance regulations under CERCLA could reduce Superfund liabilities for facilities not covered under RCRA financial assurance requirements.
In response, EPA created a work group that is collecting and evaluating information on the types of facilities that have become Superfund National Priorities List sites as well as the industries represented among these sites. The following are GAO’s comments on the Environmental Protection Agency’s letter dated July 14, 2005. 1. We acknowledge and commend EPA for the actions the agency has initiated and for its plan to develop and implement other actions to improve compliance with the enforcement of financial assurance requirements, as EPA highlights in this and the next three paragraphs. The management challenges EPA faces in this regard are complex, but the potential benefits the agency can receive from effective financial assurances are substantial. We believe that if EPA implements our recommendations as part of its compliance and enforcement efforts focusing on financial assurance, EPA’s ability to hold liable parties responsible for their environmental cleanup obligations will be substantially improved. 2. Although we obtained information about region III’s review of financial assurances, we did not cite it in our report for several reasons. For example, unlike the other regional review, the region III review is not a compliance audit of financial assurances in Superfund settlements. As such, this review does not identify either Superfund settlement agreements that do not include financial assurances or the number of sites that do not have settlements in place. In addition, the reported financial impact on the government for the sites in region III’s review is preliminary and will remain so until the cleanups at the sites reviewed are completed because the financial assurances may not reflect the actual cleanup costs. For example, as discussed earlier, EPA often settles for less than the full cleanup cost as a result of equitable factors or ability-to-pay issues. In addition, the financial assurances may relate to work to identify the potential cleanup remedies and not to the cost of the cleanup, which may not yet be known. An example of a case included in the study that substantially understates the negative impact on the Superfund and the taxpayers is the Metachem/Standard Chlorine case discussed in our report. According to the official who conducted this review, while the review identifies a loss of $3.75 million associated with the Metachem site, EPA expects the government will have to spend about $100 million to clean up the site. 3. We disagree with EPA’s view that the report does not highlight the preventive aspect of financial assurance. In discussing the purpose of financial assurance, the draft and final reports point out that the fact that the parties responsible for the contamination are also responsible for cleaning it up encourages businesses to adopt responsible environmental practices. While EPA’s comments acknowledging the benefits of prospective financial assurance are limited to the RCRA closure and post-closure programs, we hope that the agency recognizes that these same preventive benefits can be more broadly attained by implementing the financial assurance requirements mandated by the Superfund law under section 108(b), which also provide for prospective financial assurances from businesses at risk for environmental contamination. 4. EPA’s comment suggests that the agency’s enforcement options are limited under its RCRA corrective action and Superfund programs because the agency has not developed financial assurance regulations for these programs. 
If this is the case, EPA should seek to correct this situation as it develops specific goals to address financial assurance as a national enforcement priority. 5. We have revised the final report to reflect that under EPA's current regulations for financial assurance for closure and post-closure, facility owners and operators may choose any of the permissible mechanisms, as long as the mechanism meets the regulatory standards. However, these regulations do not apply to the Superfund and RCRA corrective action programs, and therefore do not constrain EPA's authority to accept or decline a proffered financial assurance mechanism related to a cleanup under these programs. Similarly, with respect to insurance, the RCRA regulations EPA cites apply only to the closure and post-closure programs. Thus, for Superfund and RCRA corrective action, regulatory vigilance over the terms of the policies is still necessary. 6. The Superfund law requires EPA to develop financial assurance regulations for classes of facilities that pose a risk for environmental contamination, starting with those that pose the "highest level of risk of injury." This requirement is not, as EPA's comments suggest, limited to those that pose the highest risk for financial assurance failure. Our recommendation is for EPA to comply with the requirements in the Superfund statute. In its comments, EPA misstates the GAO recommendation by focusing on classes of facilities at risk for financial assurance failure. We are concerned that the agency is narrowly construing a broad statutory mandate that requires the agency to establish, as appropriate, prospective financial assurance requirements for entities at risk for environmental pollution. Further, EPA may miss the forest for the trees by focusing too narrowly on its ongoing study of NPL Superfund sites as a basis or rationale for implementing the section 108(b) mandate. The universe of businesses at risk for environmental contamination is much broader than Superfund NPL sites—for example, NPL sites represent about 10 percent of contaminated sites identified in the Superfund database. Finally, we did not conclude, as EPA asserts, that EPA should pursue section 108(b) rulemakings to the exclusion of other options. Nonetheless, we reject any assertion by EPA that implementing section 108(b) is optional. EPA is required to carry out the terms of the statute, and nothing in section 108(b) authorizes EPA to determine that such actions are unnecessary. By passing section 108(b), the Congress has determined that its provisions are necessary; should EPA believe otherwise, it must seek legislative relief. During the 25 years section 108(b) has been in effect, EPA has not sought amendment or repeal of the requirement. 7. EPA's comment that it will not consider whether to implement section 108(b) until certain evaluations are complete indicates that it views implementation of the statutory mandate under the Superfund law to establish financial assurance for classes of facilities at high risk for environmental contamination as optional. However, as noted above, it is not. We believe the efforts of the Environmental Financial Advisory Board (EFAB) and EPA under the 120-day study may provide important and useful information to aid EPA's implementation of section 108(b) and the agency's other financial assurance responsibilities. However, these efforts cannot provide a basis for the agency to simply decline to carry out the actions required under section 108(b). 8.
Our report provides some general information and issues about insurance as one of the approved financial assurance mechanisms. However, the scope of our work did not include an analysis of the types of insurance products currently available or of all of EPA's actions regarding insurance products. Instead, our work focused on issues and concerns about some insurance products identified by the EPA Inspector General and others. 9. In response to the questions posed by our requesters, we report the number of business bankruptcies and inform readers that information to identify how many of these bankruptcies involved environmental liabilities does not exist. We also report, as requested, on the number of bankruptcy cases that EPA and the Justice Department have pursued in bankruptcy court. EPA believes that this information in the first section of the report will lead readers to conclude that the agency is not willing to pursue more environmental bankruptcy cases. We disagree. For example, we report that without information on the number of bankruptcy cases involving environmental liabilities, EPA's efforts in identifying and pursuing relevant bankruptcies cannot be evaluated. Further, our report provides information on some of the reasons EPA may choose not to pursue bankruptcy cases in court—for example, many chapter 7 bankruptcies involve businesses with few or no assets. 10. Our report accurately reflects that EPA does not maintain information on bankruptcies it does not pursue. EPA's comments show that only one region maintains such data. Further, while EPA states that there have been discussions concerning collecting these data agencywide, the agency does not report a decision or plan to do so. 11. The fact that one region is documenting its decisions regarding bankruptcy cases does not demonstrate that the agency as a whole is taking steps to better track and document all bankruptcies of which it receives notice. We note that expanding the use agencywide of the close-out memo used by region III is the type of action/documentation we had in mind in recommending that EPA develop a formal process for monitoring bankruptcy proceedings and maintaining data on bankruptcy filings reviewed. In addition to the individual named above, Christine Fishkin, Assistant Director; Nancy Crothers; Richard Johnson; Les Mahagan; and Susan Swearingen made key contributions to this report. Also, Catherine Hurley; William O. Jenkins, Jr.; Jean McSween; Jamie Meuwissen; Mary Mohiyuddin; Jennifer Popovic; Aaron Shiffrin; and Gary Stofko made important contributions. Finally, Greg Carroll; Terrance N. Horner, Jr.; Mike Kaufman; Jerry Laudermilk; Karla Springer; and Joseph D. Thompson provided important assistance during final report review.
Superfund Program: Breakdown of Appropriations Data. GAO-04-787R. Washington, D.C.: May 14, 2004.
Superfund Program: Updated Appropriation and Expenditure Data. GAO-04-475R. Washington, D.C.: February 18, 2004.
Superfund Program: Current Status and Future Fiscal Challenges. GAO-03-850. Washington, D.C.: July 31, 2003.
Hazardous Materials: EPA's Cleanup of Asbestos in Libby, Montana, and Related Actions to Address Asbestos-Contaminated Materials. GAO-03-469. Washington, D.C.: April 14, 2003.
Superfund: Half the Sites Have All Cleanup Remedies in Place or Completed. GAO/RCED-99-245. Washington, D.C.: July 30, 1999.
Superfund: Progress Made by EPA and Other Federal Agencies to Resolve Program Management Issues. GAO/RCED-99-111. Washington, D.C.: April 29, 1999.
Hazardous Waste: Progress under the Corrective Action Program Is Limited, but New Initiatives May Accelerate Cleanups. GAO/RCED-98-3. Washington, D.C.: October 21, 1997.
Superfund: Duration of the Cleanup Process at Hazardous Waste Sites on the National Priorities List. GAO/RCED-97-238R. Washington, D.C.: September 24, 1997.
Superfund: Number of Potentially Responsible Parties at Superfund Sites Is Difficult to Determine. GAO/RCED-96-75. Washington, D.C.: March 27, 1996.
Superfund: EPA Has Opportunities to Increase Recoveries of Costs. GAO/RCED-94-196. Washington, D.C.: September 28, 1994.
Hazardous Waste: An Update on the Cost and Availability of Pollution Insurance. GAO/PEMD-94-16. Washington, D.C.: April 5, 1994.
Superfund: More Settlement Authority and EPA Controls Could Increase Cost Recovery. GAO/RCED-91-144. Washington, D.C.: July 18, 1991.
Hazardous Waste: Funding of Postclosure Liabilities Remains Uncertain. GAO/RCED-90-64. Washington, D.C.: June 1, 1990.
Superfund: A More Vigorous and Better Managed Enforcement Program Is Needed. GAO/RCED-90-22. Washington, D.C.: December 14, 1989.
Hazardous Waste: Environmental Safeguards Jeopardized When Facilities Cease Operating. GAO/RCED-86-77. Washington, D.C.: February 11, 1986.

The burden of cleaning up Superfund and other hazardous waste sites is increasingly shifting to taxpayers, particularly since businesses handling hazardous substances are no longer taxed under Superfund and the backlog of sites needing cleanup is growing. While key environmental laws rely on the "polluter pays" principle, the extent to which liable parties cease operations or restructure--such as through bankruptcy--can directly affect the cleanup costs faced by taxpayers. GAO was asked to (1) determine how many businesses with liability under federal law for environmental cleanups have declared bankruptcy, and how many such cases the government has pursued in bankruptcy court; (2) identify challenges the Environmental Protection Agency (EPA) faces in holding bankrupt and other financially distressed businesses responsible for their cleanup obligations; and (3) identify actions EPA could take to better ensure that such businesses pay for their cleanups. While more than 231,000 businesses operating in the United States filed for bankruptcy in fiscal years 1998 through 2003, the extent to which these businesses had environmental liabilities is not known because neither the federal government nor other sources collect this information. Information on bankrupt businesses with federal environmental liabilities is limited to data on the bankruptcy cases that the Justice Department has pursued in court on behalf of EPA. In that regard, the Justice Department initiated 136 such cases from 1998 through 2003. In seeking to hold liable businesses responsible for their environmental cleanup obligations, EPA faces significant challenges that often stem from the differing goals of environmental laws that hold polluting businesses liable for cleanup costs and other laws that, in some cases, allow businesses to limit or avoid responsibility for these liabilities. For example, businesses can legally organize or restructure in ways that can limit their future expenditures for cleanups by, for example, separating their assets from their liabilities using subsidiaries. While many such actions are legal, transferring assets to limit liability may violate federal law in some cases.
However, such cases are difficult for EPA to identify and for the Justice Department to prosecute successfully. In addition, bankruptcy law presents a number of challenges to EPA's ability to hold parties responsible for their cleanup obligations, challenges that are largely related to the law's intent to give debtors a fresh start. Moreover, by the time a business files for bankruptcy, it may have few, if any, assets remaining to distribute among creditors. The bankruptcy process also poses procedural and informational challenges for EPA. For example, EPA lacks timely, complete, and reliable information on the thousands of businesses filing for bankruptcy each year. Notwithstanding these challenges, EPA could better ensure that bankrupt and other financially distressed businesses meet their cleanup obligations by making greater use of existing authorities. For example, EPA has not implemented a 1980 statutory mandate under Superfund to require businesses handling hazardous substances to demonstrate their ability to pay for potential environmental cleanups--that is, to provide financial assurances. EPA has cited competing priorities and lack of funds as reasons for not implementing this mandate, but its inaction has exposed the Superfund program and U.S. taxpayers to potentially enormous cleanup costs at gold, lead, and other mining sites and at other industrial operations, such as metal-plating businesses. Also, EPA has done little to ensure that businesses comply with its existing financial assurance requirements in cleanup agreements and orders. Greater oversight and enforcement of financial assurances would better guarantee that cleanup funds will be available if needed. Also, greater use of other existing authorities--such as tax offsets, which allow the government to redirect tax refunds it owes businesses to agencies with claims against them--could produce additional payments for cleanups from financially distressed businesses. |
GPRAMA requires agencies to publicly report on how they are ensuring the accuracy and reliability of the performance information they use to measure progress towards agency priority goals (APGs) and performance goals. Priority Goals and Performance Measures: GPRAMA requires agencies to identify their highest priority performance goals as APGs and to set ambitious targets for these APGs that can be achieved within 2 years (31 U.S.C. § 1120(b)). Agencies must also identify performance measures to track the progress they are making on achieving their APGs or identify alternative ways of measuring progress, such as milestones for completing major deliverables for the APG (for more information on the measures and milestones the selected agencies identified, see appendix I). For each performance goal, including each APG, GPRAMA requires agencies to describe how they are ensuring the accuracy and reliability of the data used to measure progress, including (1) the means used to verify and validate measured values, (2) the sources for the data, (3) the level of accuracy required for the intended use of the data, (4) any limitations to the data at the required level of accuracy, and (5) how the agency will compensate for such limitations (if needed) to reach the required level of accuracy. GPRAMA uses the term "measured values" instead of performance data. We define verification as the assessment of completeness, accuracy, consistency, timeliness, and related quality control practices for performance information. We define validation as the assessment of whether performance information is appropriate for the performance measure. For more information, see GAO, Performance Plans: Selected Approaches for Verification and Validation of Agency Performance Information, GAO/GGD-99-139 (Washington, D.C.: July 30, 1999). GPRAMA requires agencies to provide information to the Office of Management and Budget (OMB) that addresses all five requirements for each of their APGs for publication on a website (Performance.gov). Agencies also must address all five requirements for performance goals in their performance plans and reports. GPRAMA states that Performance.gov shall consolidate information about each APG, thereby making this information readily accessible to the public, members of Congress, and congressional committees. GPRAMA makes OMB responsible for Performance.gov and requires agencies to provide OMB with quarterly updates on their APGs, including how they are ensuring the quality of performance information, for publication on Performance.gov. Further, GPRAMA continues transparency requirements set in the Government Performance and Results Act of 1993 (GPRA) that require agencies to publish annual performance plans and reports (see text box). While GPRAMA requires certain information to be reported in performance plans and certain other information to be reported in performance reports, the Reports Consolidation Act of 2000 authorizes agencies–with the concurrence of OMB–to consolidate performance plans and reports into a single publication that covers past actual and future planned performance.
GPRAMA's Requirements for Agencies' Annual Performance Plans and Reports
Performance plans should identify the planned level of performance for the current fiscal year and the next fiscal year, explain how the agency will ensure the accuracy and reliability of its performance information, identify the agency's priority goals (APGs), and be published every February, concurrent with the President's Budget.
Performance reports should summarize the actual level of performance agencies achieved during the previous five fiscal years, explain how the agency ensures the accuracy and reliability of its performance information, and be published every February.
Guidance and Information Sharing on Implementing GPRAMA: OMB provides guidance to agencies in Circular A-11 on how to implement GPRAMA. OMB updates A-11 annually, and the most recent update was published in June 2015.
GPRAMA also established in law an interagency council–the Performance Improvement Council (PIC)–chaired by OMB and composed of agency performance improvement officers (PIOs) to facilitate the exchange of useful practices to strengthen agency performance management, such as through cross-agency working groups. The selected 23 APGs we reviewed are intended to drive progress in important and complex areas, such as assisting veterans, addressing climate change, and protecting workers. Given the significance and complexity of many APGs, congressional and public understanding regarding how federal agencies are measuring and assessing progress toward these goals is important. GPRAMA requires agencies to publicly report on how they are ensuring the accuracy and reliability of the performance information they use to measure progress towards these APGs. However, our review found that overall, it would be challenging for Congress and the public to understand how the selected agencies are ensuring that the performance information they report for their 23 APGs is accurate and reliable–that is, suitable for making judgments about agency progress or decisions for different courses of action. We found limited information on Performance.gov on the quality of performance information used to assess progress on the six selected agencies' 23 APGs. While each agency has a section dedicated to its priority goals on Performance.gov, there is no place on the website that is set aside to discuss the quality of performance information for each APG. The six agencies we reviewed used various sections of Performance.gov to discuss some of the performance information quality requirements for APGs. But, none of the agencies addressed all five GPRAMA requirements for their individual APGs. Moreover, while we found hyperlinks from Performance.gov to the selected agencies' performance plans and reports, there was no explanation on Performance.gov of where to find performance information quality discussions in these plans and reports. We discussed our preliminary findings with OMB staff in January 2015. In response, OMB updated its A-11 guidance in June 2015 to direct agencies either to provide information for publication on Performance.gov on how they are ensuring the quality of performance information for their APGs, or to provide a hyperlink from Performance.gov to an appendix in their performance report that discusses the quality of their performance information. OMB staff stated that this information will likely not be available until agencies start reporting on the next set of APGs (for fiscal years 2016 and 2017). This is because OMB will need to update a template that agencies complete for their Performance.gov updates. OMB staff confirmed in July 2015 that they are still using the version of this template that they provided to us in January 2015, which has not yet been updated to reflect this change. Five of the six agencies' performance plans and reports we reviewed did not describe how they ensured the quality of performance information for their individual APGs. On the other hand, all six agencies did describe how they ensured the quality of their performance information overall. Of the 23 APGs in our sample, we could only find performance information quality discussions that addressed all five of the GPRAMA requirements for 3 APGs, which belonged to the Department of Homeland Security.
The U.S. Department of Agriculture (USDA) Provided Some Information for How Performance Information Quality Is Ensured for One of Three APGs, but Did Not Address All Requirements
USDA identified its APGs and briefly summarized the results achieved for each APG in its performance report for fiscal year 2014. Further, USDA's performance report provided data quality discussion for each of the performance measures presented in the 2014 report, which included the two performance measures USDA used to measure progress on its Reduce the Number of Foodborne Salmonella Illnesses APG. However, USDA did not address all of the GPRAMA requirements for this APG or its two other APGs, as shown by table 1. For the Reduce the Number of Foodborne Salmonella Illnesses APG, USDA addressed two of the five GPRAMA requirements. For example, USDA identified the Centers for Disease Control and Prevention (CDC) as the source of the data measuring the number of illnesses from products USDA's Food Safety and Inspection Service regulates. USDA also noted that CDC receives information from state and local health agencies concerning outbreaks of illnesses. USDA acknowledges that the quality of the data can vary by reporting agency, which is an example of identifying a potential limitation. While the fiscal year 2014 performance report did not explain how USDA and CDC are compensating for this limitation, USDA did provide a hyperlink in its performance report to a CDC web page that provided more detailed information about tracking and reporting of foodborne illnesses. Further, while USDA described a number of steps it is taking to reduce illnesses and detect contamination in food products, USDA did not explain to external audiences what level of accuracy it requires to make decisions related to this APG. Our prior work has identified improving oversight of food safety as a high-risk area, emphasizing the need to improve planning and collaboration among USDA and other federal food safety agencies. This makes it important for USDA to expand its performance information quality discussion and address all requirements for this APG. USDA's performance plans and report did not address any of the five GPRAMA requirements regarding the quality of the performance information for its two other APGs: Create New Economic Opportunities and Improve the Health of Our Nation's Soils. The lack of information related to the two other APGs makes it more challenging for Congress and the public to understand how USDA is ensuring the quality of performance information, including potential limitations. For example, USDA's Inspector General identified the need to develop effective performance measures as a management challenge facing the agency and raised concerns about the accuracy of some performance information. But, USDA's performance plans and report do not address this issue with regard to two of its APGs. We shared our analysis with officials in USDA's Office of Budget and Program Analysis, and they acknowledged their agency could improve its public reporting on the quality of its performance information for APGs. USDA's performance report for fiscal year 2014 contained an explanation of how the agency ensures the quality of its performance information overall, which is reproduced below (see text box). This helps external audiences understand that USDA's methodology for collecting performance information has been vetted by scientists and policymakers.
U.S. Department of Agriculture's Statement on Performance Information Quality–2014 Annual Performance Report
The data used by the Department to measure performance are collected using a standardized methodology. This methodology has been vetted by federally employed scientists and policymakers, and, ultimately, the Under Secretaries of the respective mission areas. All attest to the completeness, reliability, and quality of the data.
The Department of Defense (DOD) Highlighted Progress for All APGs, but Did Not Explain How Performance Information Quality Is Ensured
DOD included performance discussions for all of its APGs and stated whether the agency had met its interim or final APG targets in its performance reports for fiscal years 2013 and 2014; however, it did not address the performance information quality requirements for each APG, as shown in table 2. While DOD's fiscal year 2013 and 2014 performance reports do not describe agency-wide guidance for ensuring performance information quality, we found an explanation of DOD's guidance regarding performance information quality in DOD's "2014 Performance Plan Update." DOD officials said this document was intended to serve as the agency's performance plan for fiscal year 2015. As reflected in table 2, the document provided a description of DOD's APGs for fiscal years 2014 and 2015 and contained a brief statement addressing agency-wide data verification and validation practices. It states that, "at the beginning of the fiscal year, goal leaders provide action plans and verification and validation forms on each performance goal listed in the ." Officials in the Office of the Deputy Chief Management Officer said that DOD's fiscal year 2014 performance plan update constituted their agency's fiscal year 2015 performance plan, although this purpose is not clearly explained in the document. DOD officials told us their agency would publish its 2016 performance plan as part of its agency strategic plan, which was intended for publication by the end of summer 2015. The Secretary of Defense signed this plan, which covers fiscal years 2015-2018 and is dated July 31, 2015. Our review of the plan indicated that it described DOD's performance management process. However, it did not explain how DOD will address all of the performance information quality requirements for each of the agency's APGs. In addition, the plan did not make clear which sections of the plan were intended to address the requirement to publish a performance plan establishing performance goals for 2016. Some additional information related to the GPRAMA requirements, such as on verification and validation processes, appears in other DOD documents, such as DOD's Annual Energy Management Report: Fiscal Year 2014 (May 2015). Improving the public reporting of performance information quality is important because DOD's Reform the DOD Acquisition Process and DOD Financial Statement Audit Readiness APGs address areas that we have identified as high risk. For the acquisition APG, we reported in 2014 that DOD expects to invest $1.5 trillion in its portfolio of major defense acquisition programs, making it particularly important that DOD explains how it is ensuring the quality of performance information for this APG. Similarly, given that DOD is responsible for more than half of the federal government's discretionary spending, we reported that it is particularly important DOD has accurate, timely, and useful financial information.
The reliability of DOD's financial information and ability to maintain effective accountability for its resources will be increasingly important to the federal government's ability to make sound resource-allocation decisions. The Department of Homeland Security (DHS) Addressed GPRAMA Requirements in Explaining How Performance Information Quality Is Ensured for All APGs DHS presented information about performance information quality for all three of its APGs in its performance plans and reports, which included detailed discussion for 10 of the 14 performance measures used to measure progress on these APGs. Specifically, DHS published an appendix to its performance plans and reports with detailed performance information quality discussion for these measures. For each measure, DHS's appendix describes the related program, the scope of the data, the source and collection methodology for the data, and an assessment of data reliability. For example, for the number of convicted criminal aliens that Immigration and Customs Enforcement removes from the country for the Enforce and Administer Our Immigration Laws APG, DHS addresses the requirement for explaining verification and validation. It explains how headquarters staff looks for unusual patterns in data that field offices have entered into a database tracking removals. DHS also states it conducts additional checks by cross-referencing data on removals reported by detention facilities and field offices, and notes that a statistical tracking unit performs further checks. Related to the intended use of performance information, DHS explains for each performance measure discussed in this appendix how it uses the measure for decision making. For example, DHS explains that its measure of the number of convicted criminal aliens removed from the country reflects the "full impact" of its program activities in this area. This helps a reader understand that DHS will need a high level of accuracy to ensure that its programs are achieving its goals for this area. We found examples of DHS acknowledging potential limitations for some APG performance measures. For example, DHS acknowledged that the average number of days to process inquiries from individuals experiencing difficulties with travel screening for its Aviation Security APG does not include the time DHS is waiting for the traveler to submit all required documents. This helps a reader understand that the performance measure may not reflect the total number of days it takes to resolve issues impeding an individual from traveling. DHS addresses the final requirement–how the agency will compensate for limitations to reach the required level of accuracy, if needed–by stating that each APG performance measure discussed in the appendix is reliable. Further, DHS also provided more specific explanation of corrective actions for some performance measures. For the Ensure Resilience to Disasters APG, DHS acknowledges that there is some variation in how states and territories assess their capabilities for dealing with disasters. But, it also explains that federal officials provide technical assistance and review the submissions from state and territorial officials to ensure that they align with guidance the Federal Emergency Management Agency provides to states and territories for how to assess their capabilities. In addition, DHS's performance plans and reports explained how the agency ensures the quality of its performance information overall.
DHS states it has an agency-wide performance management framework, which includes a process for verifying and validating its performance information. For example, DHS explains that one of the steps it takes is to have an independent review team assess the completeness and reliability of its performance measurement data. By addressing the five GPRAMA requirements for its APGs, DHS's performance plans and reports helped external audiences better understand how it ensures the accuracy and reliability of performance information for the agency's highest priority performance goals. The Department of the Interior (Interior) Described How Performance Information Quality Is Ensured Overall, but Did Not Address GPRAMA Requirements for its APGs Interior's performance plan and report covering fiscal years 2014 through 2016 provided an overall explanation of how it verified and validated its performance information, which is reproduced in figure 1. Interior's plan and report states that it requires component agencies to have verification and validation processes, and referred to a more detailed document, "Data Verification and Validation Standards," on the website of Interior's Office of Policy, Management, and Budget. These standards provide direction to component agencies on how to verify and validate performance information and other aspects of performance information quality. For example, the standards explain that component agencies should document data sources, describe the accuracy limits of data, and identify data limitations. Interior shared these standards in June 2015 with other agencies participating in a Performance Improvement Council cross-agency working group on data quality. Interior's performance plans and reports did not explain how performance information quality was ensured for its individual APGs. As shown above, Interior's statement on verification and validation is written at a high level and does not explain the specific steps component agencies took to ensure that performance information for each APG was accurate and reliable. Interior's Deputy Performance Improvement Officer noted that the performance plans and reports discussed the agency's performance in mission areas that relate to the APGs, and thereby provided contextual information that would allow the public to understand the quality of performance information for these APGs. For example, the plans and reports provided information on performance targets and past performance for some of the performance measures related to APGs. However, the available contextual information in the performance plans and reports did not address all of the GPRAMA performance information quality requirements for each APG. For example, related to its Water Conservation APG, which aims to increase the available water supply in the western states, Interior identifies the number of people and farmers the Bureau of Reclamation delivers water to, and reports on the acre feet of water conservation capability enabled through Reclamation's programs. While this helps external audiences understand the importance of this APG and the related mission area, Interior does not explain how it is ensuring that it is accurately measuring the water conservation capability enabled through Reclamation's programs. Interior explains that operating regulations require that leases which produce high volumes of oil or natural gas, and leases that have a history of noncompliance, be inspected at least once a year.
These inspections help ensure that hydrocarbon production on federally managed lands is properly accounted for and results in accurate royalty payments to the public and Indian owners of such minerals. Also, this revenue is one of the federal government's largest nontax sources of revenue. The Department of Labor (Labor) Described the Overall Quality of its Performance Information, but Not How Quality Is Ensured for its Individual APGs Labor's performance reports contained limited discussion of performance information quality for its APGs, as table 5 shows. The reports referred readers to its Summary of Performance and Financial Information, which includes an attestation statement from the Secretary of Labor as to the reliability and completeness of the agency's performance information (we reproduce these statements in the text boxes below). These statements provide the reader with the Secretary's assurance of the quality of Labor's performance information. However, these statements do not describe what practices are in place to ensure that the agency is using accurate and reliable performance information to measure and report on progress for its individual APGs. Department of Labor's Data Completeness and Reliability Statement–Fiscal Year 2014 Annual Performance Report The Fiscal Year 2014 Summary of Performance and Financial Information includes an assessment by the Secretary of the reliability and completeness of DOL (Labor's) performance data reported under the GPRAMA. The Department satisfies this requirement with Agency Head-level attestations that the data do not contain significant limitations that can lead to inaccurate assessments of results. The Secretary of Labor's Data Quality Attestation Statement–Fiscal Year 2014 Summary of Performance and Financial Information Secretary's Attestation Statement I attest that the summarized financial and performance data included in this document as well as the data in the Agency Financial Report and the Annual Performance Report are complete and reliable in accordance with Federal requirements. When we shared our analysis with Labor officials, they said that they were not fully aware of the performance information quality requirements for APGs. However, they noted that their agency places considerable emphasis on ensuring the quality of its performance information, and on conducting program evaluations to assess the effectiveness of its programs. Labor officials referred us to other agency publications–The Department of Labor FY 2014-2018 Strategic Plan and The Department of Labor FY 2016 Congressional Budget Justification–for explanations to the public of the activities they said their agency was taking to ensure the quality of its performance information. We confirmed that these publications do describe the agency's research and evaluation agenda. For example, Labor states in its strategic plan that it is committed to improving the quality of performance information by conducting future evaluations to ensure the outcome data it reports are accurate. While Labor's strategic plan does not explain the extent to which these data quality studies will focus on issues related to its APGs, Labor does provide valuable information and important context on how it plans to ensure the quality of its performance information.
Nevertheless, GPRAMA's performance information quality requirements are important because our previous work on fragmentation, overlap, and duplication shows that Labor needs to report more transparently on program performance for its veterans' employment and training programs. This relates to one of Labor's APGs on improving employment outcomes for veterans. Specifically, we reported in 2012 that Labor provided Congress with an annual veterans' program report that provided certain performance information, such as the number of disabled and recently separated veterans who received intensive services. But, we found that Labor was not reporting these results relative to the performance goals it had set. We recommended in our prior work that Labor report both performance goals and associated performance outcomes for its veterans' employment and training programs. Labor agreed with our recommendation and has made some progress in addressing it. For example, Labor has reported on how the results achieved for the performance measure it uses to measure progress on one of its APGs (percent of veterans receiving intensive services served by Disabled Veterans Outreach Program specialists) compare to the targets it set. Labor provided this information for this performance measure in both its performance report and on Performance.gov. The National Aeronautics and Space Administration (NASA) Described Its Overall Approach for Ensuring Performance Information Quality, but Did Not Explain How Performance Information Quality Is Ensured for its APGs NASA's performance plans and reports characterize the agency as a performance-based organization committed to managing toward specific, measurable goals, and to using performance information to continually improve operations. As shown in table 6, these performance plans and reports explain how the agency ensures the quality of its performance information overall. For example, NASA stated in its performance plans and reports that it held internal reviews for its projects, determined technology readiness levels, and required mission directorates and mission support offices to submit evidence supporting all performance measure ratings. Further, NASA stated that it used external entities, such as scientific review committees and aeronautics technical evaluation bodies, to help it validate program performance. These statements provide valuable insight into how NASA measures its performance and uses evidence. However, NASA's performance plans and reports did not explain how the agency ensured the quality of performance information for individual APGs. NASA presented concise summaries of each APG, progress updates, and next steps. However, little explanation was provided to external audiences describing how NASA applied the approach it outlined for ensuring the overall quality of its performance information to individual APGs. In our discussions with NASA officials, they emphasized that they do collect information related to all of the GPRAMA requirements. To illustrate this, NASA officials demonstrated their internal Performance Measure Manager system to us in April 2015. NASA officials told us that the system functions as a warehouse for agency performance information and they upload information from the system to Performance.gov for quarterly APG updates and to help develop their performance plans and reports.
They showed us that this system collects information on the quality of performance information for a range of performance measures, including for APGs. For example, the internal database has a field for verification and validation materials for the James Webb Space Telescope APG, and identifies a data limitation for it. However, NASA does not publicly report all of the information the system collects on how it ensures the quality of performance information for its APGs. NASA officials expressed concern about how well the GPRAMA performance information quality requirements can be applied to their agency's performance reporting. NASA officials said that they use what GPRAMA and Office of Management and Budget Circular A-11 refer to as an alternative form of performance measurement that, among other things, allows an agency to use milestones for completing major deliverables for the APG instead of performance measures. They further explained that they use numerous milestones to measure progress on their APGs. They added that NASA only reports on key quarterly milestones in its performance plans and reports and on Performance.gov. Unlike other agencies, they noted they often do not have quantitative data sets for their performance information. However, the GPRAMA performance information quality requirements apply to all APGs, even if the agency is using milestones. As we noted in our 2015 high-risk update, NASA plans to invest billions of dollars in the coming years to explore space, understand Earth's environment, and conduct aeronautics research. We designated NASA's acquisition management as high risk in 1990 in view of NASA's history of persistent cost growth and schedule slippage in the majority of its major projects. Going forward, we noted in our February 2015 high-risk update that it will be critical for NASA to ensure adequate and ongoing assessments of risks related to two of its APGs for developing new systems for exploring deep space and the James Webb Space Telescope. However, as our review shows, NASA has not explained to external audiences how it is ensuring the quality of performance information for these APGs related to these high-risk areas. Without such information, it will be more difficult for Congress and the public to understand whether NASA is effectively measuring progress toward these APGs, and whether the billions of dollars being spent to accomplish these important efforts are being used effectively. In 2015, OMB and the Performance Improvement Council (PIC) established the Data Quality Cross-Agency Working Group. This group met for the first time in February 2015, and four other meetings were held in April, May, June, and July 2015. As of June 2015, PIC staff reported that a total of 12 agencies were participating, which is more than half of the agencies with APGs. Three of the six agencies we selected for review–the Departments of Defense and Homeland Security (DHS) and NASA–are participating. DHS and NASA officials told us that they have made or plan to make presentations at these meetings on their agencies' performance information quality processes. In addition, Interior's Deputy Performance Improvement Officer shared his agency's verification and validation standards with the group. An additional nine agencies–the Departments of Commerce, Education, Health and Human Services, Justice, Treasury, and Veterans Affairs, and the Environmental Protection Agency, Small Business Administration, and Social Security Administration–are also participating in the group.
According to meeting notes for the May 2015 meeting provided by OMB and PIC staff, the group had identified several goals: improve the reliability and quality of performance information and of the reporting process; set standards and develop consistency across agencies; and highlight good performance measures and accurate and appropriate performance information. The May 2015 meeting notes state the group’s end product will be to identify solutions agencies have used to solve a data quality problem. In June 2015, the PIC’s Executive Director explained that while the group is still working on defining this end product, it wants to develop a collection of useful and leading practices that can be shared with agency officials. The PIC’s Executive Director and her staff also noted that this end product could include providing recommendations to OMB on changes that could be made to the A-11 guidance on how to address GPRAMA’s performance information quality requirements. OMB staff also indicated to us that they would like to get input from the group on additional changes that could be made to A-11. PIC staff told us that the participating agencies met with OMB staff in July 2015 to discuss additional changes that should be made to A-11. For more than two decades, agencies have been required to publicly report on the quality of their performance information in annual performance plans. More recently, agencies have also been required to report on Performance.gov on how they will ensure the accuracy and reliability of the performance information used to measure progress on each of their highest priority performance goals, the APGs. However, insufficient progress has been made. While OMB for several years has directed agencies to discuss the quality of APG performance information in their annual performance plans and reports, the selected agencies’ plans and reports often did not. OMB recently changed its guidance to require agencies to provide this information for publication on Performance.gov. The next key step is to build upon this recent guidance to implement the change and make this important information readily accessible to the public and Congress. This overall lack of transparency means that members of Congress, citizens, journalists, and researchers seeking information about agency performance related to priority goals have to search in multiple places, and often end up finding no explanation of the quality of performance information for APGs. For agencies to maintain the confidence of Congress and the public that they are indeed achieving their priority goals for the challenging and complex results they seek to achieve, agencies will need to provide more transparent explanations of how they are ensuring the accuracy and reliability of performance information for their APGs. More broadly, our review shows that five agencies continue to provide limited information in their annual performance plans and reports concerning the quality of performance information for their APGs. The same is true for all six agencies on Performance.gov. In some cases, the needed context and information may be available within the agency for the agency’s use. However, this information is not consistently provided to external audiences. The Performance Improvement Council’s (PIC) Data Quality Cross-Agency Working Group provides a potential forum for agencies to collaborate and share information on this topic, and the group is defining its intended end product. 
Given the shortcomings our review identified at the majority of the six agencies reviewed, the working group could help agencies identify practices that will help them more clearly explain to Congress and the public how they are ensuring that the performance information for their highest priority performance goals is accurate and reliable. OMB could also work with this PIC working group to continue updating its guidance to agencies to ensure that this information is readily accessible on Performance.gov. To improve the public reporting about how agencies are ensuring the quality of performance information used to measure progress towards their priority goals, we recommend the following actions: The Secretaries of Agriculture, Defense, Homeland Security, Interior, and Labor, and the Administrator of NASA should more fully address GPRAMA requirements and OMB guidance by working with OMB to describe on Performance.gov how they are ensuring the quality of performance information used to measure progress towards their APGs. The Secretaries of Agriculture, Defense, Interior, and Labor, and the Administrator of NASA should more fully address GPRAMA requirements and OMB guidance by describing in their agencies’ annual performance plans and reports how they are ensuring the quality of performance information used to measure progress towards their APGs. To help participating agencies improve their public reporting, we recommend that the Director of OMB, working with the PIC Executive Director, should: Identify additional changes that need to be made in OMB’s guidance to agencies related to ensuring the quality of performance information for APGs on Performance.gov. Identify practices participating agencies can use to improve their public reporting in their performance plans and reports of how they are ensuring the quality of performance information used to measure progress towards APGs. We provided a draft of this report to the Director of OMB and the Secretaries of Agriculture, Defense, Homeland Security, Interior, and Labor, and the Administrator of NASA. The Department of the Interior and NASA concurred with the recommendations directed to them, and discussed specific actions they plan to take to address these recommendations. Interior and NASA’s written responses are reproduced in appendixes II and III. In its response, NASA also shared a concern about how we portrayed its high-risk reporting in the draft report. It stated that our draft report suggested that the select milestones identified as part of its performance reporting are the sole mechanisms NASA uses to assess risks and measure progress towards launching the James Webb Space Telescope, and developing new systems for human exploration of deep space. NASA further stated that to comply with reporting requirements related to APGs, it has opted to provide information on key quarterly milestones that the public can easily understand. To address NASA’s concern, we revised the report to recognize that NASA officials told us they use numerous milestones to measure progress on their APGs. NASA also said that it only reports on key quarterly milestones in its performance plans and reports, and on Performance.gov. NASA also provided technical clarifications, which we incorporated as appropriate. The Department of Homeland Security (DHS) also concurred with the recommendation directed to it. 
However, DHS stated that it has already taken action to implement our recommendation to work with OMB to describe on Performance.gov how DHS is ensuring the quality of performance information used to measure progress towards its APGs. Thus, DHS regards the recommendation as resolved and closed. DHS stated that on July 1, 2015, agency officials in its Office of Program Analysis and Evaluation provided OMB with several specific suggestions to consider as possible enhancements to the internal system that OMB uses to gather agency data for public posting on Performance.gov. This will allow agencies to include more comprehensive data quality information on this public website. DHS's efforts are an important step toward addressing our recommendation. However, as our review found, and DHS recognizes in its response letter, more will need to be done to make DHS's explanations of performance information quality for its APGs accessible to external audiences on Performance.gov. For example, in our report we noted that OMB's updated A-11 guidance in June 2015 gives agencies the option of providing a hyperlink from Performance.gov to an appendix in their performance reports containing their performance information quality discussion, which DHS could do. We will continue to monitor DHS's efforts to work with OMB to fully implement the recommendation. DHS's written comments are reproduced in appendix IV. The Department of Defense (DOD) partially concurred with the recommendations directed to it. DOD stated that it has ongoing actions to improve the quality of performance information, and to make better use of that information in management. However, DOD stated that it did not agree that making discussion of the process of managing the quality of performance information a part of either the agency strategic plan or annual reporting has any major management value. We disagree. First, GPRAMA does not require, nor did we recommend, that DOD provide information on the quality of performance information for its agency priority goals in its agency strategic plan. Second, GPRAMA does require that this information be provided in agency performance plans and reports and on Performance.gov. We continue to believe it is important for DOD to fully address these GPRAMA requirements because, as described in our report, two of DOD's APGs address areas we have identified as high risk. Also, DOD is responsible for more than half of the federal government's discretionary spending. DOD's written comments are reproduced in appendix V. The Departments of Agriculture (USDA) and Labor did not comment on the recommendations directed to them. However, they both discussed specific actions they plan to take to improve the quality of their publicly reported performance information for their agency priority goals. In comments relayed to us in an August 14, 2015, e-mail, the Associate Director of USDA's Office of Budget and Program Analysis, who is also the agency's Performance Improvement Officer (PIO), stated the agency would ensure that a description of the quality of performance information be added for each performance measure included in its APGs for fiscal years 2016 and 2017. He also stated that USDA will work with OMB to put this information on Performance.gov or in its annual performance plan and report with a reference to that information on Performance.gov. He further stated that a reference to this information would be provided in USDA's annual performance plan.
He also provided us with a technical clarification, which we incorporated. In its response, Labor raised a concern about statements in our draft report regarding information on several of its programs that serve veterans, which were drawn from our prior work. Labor stated that the report incorrectly asserts that it does not report on the number of veterans receiving intensive services relative to performance goals. Rather, Labor stated that it has used the Veterans’ Employment and Training Services measure for the percent of veterans being served by the Disabled Veterans’ Outreach Program as an APG for 4 years, and has included how results relate to performance goals. Additionally, Labor asserted that outcomes for the Veterans Workforce Investment Program have been included in the annual report to Congress in fiscal years 2013 and 2014. We revised the report to reflect Labor’s updated actions as appropriate. Labor’s written comments are reproduced in appendix VI. In an August 26, 2015, e-mail from OMB’s liaison to GAO, OMB did not comment on the recommendations, but provided technical clarifications, which we incorporated as appropriate. We are sending copies of this report to the Director of OMB and the heads of the agencies we reviewed as well as appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. J. Christopher Mihm, (202) 512-6806 or mihmj@gao.gov. In addition to the contact named above, Sarah E. Veale, Assistant Director, and Michael O’Neill, Analyst-in-Charge, supervised the development of this report. Virginia Chanley, Emily Christoff, Erik Kjeldgaard, and A.J. Stephens made significant contributions to all aspects of this report. Deirdre Duffy and Robert Robinson provided additional assistance. | Federal agencies have not always clearly and transparently explained to Congress and the public how they ensure the quality of their performance information. GPRAMA requires agencies to publicly explain how they ensure the accuracy and reliability of their performance information used to assess progress for their APGs. This is one of a series of GAO reports examining the implementation of GPRAMA, as required by the act. This report assesses how well selected agencies publicly reported on the quality of performance information used to measure progress on APGs. GAO selected six agencies–the Departments of Agriculture, Defense, Interior, and Labor, and NASA and DHS– based on GAO's 2013 federal managers survey on their agency's use of performance information. GAO reviewed information concerning these agencies' APGs published on Performance.gov and in their annual performance plans and reports. The six agencies GAO reviewed generally did not publicly report on how they ensured the accuracy and reliability of performance information used to measure progress on their highest priority performance goals, referred to as agency priority goals (APGs). 
The GPRA Modernization Act of 2010 (GPRAMA) requires agencies to identify the following when publicly reporting on their APGs: 1) how performance information was verified and validated; 2) data sources; 3) level of accuracy required for intended use; 4) any limitations at the required level of accuracy; and 5) how the agency will compensate for such limitations (if needed) to reach the required level of accuracy. GPRAMA requires agencies to provide this information to the Office of Management and Budget (OMB) for publication on Performance.gov. GPRAMA also directs agencies to provide this information for performance goals, which include APGs, in their annual performance plans and reports. While all six agencies described how they ensured the quality of their performance information overall, GAO found discussions about performance information quality addressing all five GPRAMA requirements in only the Department of Homeland Security's (DHS) performance plans and reports. OMB and the Performance Improvement Council (PIC)–a cross-agency council of agency performance improvement officers–established the Data Quality Cross-Agency Working Group in February 2015. The group has identified several goals, such as improving the reliability and quality of performance information, and could serve as a vehicle for disseminating good practices in public reporting on data quality. GAO recommends that all six of the agencies work with OMB to describe on Performance.gov how they are ensuring the quality of their APGs' performance information, and that all agencies, except for DHS, also describe this information in their annual performance plans and reports. GAO also recommends that OMB, working with the PIC, focus on ways the PIC's data quality working group can improve public reporting for APGs. OMB did not comment on the recommendations, but the six agencies generally concurred or identified actions they planned to take to implement them.
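As a hypothetical illustration of how the five GPRAMA requirements listed above can be tracked across an agency's priority goals, consider the short Python sketch below. The goal names and the judgments about which requirements are addressed are invented for the example and are not GAO's actual scoring; the sketch only shows the checklist-style tally that underlies tables of this kind.

```python
# Hypothetical illustration: tally which of the five GPRAMA performance information
# quality requirements an agency's public reporting addresses for each priority goal.
# Goal names and the judgments below are invented for this sketch.

REQUIREMENTS = [
    "verification and validation",
    "data sources",
    "level of accuracy required for intended use",
    "limitations at the required level of accuracy",
    "how limitations will be compensated for",
]

# Each goal maps to the set of requirements judged to be addressed in public reporting.
assessment = {
    "Example APG A": {"data sources", "verification and validation"},
    "Example APG B": set(),
    "Example APG C": set(REQUIREMENTS),
}

for goal, addressed in assessment.items():
    missing = [r for r in REQUIREMENTS if r not in addressed]
    print(f"{goal}: addressed {len(addressed)} of {len(REQUIREMENTS)} requirements")
    for r in missing:
        print(f"  not addressed: {r}")
```

Such a tally does not judge the quality of the underlying data; it only makes visible where an agency's public explanation is silent.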
To help protect against threats to federal systems, FISMA sets forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. Its framework creates a cycle of risk management activities necessary for an effective security program. It is also intended to provide a mechanism for improved oversight of federal agency information security programs. In order to ensure the implementation of this framework, FISMA assigns specific responsibilities to (1) OMB, to develop and oversee the implementation of policies, principles, standards, and guidelines on information security; to report, at least annually, on agency compliance with the act; and to approve or disapprove agency information security programs; (2) agency heads, to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of the agency; (3) agency heads and chief information officers, to develop, document, and implement an agencywide information security program, among other responsibilities; (4) inspectors general, to conduct annual independent evaluations of agency efforts to effectively implement information security; and (5) NIST, to provide standards and guidance to agencies on information security. FISMA also assigns responsibility to OMB for ensuring the operation of a federal information security incident center. The required functions of this center are performed by the Department of Homeland Security's (DHS) United States Computer Emergency Readiness Team (US-CERT), which was established to aggregate and disseminate cybersecurity information to improve warning and response to incidents, increase coordination of response information, reduce vulnerabilities, and enhance prevention and protection. In addition, the act requires each agency to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of its information security policies, procedures, practices, and compliance with requirements. FISMA also requires OMB to report annually to Congress by March 1. See appendix II for additional information on the responsibilities of each entity. Federal agencies' information and information systems remain at risk. This risk is illustrated in part by the rising numbers of incidents reported by federal agencies in fiscal year 2010. At the same time, weaknesses in their information security policies and practices compromised their efforts to protect against threats. Furthermore, our work and reviews by inspectors general highlight information security control deficiencies at agencies that expose information and information systems supporting federal operations and assets to elevated risk of unauthorized use, disclosure, modification, and disruption. Accordingly, we and agency inspectors general have made hundreds of recommendations to agencies in fiscal years 2010 and 2011 to address these security control deficiencies. Federal agencies have reported increasing numbers of security incidents that placed sensitive information at risk. When incidents occur, agencies are to notify the federal information security incident center—US-CERT.
Over the past 5 years, the number of incidents reported by federal agencies to US-CERT has increased from 5,503 incidents in fiscal year 2006 to 41,776 incidents in fiscal year 2010, an increase of over 650 percent (see fig. 1). Agencies also reported the following types of incidents and events based on US-CERT-defined categories: Unauthorized access: Gaining logical or physical access to a federal agency's network, system, application, data, or other resource without permission. Denial of service: Preventing or impairing the normal authorized functionality of networks, systems, or applications by exhausting resources. This activity includes being the victim of or participating in a denial of service attack. Malicious code: Installing malicious software (e.g., virus, worm, Trojan horse, or other code-based malicious entity) that infects an operating system or application. Agencies are not required to report malicious logic that has been successfully quarantined by antivirus software. Improper usage: Violating acceptable computing use policies. Scans/probes/attempted access: Accessing or identifying a federal agency computer, open ports, protocols, service, or any combination of these for later exploit. This activity does not directly result in a compromise or denial of service. Unconfirmed incidents under investigation: Investigating unconfirmed incidents that are potentially malicious or anomalous activity deemed by the reporting entity to warrant further review. According to DHS officials, these incidents include those that US-CERT detects through its intrusion detection system, supplemented by agency reports for investigation. As indicated in figure 2, the four most prevalent types of incidents and events reported to US-CERT during fiscal year 2010 were: (1) malicious code; (2) unconfirmed incidents under investigation; (3) improper usage; and (4) unauthorized access. Reported attacks and unintentional incidents involving federal systems and critical infrastructure systems demonstrate that a serious attack could be devastating. Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. The following examples, included to reflect incidents reported in 2010 and 2011, illustrate that a broad array of information and assets remain at risk. An employee at a federal financial institution downloaded unauthorized accounting source code to a bank hard drive which he had previously reported as stolen. The institution's internal security personnel are investigating and believe the bank employee may have shared the code with a student in another country. A well-known hacker group, according to an online news journal, was planning a cyber protest attack on a federal agency, using mobile phones and massive crowds of supporters as well as online supporters. This attack was intended to slow or stop traffic in and out of the agency and delay operations. A user on a department's network was tricked by a carefully crafted e-mail to go to a website on the pretense that he had won a new car in a lottery he supposedly entered by answering some simple questions about his pets. Later, he found that several credit cards had been opened in his name and large amounts of pet supplies had been ordered without his knowledge. A contractor working for a federal agency sent an unencrypted e-mail from his workstation to his personal e-mail account.
This was detected by a monitoring tool at the agency and an immediate investigation was initiated. Several agency personnel had their personal information sent in an unencrypted e-mail to an unauthorized account. Network security personnel at a federal institution noted that a large number of network probes on their system originated from an underground hacking group. The institution contacted US-CERT and asked that it contact the service provider to request that the IP address be blocked so that it could no longer probe the institution. A federal agency’s website was reportedly attacked by a hacker group. Initial analysis determined the hack took place via a web implementation of Java. The attackers have not completely taken down the web server; however, considerable peaks of traffic have been detected. Our audits have identified information security deficiencies in both financial and nonfinancial systems, including vulnerabilities in federal systems. We have made hundreds of recommendations to agencies in fiscal years 2010 and 2011 to address these security control deficiencies. However, most of these recommendations have not yet been fully implemented. The following examples, reported in 2010 and 2011, describe the risks we found at federal agencies, our recommendations, and the actions the agencies plan to take. In March 2011, we reported that the Internal Revenue Service had made progress in correcting previously reported information security weaknesses, but a significant number of them remained unresolved or unmitigated. For example, the agency did not sufficiently (1) restrict users’ access to databases to only the access needed to perform their jobs; (2) secure the system it uses to support and manage its computer access request, approval, and review processes; (3) update database software residing on servers that support its general ledger system; and (4) enable certain auditing features on databases supporting financial and tax processing systems. An underlying reason for these weaknesses was that the Internal Revenue Service had not yet fully implemented required components of its comprehensive information security program. As a result, financial and taxpayer information remain unnecessarily vulnerable to insider threats and at increased risk of unauthorized disclosure, modification, or destruction; financial data are at increased risk of errors that result in misstatement; and the agency’s management decisions may be based on unreliable or inaccurate financial information. We recommended that the Internal Revenue Service take 32 specific actions for correcting newly identified control weaknesses, and it agreed to develop a detailed corrective action plan that addresses them. In November 2010, we reported that the Federal Deposit Insurance Corporation did not sufficiently implement access and other controls intended to protect the confidentiality, integrity, and availability of its financial systems and information. For example, it did not always (1) sufficiently restrict user access to systems; (2) ensure strong system boundaries; (3) consistently enforce strong controls for identifying and authenticating users; (4) encrypt sensitive information; or (5) audit and monitor security-relevant events. In addition, the Federal Deposit Insurance Corporation did not have policies, procedures, and controls in place to ensure the appropriate segregation of incompatible duties, adequately manage the configuration of its financial information systems, and update contingency plans. 
An underlying reason for these weaknesses was that the corporation did not always fully implement several information security program activities, such as effectively developing, documenting, and implementing security policies. As a result, it faced an elevated risk of the misuse of federal assets, unauthorized modification or destruction of financial information, inappropriate disclosure of other sensitive information, and disruption of computer operations. Accordingly, we recommended that the corporation fully implement several key activities to enhance its information security program. The Federal Deposit Insurance Corporation generally agreed with our recommendations and stated that it planned to address the identified weaknesses. In October 2010, we reported that the National Archives and Records Administration had not effectively implemented information security controls to sufficiently protect the confidentiality, integrity, and availability of the information and systems that support its mission. For example, the agency did not always (1) protect the boundaries of its networks by ensuring that all incoming traffic was inspected by a firewall; (2) enforce strong policies for identifying and authenticating users by requiring the use of complex passwords; and (3) limit users’ access to systems to what was required for them to perform their official duties. The identified weaknesses existed, in part, because the National Archives and Records Administration had not fully implemented key elements of its information security program. As a result, sensitive information, such as records containing personally identifiable information, was at increased and unnecessary risk of unauthorized access, disclosure, modification, or loss. We recommended that it take 224 specific actions to implement elements of its security program and enhance access and other information security controls over its systems. The Archivist of the United States generally concurred with our recommendations, and agreed to provide semiannual updates on the agency’s progress to enhance access controls and address the identified weaknesses. In addition, reviews at the 24 major federal agencies continue to highlight deficiencies in their implementation of information security policies and procedures. In fiscal year 2010, in their performance and accountability reports and annual financial reports, 19 of 24 agencies indicated that inadequate information security controls were either material weaknesses or significant deficiencies (see fig. 3) for financial reporting purposes. Specifically, 8 agencies identified material weaknesses, increasing from 6 agencies in fiscal year 2009, while 11 reported significant deficiencies, decreasing from 15 agencies in fiscal year 2009. In fiscal year 2010 annual reports required under 31 U.S.C. § 3512 (commonly referred to as the Federal Managers’ Financial Integrity Act of 1982), 7 of the 24 agencies identified weaknesses in information security. In addition, 23 of 24 inspectors general cited information security as a “major management challenge” for their agency, reflecting an increase from fiscal year 2009, when 20 of 24 inspectors general cited information security as a challenge. 
Our, agency, and inspectors general assessments of information security controls during fiscal year 2010 revealed that most major federal agencies had weaknesses in each of the five major categories of information system controls: (1) access controls, which ensure that only authorized individuals can read, alter, or delete data; (2) configuration management controls, which provide assurance that only authorized software programs are implemented; (3) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (4) continuity of operations planning, which helps avoid significant disruptions in computer-dependent operations; and (5) agencywide information security programs, which provide a framework for ensuring that risks are understood and that effective controls are selected and implemented. All 24 agencies had vulnerabilities in access control, configuration management, and security management. Deficiencies in segregation of duties and contingency planning, while not reported for all of these agencies, were prevalent, as figure 4 demonstrates. Agencies use electronic and physical controls to limit, prevent, or detect inappropriate access to computer resources (data, equipment, and facilities), thereby protecting them from unauthorized use, modification, disclosure, and loss. Access controls involve the six critical elements described in table 1. All 24 major federal agencies had access control weaknesses during fiscal year 2010. For example, 18 agencies experienced problems with identifying and authenticating information system users, with at least 7 of these agencies allowing weak authentication practices that could increase vulnerability to unauthorized use of their information systems. Nineteen agencies had weaknesses in controls for authorizing access in such areas as management of inactive accounts and ensuring that only those with a legitimate need had access to sensitive accounts. In addition, 16 agencies did not adequately monitor networks for suspicious activities or report security incidents that had been detected. Without adequate access controls in place, agencies cannot ensure that their information resources are protected from intentional or unintentional harm. Configuration management controls ensure that only authorized and fully tested software is placed in operation, software and hardware are updated, information systems are monitored, patches are applied to these systems to protect against known vulnerabilities, and emergency changes are documented and approved. These controls, which limit and monitor access to powerful programs and sensitive files associated with computer operations, are important in providing reasonable assurance that access controls and the operations of systems and networks are not compromised. To protect against known vulnerabilities, effective procedures must be in place, appropriate software installed, and patches updated promptly. Up-to-date patch installation helps mitigate flaws in software code that could be exploited to cause significant damage and enable malicious individuals to read, modify, or delete sensitive information or disrupt operations. While the 24 major agencies experienced problems with implementing configuration management, no weaknesses were reported in one area: handling emergency changes to system and network configurations. Our and inspectors general assessments revealed weaknesses in other areas, however. 
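To illustrate the kind of automated check that supports the configuration management and patching practices described above, the following is a minimal sketch in Python. The baseline entries, host names, and version strings are hypothetical examples, not drawn from any agency's actual tooling, and the version comparison is deliberately simplified.

```python
# Minimal illustrative sketch: compare installed software on each host against an
# approved configuration baseline and flag items that fall below the required version.
# All package names, hosts, and version strings below are hypothetical.

approved_baseline = {
    "openssl": "1.0.1k",       # minimum approved version
    "java-runtime": "7u75",
    "web-server": "2.4.12",
}

installed = {
    "server-01": {"openssl": "1.0.1e", "java-runtime": "7u75", "web-server": "2.4.12"},
    "server-02": {"openssl": "1.0.1k", "java-runtime": "7u21", "web-server": "2.2.29"},
}

def meets_baseline(installed_version, required_version):
    """Crude string comparison, adequate only for this illustration."""
    return installed_version >= required_version

findings = []
for host, packages in installed.items():
    for name, required in approved_baseline.items():
        version = packages.get(name)
        if version is None or not meets_baseline(version, required):
            findings.append((host, name, version or "missing", required))

total_checks = len(installed) * len(approved_baseline)
print(f"Baseline checks passed: {total_checks - len(findings)} of {total_checks}")
for host, name, version, required in findings:
    print(f"  {host}: {name} is {version}; baseline requires {required} or later")
```

In practice, agencies rely on dedicated configuration and vulnerability management tools for this purpose; the point of the sketch is only to show how a documented, approved baseline makes deviations detectable and countable.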
Twenty-one agencies had problems with maintaining and adhering to configuration management policies, plans, and procedures, which could jeopardize their ability to manage their systems and networks effectively. Another area where many agencies experienced difficulty was the practice of maintaining current configuration information in a formal baseline. Nineteen agencies had only partially complied with their internal or with federal requirements for maintaining these baselines. In addition, 18 agencies had deficiencies in keeping software updated, such as not adequately managing patch installations. Without a consistent approach to testing, updating, and patching software, agencies increase their risk of exposing sensitive data to unauthorized and possibly undetected access. Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that one individual cannot independently control all key aspects of a computer-related operation and thereby take unauthorized actions or gain unauthorized access to assets or records. Key steps to achieving proper segregation are ensuring that incompatible duties are separated and employees understand their responsibilities, and controlling personnel activities through formal operating procedures, supervision, and review. We and agency inspectors general identified 17 agencies that did not adequately segregate duties. Of these agencies, 14 had difficulty ensuring that key duties and responsibilities for authorizing, processing, recording, or reviewing transactions were appropriately separated. For example, 1 agency granted conflicting access to critical resources in its mainframe environment, and another improperly allowed contractors access to security functions. At least 6 of the agencies that did not adequately segregate duties failed to maintain sufficient control over personnel procedures, supervision, and review. At 1 agency, there was no effective way to identify how many contractors had access to and privileges within the network, and at least 3 agencies allowed individuals to inappropriately use accounts with elevated privileges or assume conflicting roles. Without adequate segregation of duties, agencies increase the risk that erroneous or fraudulent actions will occur, improper program changes will be implemented, and computer resources will be damaged or destroyed. In the event of an act of nature, fire, accident, sabotage, or other disruption, an essential element in preparing for the loss of operational capabilities is having an up-to-date, detailed, and fully tested continuity of operations plan. This plan should cover all key functions, including assessing an agency’s information technology and identifying resources, minimizing potential damage and interruption, developing and documenting the plan, and testing it and making necessary adjustments. If continuity of operations controls are faulty, even relatively minor interruptions can result in lost or incorrectly processed data, which can lead to financial losses, expensive recovery efforts, and inaccurate or incomplete mission-critical information. Our and agency inspectors general fiscal year 2010 reports show that 22 federal agencies had shortcomings in their plans for continuity of operations. Developing and implementing a comprehensive plan presented difficulties for at least 13 agencies for varying reasons. 
For example, 1 agency did not include key elements in some contingency plans or testing reports, such as identification of alternate processing facilities, restoration procedures, and data-sensitivity handling procedures, and officials at another agency were confused about their responsibilities for contingency and disaster recovery planning for certain classified systems. Additionally, tests of existing plans proved to be inadequate for at least 11 agencies. Until agencies address identified weaknesses in their continuity of operations plans and tests of these plans, they may not be able to recover systems in a successful and timely manner when service disruptions occur. An underlying cause for information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented an agencywide information security program. An agencywide security program, as required by FISMA, provides a framework for assessing and managing risk, including developing and implementing security policies and procedures, conducting security awareness training, monitoring the adequacy of the entity’s computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. Without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, and improperly implemented; and controls may be inconsistently applied. Such conditions may lead to insufficient protection of sensitive or critical resources. Of the 24 major agencies, none had fully or effectively implemented an agencywide information security program. To illustrate, 18 had shortcomings in the documentation of their security management programs, which establishes the framework and activities for assessing risk, developing and implementing effective security procedures, and monitoring the effectiveness of these procedures. In another example, 18 agencies did not adequately implement remedial actions to correct known vulnerabilities. Until agencies fully resolve identified deficiencies in their agencywide information security programs, the federal government will continue to face significant challenges in protecting its information systems and networks. We continue to identify information security as a governmentwide high-risk issue in our biennial reports to Congress, most recently in February 2011. Full and effective implementation of agencywide information security programs is necessary to ensure that federal data and systems will be adequately safeguarded to prevent disruption, unauthorized use, disclosure, and modification. OMB, executive branch agencies, and NIST have taken actions intended to improve the implementation of their FISMA-related security requirements, but much work remains. Beginning in fiscal year 2009, OMB instituted the use of a new online tool for agencies to report their information security posture on a recurring basis and, in fiscal year 2010, provided them with new and revised metrics for reporting such information. However, not all the metrics used to measure performance included performance targets. While agencies reported performance using these new and revised metrics, inspectors general continued to identify weaknesses in the processes agencies used to implement the requirements. 
As previously discussed, FISMA requires OMB to develop and oversee the implementation of policies, standards, and guidelines on information security at executive branch agencies and to annually report on agency compliance with FISMA to Congress no later than March 1 of each year. In fulfilling these and other requirements, OMB has taken a number of actions intended to meet its FISMA responsibilities and improve federal information security. These include: Launching a new security reporting tool—CyberScope. In fiscal year 2010, OMB mandated that agencies use CyberScope for submitting their information security data to OMB. CyberScope is an interactive data collection tool that has the capability to receive data feeds on a recurring basis to assess the security posture of a federal agency’s information infrastructure. According to OMB, this tool will allow agencies to report security data on a more frequent basis. Beginning in 2011, agencies are required to report data on a monthly basis, rather than the previous quarterly basis. Developing new security metrics. In fiscal year 2010, OMB convened a joint task force that developed new security performance metrics that are intended to encourage agencies to focus on risk and improve information security. We previously recommended that OMB develop additional measures of effectiveness. According to OMB, the new security metrics are intended to provide “outcome-focused” metrics for federal agencies to assess the implementation of security capabilities, measure their effectiveness, and ascertain their impact on risk levels. The revised metrics included reporting on: Boundary protection—to report information on the status of agencies’ implementation of the Trusted Internet Connections initiative, such as the percentage of external connections or network capacity passing through a trusted Internet connection; or to report on agencies’ deployment of operational Einstein 2 sensors, such as the percentage of trusted Internet connections with operational Einstein 2 deployments. Remote access and telework—to report information on the methods allowed to remotely connect to agency network resources. Identity and access management—to report on the extent to which agencies have issued and implemented personal identity verification cards in accordance with Homeland Security Presidential Directive 12. Data protection—to report agencies’ use of encryption on portable computers, such as laptops. OMB has also acted to assign the operational aspects of several of its FISMA-related responsibilities to DHS. In July 2010, the Director of OMB and the Cybersecurity Coordinator issued a joint memorandum stating that DHS will exercise primary responsibility within the executive branch for the operational aspects of federal agency cybersecurity with respect to federal information systems that fall within the scope of FISMA. In carrying out this responsibility and the accompanying activities, DHS is to be subject to general OMB oversight in accordance with the provisions of FISMA.
According to the memorandum, DHS responsibilities include but are not limited to overseeing the governmentwide and agency-specific implementation of and reporting on cybersecurity policies and guidance; overseeing and assisting governmentwide and agency-specific efforts to provide adequate, risk-based, and cost-effective cybersecurity; overseeing the agencies’ compliance with FISMA and developing analyses for OMB to assist in the development of the FISMA annual report; overseeing the agencies’ cybersecurity operations and incident response and providing appropriate assistance; and reviewing the agencies’ cybersecurity programs annually. In fiscal year 2011, DHS, as part of implementing its new operational information security responsibilities, held meetings with chief information officers and chief information security officers from the 24 major federal agencies. According to DHS officials, the meetings were aimed at allowing agency officials to discuss specific challenges they faced in addressing threats and vulnerabilities and assisting DHS with determining the capabilities needed to address persistent issues. Additionally, DHS launched “CyberStat” review sessions in January 2011 with the purpose of ensuring accountability and assisting the agencies in driving progress with key strategic enterprise cybersecurity capabilities. Data used in CyberStat sessions are based on information provided by agencies through CyberScope. According to both OMB and DHS officials, as of July 2011, DHS had held CyberStat sessions with seven agencies, discussing various topics including continuous monitoring. In addition, OMB satisfied its FISMA requirement to report to the Congress no later than March 1, 2011, on agency compliance with FISMA. OMB transmitted its fiscal year 2010 report and highlighted achievements across the federal government that included, among others, a shift from periodic security reviews to automated mechanisms for continuously monitoring agency security controls, the use of NIST’s Risk Management Framework concepts, and the approval of the National Initiative for Cybersecurity Education, which is intended to improve cybersecurity education through the establishment of education and training programs. The report also references efforts taken by the Office of Personnel Management to develop a cybersecurity competency model and review human resource strategies to help hire and retain cybersecurity experts to meet existing and future federal workforce needs. We have ongoing work in the area of cybersecurity human capital workforce planning activities. For fiscal year 2010, OMB enhanced the FISMA reporting process. FISMA requires that OMB report on agencies’ compliance with the act’s requirements. Each year, OMB provides instructions to federal agencies and their inspectors general for preparing their FISMA reports and then summarizes the information provided by the agencies and their inspectors general in its report to Congress. In its annual information security reporting instructions to agencies and their inspectors general, OMB expanded the number and type of security control areas covered under the reporting process. For the first time, OMB required agencies to provide information on their use of automated tools to manage, for example, information technology configurations and vulnerabilities. In addition, agencies were to provide information with regard to, among other things, security awareness training, configuration management, and incident management.
We had previously recommended that OMB expand inspectors general reporting to address additional security program areas. Accordingly, for fiscal year 2010, OMB’s reporting instructions also identified additional areas for which inspectors general were to assess and report on agency performance; such areas included identity management and continuous monitoring. Even with these changes, continued improvements are needed. Specifically, as we previously reported, one attribute of a metric is that it should be meaningful. A meaningful metric should be clear, address organizational priorities, and have performance targets. OMB’s fiscal year 2010 reporting instructions included 31 metrics for chief information officers. While most chief information officer metrics were clearly defined and reflected agency priorities, all but one lacked performance targets that would allow agencies to track progress over time. For example, one of the measures asks agencies to provide the mean time for incident detection, remediation, and recovery. While this defined metric addresses an organizational priority, it does not provide a target or threshold to monitor progress over time. Inspectors general were also asked to comment on various program areas, but the measures provided do not specify performance targets for determining levels of effective implementation. To illustrate, inspectors general are asked to report whether their agency’s security authorization program includes “categorizes information systems” as an “attribute” of the program. However, there is no specific target or measure to determine whether this would mean that a specific portion of systems had been properly categorized (e.g., all or half), or just the systems in the inspectors general’s review. According to OMB officials, targets were not included since targets are set based on the Administration’s top cybersecurity priorities or by NIST standards and guidance. For example, in February 2011, OMB and DHS set several targets for implementing various Homeland Security Presidential Directive 12 requirements in their memorandum to federal agencies. While targets may be provided in various memorandums and guidance, agencies may still be unaware of the thresholds that are to be met as part of their annual report requirements. Further, without specific targets listed in annual reporting instructions and identified in annual FISMA reports, federal agencies and the Congress may not be able to properly gauge performance. While agencies reported on their information security programs using new and revised measures, they continued to have weaknesses in implementing security practices. In addition to categories used in fiscal year 2009 such as security awareness and specialized training, agencies also reported on their capability to automate the management of information system asset configurations and vulnerabilities. Inspectors general also reported agencies’ program performance using new measures for categories such as continuous monitoring, among others, and identified weaknesses in agencies’ programs, both in new categories and in those used in prior years. FISMA requires agencies to provide security awareness training to personnel, including contractors and other users of information systems that support agency operations and assets. This training should explain information security risks associated with their activities and their responsibilities in complying with agency policies and procedures designed to reduce these risks.
In addition, agencies are required to provide appropriate information security training to personnel who have significant security responsibilities. For fiscal year 2010, OMB required agencies to report, among other things, (1) the number of agency users with log-in privileges who had been given security awareness training annually and (2) the number of agency users with significant security responsibilities who had been given specialized, role-based, security training annually. In fiscal year 2010, the 24 major agencies reported that 92 percent of users with log-in privileges had been given annual security awareness training, and that 88 percent of users with significant security responsibilities had received specialized training. However, while most of the major agencies reported a high percentage of users receiving awareness training, the number of agencies reporting a high percentage of users receiving specialized training was about half that number (see fig. 5). Even with the high overall percentages reported for users receiving training, inspectors general continued to identify weaknesses in their agency’s training program. Specifically, inspectors general for 17 of 24 major agencies cited weaknesses in their agency’s training programs. For example, 5 inspectors general reported that less than 90 percent of employees with log-in privileges had attended security awareness training in the last year. In addition, 11 inspectors general reported that less than 90 percent of employees, contractors, and other users with significant security responsibilities had attended specialized training in the past year. Inspectors general for 11 agencies also reported that identification and tracking of those with significant security responsibilities were not adequate. As a result, these agencies have less assurance that users are aware of the information security risks and their responsibilities for reducing such risks. FISMA requires each agency to have policies and procedures that ensure compliance with minimally acceptable system configuration requirements, as determined by the agency. In fiscal year 2010 reporting, for the first time, OMB required agencies to provide an estimated number of IT assets where an automated capability provides visibility into system configuration information and vulnerabilities. In addition, inspectors general were also requested to report on their agency’s configuration management program. Agencies varied in automated capabilities for monitoring their IT configurations and vulnerabilities. Specifically, 2 agencies reported having an automated management system that allowed them to monitor the configurations for 90 to 100 percent of their assets; 8 reported being able to monitor configurations for 60 to 89 percent of their assets; and 14 reported being able to monitor less than 60 percent of their assets. Similarly, automated monitoring for vulnerabilities varied among agencies. Four agencies were able to monitor 90 to 100 percent of their assets for vulnerabilities; 10 reported being able to monitor 60 to 89 percent of their assets for vulnerabilities; and 10 reported being able to monitor less than 60 percent of their assets for vulnerabilities (see fig. 6). Persistent governmentwide weaknesses in information security controls threaten the confidentiality, integrity, and availability of the information and information systems supporting the operations and assets of federal agencies.
Inadequacies exist in access controls, which include identification and authentication, authorization, cryptography, audit and monitoring, boundary protection, and physical security. Weaknesses also exist in other controls such as configuration management, segregation of duties, and continuity of operations. These shortcomings leave federal agencies vulnerable to external as well as internal threats. As long as agencies have not fully and effectively implemented their information security programs, including addressing the hundreds of recommendations that we and inspectors general have made, federal systems will remain at increased risk of attack or compromise. The new reporting tool and metrics issued by OMB might improve the visibility of agencies’ future implementation of the act. The FISMA reporting process and new performance measures are intended to improve agencies’ information security programs, but the measures did not usually include performance targets. NIST, the inspectors general, and OMB have all taken actions toward fulfilling their FISMA requirements. However, deficiencies continued to be identified in agencies’ programs, such as training for personnel with significant responsibilities, remediation of security weaknesses, and timely resolution of incidents. Weaknesses were also identified in new OMB-defined program categories, such as identity management and continuous monitoring. As such, information that agencies reported may not accurately reflect their implementation of required information security policies and procedures. Until hundreds of recommendations made by us and inspectors general are implemented and program weaknesses are corrected, agencies will continue to face challenges in securing their information and information systems. We recommend that the Director of the Office of Management and Budget take the following action: Incorporate performance targets for metrics in annual FISMA reporting guidance to agencies and inspectors general. We provided a draft of this report to OMB and DHS for their review. We received e-mail comments from an OMB representative. In response to our recommendation, OMB stated that since, unlike in previous years, OMB and DHS now issue separate memoranda regarding FISMA reporting guidance, it is more appropriate for the performance targets to be included in DHS’s memorandum since that is where the metrics are listed. We agree that including the performance targets in the metrics issued by DHS would meet the intent of our recommendation. In written comments, reproduced in appendix III, DHS’s Director of the Departmental GAO/OIG Liaison Office noted that he was pleased with GAO’s acknowledgement of efforts made by DHS to improve the cybersecurity posture of federal agencies. DHS also provided technical comments, which we have incorporated into this report as appropriate. We also provided a draft of this report to the seven other agencies included in our review (the Departments of Health and Human Services, the Interior, Justice, and Veterans Affairs; the National Institute of Standards and Technology; the Office of Personnel Management; and the U.S. Agency for International Development). All seven responded that they did not have any comments. We are sending copies of this report to the Director of the Office of Management and Budget and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov.
If you have any questions regarding this report, please contact me at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. In accordance with the Federal Information Security Management Act of 2002 (FISMA) requirement that the Comptroller General report periodically to the Congress, our objectives were to evaluate (1) the adequacy and effectiveness of agencies’ information security policies and practices and (2) federal agencies’ implementation of FISMA requirements. To assess the adequacy and effectiveness of agencies’ information security policies and practices, we analyzed our related reports issued from July 2009 through March 2011. We also reviewed and analyzed the information security work and products of the Offices of Inspector General at the 24 major federal agencies covered by the Chief Financial Officers Act for fiscal years 2009 and 2010. Further, we reviewed and summarized weaknesses identified in our reports and those of inspectors general using the five major categories of information security general controls identified in our Federal Information System Controls Audit Manual: (1) access controls, (2) configuration management controls, (3) segregation of duties, (4) continuity of operations planning, and (5) agencywide information security programs. Further, we reviewed and analyzed data on information security in federal agencies’ performance and accountability and financial reports for fiscal year 2010. To assess the implementation of FISMA requirements, we reviewed and analyzed the provisions of the act and the FISMA data submissions for 24 major federal agencies for fiscal years 2009 and 2010. In addition, we reviewed the mandated annual FISMA reports from the Office of Management and Budget and the National Institute of Standards and Technology, as well as the Department of Homeland Security’s U.S. Computer Emergency Readiness Team report of security incidents for fiscal year 2010. We also examined the Office of Management and Budget’s reporting instructions and other guidance related to FISMA. To assess the reliability of the FISMA data, we selected 6 agencies to gain an understanding of the quality of processes in place to produce both chief information officer and inspectors general FISMA reports. To select these agencies, we sorted the 24 major agencies from highest to lowest using the total number of systems the agencies reported in fiscal year 2009; separated them into even categories of large, medium, and small agencies; then selected the median 2 agencies from each category. These agencies were: the United States Agency for International Development, the Department of the Interior, the Office of Personnel Management, the Department of Justice, the Department of Veterans Affairs, and the Department of Health and Human Services. We conducted interviews and performed limited testing with the inspectors general and agency officials from the selected agencies to determine the reliability of FISMA data submissions for 24 major federal agencies for fiscal years 2009 and 2010. We also accessed the CyberScope system to gain an understanding of the data, related internal controls, missing data, outliers, and obvious errors and reviewed supporting documentation that agencies provided to corroborate information provided in their responses. 
As appropriate, we interviewed officials from the Office of Management and Budget, the Department of Commerce’s National Institute of Standards and Technology, and the Department of Homeland Security. We did not evaluate the implementation of the Department of Homeland Security’s FISMA-related responsibilities assigned to it by the Office of Management and Budget. Based on this assessment, we determined that the data were sufficiently reliable for our work. We conducted this performance audit from September 2010 to October 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. FISMA states that the Director of the Office of Management and Budget (OMB) shall oversee agency information security policies and practices, including: developing and overseeing the implementation of policies, principles, standards, and guidelines on information security; requiring agencies to identify and provide information security protections commensurate with the risk and magnitude of the harm resulting from the unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of an agency, or information systems used or operated by an agency, or by a contractor of an agency, or other organization on behalf of an agency; overseeing agency compliance with FISMA; and reviewing, at least annually, and approving or disapproving agency information security programs. FISMA also requires OMB to report to Congress no later than March 1 of each year on agency compliance with the requirements of the act. FISMA requires each agency, including agencies with national security systems, to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.
Specifically, FISMA requires information security programs to include, among other things: periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate; security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the agency; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. In addition, agencies must produce an annually updated inventory of major information systems (including major national security systems) operated by the agency or under its control, which includes an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency. FISMA also requires each agency to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of its information security policies, procedures, practices, and compliance with requirements. In addition, agency heads are required to report annually the results of their independent evaluations to OMB, except to the extent that an evaluation pertains to a national security system; then only a summary and assessment of that portion of the evaluation needs to be reported to OMB. Under FISMA, the inspector general for each agency shall perform an independent annual evaluation of the agency’s information security program and practices. The evaluation should include testing of the effectiveness of information security policies, procedures, and practices of a representative subset of agency systems. In addition, the evaluation must include an assessment of the compliance with the act and any related information security policies, procedures, standards, and guidelines. For agencies without an inspector general, evaluations of non-national security systems must be performed by an independent external auditor. Evaluations related to national security systems are to be performed by an entity designated by the agency head.
Under FISMA, the National Institute of Standards and Technology (NIST) is tasked with developing, for systems other than for national security, standards and guidelines that must include, at a minimum: (1) standards to be used by all agencies to categorize all their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. NIST must also develop a definition of and guidelines for detection and handling of information security incidents. The law also assigns other information security functions to NIST including: providing technical assistance to agencies on elements such as compliance with the standards and guidelines, and the detection and handling of information security incidents; evaluating private-sector information security policies and practices and commercially available information technologies to assess potential application by agencies; evaluating security policies and practices developed for national security systems to assess their potential application by agencies; and conducting research, as needed, to determine the nature and extent of information security vulnerabilities and techniques for providing cost-effective information security. In addition, FISMA requires NIST to prepare an annual report on activities undertaken during the previous year, and planned for the coming year, to carry out responsibilities under the act. In addition to the individual named above, Anjalique Lawrence (Assistant Director), Larry Crosland, Season Dietrich, Jennifer Franks, Nancy Glover, Min Hyun, Alina J. Johnson, Mary Marshall, Lee McCracken, Minette Richardson, and Jayne Wilson made key contributions to this report. | For many years, GAO has reported that weaknesses in information security can lead to serious consequences--such as intrusions by malicious individuals, compromised networks, and the theft of sensitive information including personally identifiable information--and has identified information security as a governmentwide high-risk area. The Federal Information Security Management Act of 2002 (FISMA) established information security program, evaluation, and annual reporting requirements for federal agencies. The act requires the Office of Management and Budget (OMB) to oversee and report to Congress on agency information security policies and practices, including agencies' compliance with FISMA. FISMA also requires that GAO periodically report to Congress on (1) the adequacy and effectiveness of agencies' information security policies and practices and (2) agencies' implementation of FISMA requirements. To do this, GAO analyzed information security-related reports and data from 24 major federal agencies, their inspectors general, OMB, and GAO. Weaknesses in information security policies and practices at 24 major federal agencies continue to place the confidentiality, integrity, and availability of sensitive information and information systems at risk. Consistent with this risk, reports of security incidents from federal agencies are on the rise, increasing over 650 percent over the past 5 years. Each of the 24 agencies reviewed had weaknesses in information security controls. 
An underlying reason for these weaknesses is that agencies have not fully implemented their information security programs. As a result, they have limited assurance that controls are in place and operating as intended to protect their information resources, thereby leaving them vulnerable to attack or compromise. In reports for fiscal years 2010 and 2011, GAO and agency inspectors general have made hundreds of recommendations to agencies for actions necessary to resolve control deficiencies and information security program shortfalls. Agencies generally agreed with most of GAO's recommendations and indicated that they would implement them. OMB, agencies, and the National Institute of Standards and Technology took actions intended to improve the implementation of security requirements, but more work is necessary. Beginning in fiscal year 2009, OMB provided agencies with a new online tool to report their information security postures and, in fiscal year 2010, instituted the use of new and revised metrics. Nevertheless, OMB's guidance for those metrics did not always provide performance targets for measuring improvement. In addition, weaknesses were identified in the processes agencies used to implement requirements. Specifically, agencies did not always ensure (1) personnel with significant responsibilities received training; (2) security controls were monitored continuously; (3) weaknesses were remediated effectively; and (4) incidents were resolved in a timely manner, among other areas. Until hundreds of recommendations are implemented and program weaknesses are corrected, agencies will continue to face challenges in securing their information and information systems. GAO is recommending that the Director of OMB provide performance targets for metrics included in OMB's annual FISMA reporting instructions to agencies and inspectors general. OMB stated it was more appropriate for those targets to be included in the performance metrics that are now issued separately by the Department of Homeland Security. GAO agrees that this meets the intent of its recommendation. |
Aerospace is a nonprofit mutual benefit corporation that provides scientific and technical support, principally general systems engineering and integration services, for the Air Force and other government agencies. Aerospace is headquartered in El Segundo, California, and has offices throughout the United States. The corporation, established in 1960, is governed by a 19-member Board of Trustees in accordance with its articles of incorporation and bylaws. FFRDCs are funded solely or substantially by federal agencies to meet special long-term research or development needs that cannot be met as effectively by existing in-house or contracting resources. One federal agency serves as the primary sponsor and signs an agreement specifying the purpose, terms, and other provisions for the FFRDC’s existence. Agreement terms cannot exceed 5 years but can be extended after a review of the continued use and need for the FFRDC. Federal regulations regarding FFRDC policy encourage long-term relationships between the federal government and FFRDCs to provide the continuity that will attract high-quality personnel. Aerospace’s primary sponsor is the Assistant Secretary of the Air Force for Acquisition. The Air Force Space and Missile Systems Center (SMC), located adjacent to Aerospace in El Segundo, has day-to-day management responsibility. SMC negotiates annual cost plus fixed-fee contracts with Aerospace. The DOD funding ceiling for Aerospace in fiscal year 1994 was $365.8 million. Aerospace employed 3,335 personnel at the end of fiscal year 1994 and had an annual payroll of about $208 million. Aerospace’s typical practice in establishing compensation levels is to recommend an annual salary adjustment to the Board of Trustees for final approval. According to company policy, such recommendations are to be based on an assessment of competitive salary positions, increases and rate structure adjustments at other aerospace industry firms, and other economic considerations. Compensation to Aerospace employees is primarily paid from government contracts, which represent over 99 percent of the company’s total business revenue. A small portion is paid out of government contract fees, nongovernment contracts and fees, interest income, and other sources. Aerospace compensation is reviewed by the Air Force for reasonableness during its annual contract negotiations. The Air Force routinely requests that DCAA review Aerospace’s proposed compensation costs and uses its recommendations during contract negotiations. Aerospace corporate officers, in addition to their annual salary, receive the standard benefit package that is available to all employees and several additional benefits that are available only to them. Standard benefits include social security contributions, a retirement plan, medical insurance, dental insurance, long-term disability insurance, and life insurance. Additional corporate officer benefits are a supplemental corporate officers’ retirement plan; personal use of a company automobile; airline upgrade coupons; and, in the case of two officers, a home security system. Table 1 summarizes the total fiscal year 1994 compensation provided to Aerospace’s 12 corporate officers and 20 senior managers based on actual benefits paid and salaries as of September 30, 1994. (See app. I for a further breakdown.) 
From September 1991 to September 1994, the average annual salary for all Aerospace executives (corporate officers and senior management personnel) increased by 23 percent, from about $125,000 to about $153,300. The average salary for corporate officers increased from about $135,100 to about $176,400, or 31 percent, and the average salary for senior managers increased from about $115,000 to about $139,500, or 21 percent. During those 3 years, the total cost of salaries for Aerospace executives increased by 78 percent, from about $2.75 million to about $4.91 million, primarily due to salary increases of up to 29 percent for individual executives during 1992 and a 45-percent increase in the number of executives from 22 to 32. Salaries included paid absences, such as vacations, holidays, and sick leave. Table 2 shows the number and total salaries of Aerospace executives as of September 1991 and September 1994 and the percent increase for both. Aerospace increased the average salary of its executives by about 18 percent in 1992, from about $132,900 to about $156,600. Most of this increase occurred by implementing a special, one-time increase in June 1992 that averaged 13 percent and by giving a merit pay increase in December 1992. (See app. II for more details.) Aerospace justified the June 1992 pay increase based on the need to bring its salaries more in line with industry salaries and resolve a pay compression problem that had developed between technical staff managers and their subordinates. Although Aerospace notified the Air Force 3 weeks before implementing the increase, the Air Force expressed concern that Aerospace did not present the salary increase and supporting documentation to the government in time to allow the Air Force to review the increase and determine its reasonableness. According to the Air Force, the salary increase represented a major change to Aerospace’s compensation package and the process used by Aerospace was inconsistent with SMC’s environment of trust and teamwork. Even though the Air Force allowed the salary increase, it requested a DCAA audit of Aerospace’s compensation and warned Aerospace that the government would request full reimbursement of any costs determined to be unreasonable. Aerospace told us that it believes that SMC’s environment of trust and teamwork has continued throughout this period and that it notified the Air Force immediately after its Board of Trustees approved the salary adjustment, which was based on a sound business position and the best information available at the time. Aerospace also noted that the FAR does not mandate prior contracting officer review; it only mandates that there will be no presumption of allowability when a contractor introduces major revisions of existing compensation plans and has not notified the contracting officer either before implementation or within a reasonable period after implementation. Aerospace maintained that the salary increase did not represent a major revision to its existing compensation plan. Further, Aerospace advised us that the salary adjustment occurred during fiscal years 1992-93, when it was confidently looking toward an increased budget and workload. Even though an unanticipated downturn in Aerospace employment occurred, the increase only restored salaries to market levels, in Aerospace’s view. 
However, Aerospace records provided to us showed that, before the June 1992 salary increase, Aerospace reduced its employment by 423 (272 technical staff and 151 nontechnical staff) through a reduction in force in November 1990 and a retirement incentive program in November 1991. No merit salary increases were given in December 1993, but 13 executives received additional salary increases since December 1992 through 13 promotions. From September 1991 to September 1994, Aerospace increased the number of its executives by 45 percent, from 22 to 32. During the same period, Aerospace nonexecutive employment decreased by about 17 percent, from 3,973 to 3,303. As a result, the ratio of executives to nonexecutive employment decreased from 1 per 181 employees to 1 per 103 employees. Aerospace did not, and is not required by the past or current contracts to, obtain Air Force approval for changing the number of executives. Table 3 compares the number of executives and the total number of employees since September 1991. Aerospace gave us many reasons for increasing the number of executives, including satisfying customer requirements and customer reorganizations and its continuing efforts to improve customer support. For example, when the SMC chief engineer position was expanded to emphasize horizontal engineering and integrated product teams, Aerospace said it added a corporate chief engineer to interface with SMC’s chief engineer. Also, it said that a general manager position was created in its Colorado division to improve support to the U.S. Space Command and the Air Force Space Command and consolidate seven different Aerospace organizational units at that division. Aerospace cited other factors, such as more robust succession planning and creating a senior manager position to better distinguish FFRDC and non-FFRDC activities, in response to recent congressional focus on FFRDC activities. It also concluded that increasing the number of senior managers would increase the leverage of the corporate officers. Aerospace noted that, even though the number of executives increased, the total number of managers decreased and the cost per member of the technical staff decreased. Aerospace further concluded that the pressure to downsize programs required adding some higher level managers with a broader perspective to support the Air Force and that the total number of employees decreased because of funding ceilings, not workload. In 1993, Aerospace paid two executives hiring bonuses of $30,000 each. Aerospace informed us that the hiring bonuses were needed, over and above an initial annual salary of $155,000, to hire these two individuals and that the special circumstances of each offer were reviewed and approved by Aerospace’s Board of Trustees. Aerospace initially reported these two hiring bonuses as government-reimbursable costs in June and October 1993. Subsequently, Aerospace reclassified these costs as nonreimbursable expenses in accordance with the FAR in July 1994 and December 1993, respectively. Aerospace commented that these were one-time bonuses that were paid in only two special cases for employees that had successfully discharged important responsibilities. After the Air Force’s request in June 1992, DCAA initiated a review of Aerospace’s compensation. On December 9, 1993, DCAA issued its report, which was subsequently revised three times. 
DCAA compared Aerospace positions with comparable compensation market survey data and used FAR criteria to initially conclude that Aerospace had provided $616,846 and $4,092,954 in unreasonable compensation for fiscal years 1992 and 1993, respectively. The FAR criteria call for general conformity with compensation practices of other firms of the same size, in the same industry, and in the same geographic area that are engaged in predominantly nongovernment business. Aerospace objected to the comparable compensation market survey DCAA used because it included a number of industries and corporations that Aerospace judged had no comparability to the technical education and experience of the FFRDC staff. Aerospace’s objection was that the FAR provides, in part, that the relevant fact is the general conformity with the compensation practice of other firms of the same size, industry, and geographic area. Aerospace noted that its own compensation survey included companies with which it competes for scientific and engineering talent. Aerospace also objected to the market compensation survey data used by DCAA because it did not include bonuses and other monetary compensation of the comparison group. Since Aerospace officers do not receive performance bonuses, Aerospace informed DCAA in December 1993 that all remuneration must be used for a valid comparison. DCAA agreed and, as a result of using additional data, issued a revised report in January 1994, which no longer questioned the reasonableness of corporate officers’ salaries and reduced the compensation costs considered unreasonable for other employees. DCAA made further revisions to its report in February and March 1994 to (1) use more current compensation market survey data; (2) challenge, rather than classify as unreasonable, corporate officer fringe benefit costs because Aerospace had not performed a fringe benefit market survey to justify the costs; and (3) adjust the amount of unreasonable compensation to include only that portion of fringe benefits costs that were determined based on a percentage of base salaries. DCAA’s fourth and final memorandum reduced the costs classified as unreasonable compensation to $306,809 and $1,788,612 for fiscal years 1992 and 1993, respectively. DCAA informed us that these revisions were done after consideration of additional, more current information. In addition, the final report also challenged $2,124,291 for fiscal year 1993 due to the lack of adequate supporting documentation. DCAA’s final memorandum also stated that Aerospace should have provided the government an opportunity to review the reasonableness of the June 1992 increase. DCAA concluded that it was unreasonable for Aerospace to increase salaries by a significant percentage at a time when other industries were implementing cost-saving measures and planning smaller salary increase budgets in response to DOD downsizing and other economic conditions that have resulted in major cutbacks of employees. Appendix III summarizes the four DCAA products. Aerospace objected to each of DCAA’s products, including the final one. It concluded that, despite some improvements, DCAA still used inappropriate data and reached erroneous conclusions. Aerospace also said that DCAA’s statements were unsupported opinions and that its actions to redress the then-existing salary situation were entirely reasonable. In addition, Aerospace stated that it had complied with FAR requirements by providing the government with notice of the salary increase before implementation.
After the DCAA compensation audit reports and subsequent fiscal year 1994 contract negotiations between the Air Force and Aerospace, additional provisions were placed in the fiscal year 1994 contract with Aerospace for determining reasonable technical staff compensation costs. First, the Air Force and Aerospace agreed that about $1.4 million of Aerospace’s billings would not be paid until the Air Force determined the reasonableness of the cost of Aerospace’s supervisory and nonsupervisory technical staff salaries. To assist the Air Force in making this determination, Aerospace was to provide current and accurate job descriptions and use a compensation market survey agreed to by the government. Second, Aerospace was to commission an independent survey to establish a reasonable executive fringe benefit level. Third, to resolve the notification issue, Aerospace is to notify the contracting officer at least 60 days before announcing any major salary adjustment that was not planned or included in the estimated contract cost. As of December 1994, Aerospace and government contracting representatives were in the process of implementing these contractual provisions and clarifying the computation of Aerospace executive fringe benefits. According to the Air Force, the compensation market survey has been completed and the results are being reviewed. DOD’s FFRDCs are privately operated contractors of the United States, and the salaries of officers and employees have not generally been subject to federal government pay scales. However, the Congress has at times restricted the use of DOD appropriations to pay compensation of FFRDC officers or employees over certain levels and has imposed notice requirements concerning certain payments. In the Fiscal Year 1995 DOD Appropriations Act, the Congress placed a limit on defense FFRDC compensation after July 1, 1995. The act states that no employee or executive officer of a defense FFRDC can be compensated from DOD appropriations at a rate exceeding Executive Schedule Level I. The act’s legislative history indicates that the July 1, 1995, date was selected to allow individuals affected by the compensation limitation to adjust to its impact. As of September 30, 1994, there were 16 Aerospace executives with annual salaries of more than $148,400, the current Executive Schedule Level I salary amount. The National Defense Authorization Act for Fiscal Year 1995 requires, in part, that DOD funds may not be paid to an FFRDC unless it enters into an agreement with DOD that no officer or employee who is compensated at an annual rate that exceeds Executive Schedule Level I will be compensated in fiscal year 1995 at a higher rate than in fiscal year 1994 and that no such officer or employee will be paid a bonus or provided any other financial incentive in fiscal year 1995. This act also requires the DOD Inspector General to review compensation paid by FFRDCs to all officers and employees who are paid at a rate exceeding the Executive Schedule Level I rate. 
According to the act, the Inspector General is to (1) assess the validity of the data submitted by FFRDCs, justifying salaries that exceed the Executive Schedule Level I rate; (2) compare the compensation paid to those individuals exceeding that rate with the compensation of similar technical and professional staff from profit and nonprofit organizations that must compete for defense work and with government officials of comparable expertise and responsibility; and (3) examine other appropriate forms of nonsalary compensation, such as bonuses and retirement plans. The results of the Inspector General’s review are to be reported to the Senate and House Committees on Armed Services no later than May 1, 1995. We are also reviewing compensation at FFRDCs sponsored by DOD, as required by the Fiscal Year 1992 Defense Appropriations Conference Report. This review will collect data on compensation for selected professional, technical, and managerial employees, not just the highest paid executives, as discussed in this report. To determine the compensation Aerospace provided corporate officers and senior managers, we reviewed Aerospace personnel and payroll records, Board of Trustees’ minutes and resolutions, contract documents, accounting records related to executive benefit costs, and policies and procedures that relate to Aerospace’s compensation program. We also reviewed DCAA compensation audit reports, supporting workpapers, and Aerospace’s responses to the audit reports. In addition, we met with Aerospace’s compensation and benefits officials, Air Force program and contract administration officials responsible for overseeing the work at Aerospace, and cognizant DCAA officials. We conducted our work from April to December 1994 in accordance with generally accepted government auditing standards. As agreed with your office, we did not obtain written agency comments on a draft of this report. However, we discussed our results with officials from DOD and Aerospace and included their comments where appropriate. We are sending copies of this report to the Secretary of Defense; the Director, Office of Management and Budget; the Administrator, Office of Federal Procurement Policy; and other interested congressional committees. Copies will also be available to others on request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. Odi Cuero, Benjamin H. Mannen, and Ambrose A. McGraw made key contributions to this report.
| Pursuant to a congressional request, GAO reviewed executive compensation at The Aerospace Corporation, which operates an Air Force-sponsored federally funded research and development center (FFRDC). GAO found that: (1) as of September 1994, Aerospace employed 32 senior management personnel (referred to as Aerospace executives), 12 of which were corporate officers; (2) the officers' total annual compensation averaged about $240,200 and their annual salary averaged about $176,400; (3) corporate officers' benefits included a retirement plan that is not available to senior management personnel or other employees; (4) the total annual compensation for the other 20 Aerospace executives that were not corporate officers averaged about $154,300 and their annual salary averaged about $139,500; (5) from September 1991 to September 1994, total salary cost for all Aerospace executives increased by 78 percent, from about $2.75 million to about $4.91 million, primarily due to salary increases of up to 29 percent for individual executives during 1992 and a 45-percent increase in the number of executives from 22 to 32 between 1991 and 1994; (6) during that time, the average annual executive salary increased by 23 percent, from about $125,000 to about $153,300; (7) Aerospace officials told GAO that increasing both the salaries and the number of executives were sound management decisions based on the best information available at the time and were justified for a number of reasons, including to better align Aerospace with its customers; (8) in addition, in 1993, Aerospace paid two executives hiring bonuses of $30,000 each. Aerospace classified these bonuses as nonreimbursable costs, consistent with the Federal Acquisition Regulation (FAR); (9) between September 1991 and September 1994, the number of nonexecutive employees at Aerospace decreased by 17 percent, from 3,973 to 3,303; (10) this decrease, coupled with the increase in the number of executives, reduced the ratio of executives to total nonexecutive employees from 1 per 181 employees to 1 per 103 employees; (11) in an audit started in response to Aerospace's June 1992 salary increases, the Defense Contract Audit Agency (DCAA) initially questioned the reasonableness of corporate officers' salaries and fringe benefits; (12) in its final report, however, DCAA no longer questioned the reasonableness of corporate officers' salaries but recommended that Aerospace provide further support for corporate officers' fringe benefits; (13) the Air Force and Aerospace have been working to resolve the issues raised by DCAA's audit; (14) fiscal year 1995 appropriations legislation limits employee compensation at the Department of Defense (DOD) FFRDCs, effective July 1, 1995, to a rate not to exceed Executive Schedule Level I; and (15) as of September 30, 1994, 16 Aerospace executives had annual salaries of more than $148,400, the current Executive Schedule Level I salary amount.
Federal law provides states with flexibility in how they operate their CHIP programs and how states implement more recent coverage options under PPACA. For example, states may operate CHIP as a separate program, include CHIP-eligible children in their Medicaid programs, or use a combination of the two approaches. States with separate CHIP programs may modify certain aspects of their programs, such as coverage and cost-sharing requirements. However, federal laws and regulations require states’ separate CHIP programs to include coverage for routine check-ups, immunizations, inpatient and outpatient hospital services, and dental services defined as “necessary to prevent disease and promote oral health, restore oral structures to health and function, and treat emergency conditions.” In addition, CHIP premiums and cost-sharing may not exceed maximum amounts as defined by law. Similarly, PPACA provides states with flexibility in how they opt to implement certain coverage options included in the law. For example, PPACA allows states to expand eligibility for Medicaid to most non-elderly, non-pregnant adults who are not eligible for Medicare and whose income is at or below 133 percent of the FPL. As of January 2015, 29 states have implemented this expansion. PPACA required the establishment of health insurance exchanges by January 1, 2014, to allow consumers to compare individual health insurance options available in each state and enroll in coverage. In states electing not to operate their own exchange, PPACA required the federal government to establish and operate an exchange in the state, referred to as a federally facilitated exchange. States with federally facilitated exchanges may enter into a partnership with HHS to assist with the operation of certain exchange functions. As such, a state could establish the exchange (referred to as a state-based exchange), cede the responsibility entirely to HHS (referred to as a federally facilitated exchange), or enter into a partnership with HHS (referred to as a partnership exchange). As of January 2015, 17 states established state-based exchanges, 27 states were using the federally facilitated exchange, and 7 states established partnership exchanges. See fig. 1 for information on the variation in children’s uninsured rates, CHIP characteristics, and coverage approaches under PPACA by state; and see appendix I for the information in tabular form. The Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA) included provisions aimed at improving the information available from states on the quality of health care furnished to children in both CHIP and Medicaid. Specifically, CHIPRA required the Secretary of HHS to conduct an independent evaluation of CHIP and to submit the results to Congress. The mandated evaluation, for which the final report was issued in August 2014, documents what is known about CHIP; explores the program’s evolution since inception; and examines the role CHIP has played in covering low-income children. In addition, CHIPRA required HHS to identify quality measures, known as the Child Core Set measures, to serve as a tool for states to use to monitor and improve the quality of health care provided to children enrolled in CHIP and Medicaid. CHIPRA also required HHS to develop a standardized format for states to voluntarily report these measures.
These measures assess the quality of care provided through CHIP and Medicaid, and include a range of health conditions, such as asthma, obesity, attention deficit hyperactivity disorder, and perinatal care. (See table 1.) In 2013, as required by CHIPRA, HHS began annually publishing recommended changes to the Child Core Set measures in an effort to improve upon the measures and align them with national quality measurement activities, which can result in changes to the number of measures. With the use of state-reported data on the Child Core Set, HHS conducts an annual assessment and publishes its findings in its annual quality report, as required under CHIPRA. States report CHIP service utilization and other measures through systems developed by HHS; specifically, the CHIP Annual Reporting Template System (CARTS), a web-based data submission tool, and through the Form CMS-416, an annual report submitted by states on the Medicaid Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) benefit provided for enrolled children. States that use managed care plans to deliver CHIP benefits are also required to report outcomes and performance measures from External Quality Review Organizations and performance improvement projects. In February 2010, the Centers for Medicare & Medicaid Services (CMS) awarded 10 grants, which funded 18 states to implement projects in areas that include using quality measures to improve child health, making use of electronic health records, and assessing the utility of other innovative approaches to enhance quality. CMS also provided funding to the Agency for Healthcare Research and Quality (AHRQ) to lead a national evaluation of these demonstrations, to be completed by September 30, 2015. Available assessments of national data we reviewed identify positive effects of CHIP, including a reduction in the rate of uninsured children and children's improved access to care, and these findings are often consistent with our prior work. HHS also has ongoing efforts to enhance state reporting of the Child Core Set measures and publishes data from these quality measures to identify areas for improving the care provided in CHIP. HHS's mandated evaluation identified several positive effects of CHIP across states, particularly with regard to children who are uninsured. For example, based on an analysis of data from the Current Population Survey Annual Social and Economic Supplement (CPS-ASEC), the evaluation reported that Medicaid and CHIP contributed to the decline in the national rate of uninsured children between 1997 and 2012, with coverage rates improving for all ethnic and income groups. Most notably, coverage rates for Hispanic children increased dramatically, rising from 42 percent to 65 percent during this time. Changes to state CHIP programs also contributed to the decline in the national uninsured rates among children. For example, many states expanded CHIP coverage by raising upper income eligibility limits and covering newly eligible groups, such as immigrant children who have resided legally in the United States for less than 5 years, which was newly permitted under federal law. In addition, state outreach and enrollment activities also reduced the number of children eligible for—but not enrolled in—Medicaid or CHIP by about 1.2 million from 2008 to 2012.
To determine whether factors other than insurance coverage may affect differences in responses about obtaining care or utilization of health care services, the mandated evaluation controlled for age, sex, race/ethnicity and language groups, more than three children in the household, highest education of any parent, parents' employment status, parent citizenship, and local area or county. Controlling for these factors, the evaluation found that, compared with uninsured children: CHIP enrollees were an estimated 38 percentage points more likely to have a usual source of dental care, and were an estimated 39 percentage points more likely to have had a dental check-up in the past year; CHIP enrollees were an estimated 25 percentage points more likely to have an annual well-child checkup visit, and were more likely to receive a range of health services, including mental health visits, specialty care, and prescription drugs; CHIP enrollees were more likely to receive most preventive care measures, including flu vaccinations, vision screenings, and height and weight measurements; and parents of CHIP enrollees were less likely to report having trouble paying their child's medical bills, and were substantially more confident in their ability to get needed health care for their child. Based on our assessment of HHS's Medical Expenditure Panel Survey from 2007 through 2010, we also found that children enrolled in CHIP have better access to care and service use than children who are uninsured. In particular, when compared with uninsured children, we found that CHIP enrollees fared better, and the differences we identified were statistically significant in most cases. For example, a higher proportion of CHIP respondents reported having a usual source of care; ease in getting the care, tests, or treatment that the parent or a doctor believed necessary; ease in seeing a specialist; and using certain health care services, including office-based provider visits, outpatient department provider visits, and dental care visits. When the mandated evaluation compared CHIP enrollees with the privately insured group, it also found that CHIP enrollees experienced comparable access and service use for many, but not all, measures, and that parents of children enrolled in CHIP experienced less financial burden in paying their children's medical bills. CHIP enrollees used a similar level of preventive care and other health care services; however, CHIP enrollees had higher usage of prescription medication and lower levels of emergency department visits and hospital stays. CHIP enrollees had similar rates of health and development screenings, but were 9 percentage points less likely to receive a flu vaccination. CHIP enrollees had higher rates of dental access and utilization of certain services. For example, 92 percent of CHIP enrollees reported having access to dental coverage in 2012, compared with 77 percent of privately insured children. In terms of utilization, 84 percent of CHIP enrollees reported having a dental checkup or cleaning in the previous 12 months compared with 79 percent of privately insured children. Parents of CHIP enrollees reported substantially less trouble paying their children's medical bills and had much lower out-of-pocket spending levels. For our assessment of the Medical Expenditure Panel Survey, we also compared CHIP enrollees' access and service use with children who were privately insured, and our findings were consistent with some of the findings in the mandated evaluation.
When asked about access to care, we found that respondents with children enrolled in CHIP reported experiences that were generally comparable with those of privately insured children for 5 of the 6 measures reviewed, including having a usual source of care; ability to make needed appointments; and ease in seeing a specialist. Respondents' reported ease in getting needed care was the only measure for which we identified a statistically significant difference. CHIP families faced a lower financial burden than families with private insurance because of the federal requirement that states' CHIP programs may not impose premiums and cost-sharing that, in the aggregate, exceed 5 percent of a family's total income for the length of the child's eligibility period. However, with regard to the utilization of certain services, our prior work is less consistent with the findings of HHS's mandated evaluation. For example, when asked about their use of certain medical and dental services, we found that utilization by CHIP enrollees was lower than that of the privately insured for several services, and these differences were often statistically significant. Specifically, we previously reported that a lower proportion of CHIP enrollees reported visiting dentists (42.4 percent compared with 50.9 percent) and orthodontists (4.9 percent compared with 11.2 percent) within the past 12 months than did those who were privately insured; and a higher proportion of CHIP enrollees reported having an emergency room visit (14.1 percent compared with 10.4 percent). Differences between our findings and those included in the national evaluation may be related to the timeframes of the data and the measures used. For example, some of the data used in our analyses predate the CHIPRA requirement that CHIP programs offer comprehensive dental benefits coverage beginning in 2009. The timeframes for both bodies of work also predate the implementation of the PPACA requirement that most individual and small group market health plans provide pediatric dental coverage. Finally, while the mandated evaluation noted that, overall, CHIP programs were meeting the health care needs of most enrollees, it identified areas for program improvement. Specifically, many CHIP enrollees did not receive recommended preventive care or reported an unmet health care need. For example, slightly less than half of CHIP enrollees received a flu vaccination, and only about one-third of children under age 6 received a developmental screening. In addition, one in four CHIP enrollees had an unmet health care need, with the highest unmet need being for dental care. HHS publishes data that states report on the Child Core Set measures in its annual quality report. While reporting on the Child Core Set measures is voluntary for states, the number of states reporting these quality measures and the median number of measures each state reports has increased steadily since reporting of the measures began in 2010. For example, beginning in fiscal year 2012, all 51 states have reported two or more measures, a notable increase from the 43 states that reported at least one measure for fiscal year 2010. Similarly, the median number of Child Core Set measures that states report has increased from about 7 measures in fiscal year 2010 to 16 measures in fiscal year 2013.
HHS attributed the rise in state reporting to increased familiarity with the Child Core Set measures and the department's efforts to streamline state reporting and provide technical assistance and guidance to states. For example, CMS established a Quality Measures Technical Assistance and Analytic Support Program in May 2011, which works with states to support their efforts in collecting, reporting, and using quality measures for their CHIP and Medicaid programs. However, states varied considerably in the number of measures they reported in fiscal year 2013, ranging from 2 measures in Nebraska and Wisconsin to 25 measures in North Carolina and South Carolina. (See fig. 2.) Several factors can affect a state's ability to report the Child Core Set measures. Officials from the states we reviewed provided the following examples of challenges they face reporting the Child Core Set measures. Mississippi and Pennsylvania officials cited difficulty reporting certain measures, such as the extent of follow-up care for children prescribed medication for attention-deficit/hyperactivity disorder, due, in part, to their not having access to the data required to report the measure. Arizona, New Hampshire, Nevada, and Wisconsin officials cited the difficulty and cost of reporting certain measures, in particular those measures that require medical record reviews as opposed to the reporting of measures that use only encounter data. For example, HHS suggests that medical record reviews be used to calculate a perinatal measure related to the performance of caesarean sections, and none of these states reported this measure in fiscal year 2013. Rhode Island officials noted that it can be difficult to collect data for measures that are not nationally endorsed—and as a result, they may not report them. For example, in fiscal year 2013, Rhode Island did not report the Child Core Set measure of a developmental screening in the first 3 years of a child's life, which had not been endorsed by the National Committee for Quality Assurance but was instead developed by a university in Oregon. Noting that the state does not have a department dedicated to measuring quality, Alaska officials cited a lack of internal expertise needed to collect and report reliable data for the measures. As such, an official cited the need to leverage resources and work with other agencies within the state that have the expertise to analyze measures and set targets for quality improvement. In light of difficulties cited by states in reporting on the Child Core Set measures, HHS reported ongoing efforts to assist states with reporting the measures. For example, to streamline state reporting, HHS began calculating three Child Core Set measures on behalf of states in fiscal year 2012. Specifically, HHS began calculating the preventive dental and dental treatment measures from the Form CMS-416. At this time, HHS also began using data available from the Centers for Disease Control and Prevention to calculate the neonatal central-line associated blood stream infection measure. In addition, HHS assists states by allowing them to report Child Core Set measures for the Medicaid population, CHIP population, or combined Medicaid and CHIP populations. Additionally, HHS reported efforts to assist states in improving their reporting of the Child Core Set measures through the Quality Demonstration Grant Program.
Through this program, HHS awarded 10 grants providing funding to 18 states to implement various projects to improve the information available on the quality of care provided to children enrolled in CHIP, including undertaking efforts to improve their reporting of the Child Core Set measures. For example, in one such project, Pennsylvania is testing the use of financial rewards to encourage certain health systems—which include hospitals, primary care practice sites, and other facilities—to use the Child Core Set measures to drive quality improvement projects. Pennsylvania also reported that it is recruiting health systems to determine the extent to which electronic health records can provide data for the Child Core Set measures for children. From the measures submitted by states, HHS reports states' performance to assess the quality of care for children enrolled in CHIP and Medicaid, and the results of this assessment are mixed. HHS calculates mean rates for most of the Child Core Set measures—which it calls performance rates—including primary and preventive care, perinatal health, management of acute and chronic conditions, and dental services, among the states reporting those measures. Based on this assessment, HHS determined that states had high performance rates for some measures, such as young children's access to primary care. For example, a mean of 96 percent of children aged 12 to 24 months enrolled in CHIP or Medicaid had at least one primary care physician visit during fiscal year 2013. In contrast, states had lower performance rates for other measures. For example, a mean of 46 percent of children received at least one preventive dental service, and a mean of 25 percent of children received at least one dental treatment in fiscal year 2013. (See table 2.) As such, HHS specified that children's access to oral health care continues to be a primary focus of improvement efforts in CHIP and Medicaid. In addition to HHS's review of states' reporting on the Child Core Set measures, the department's annual quality report includes the results of its review of external quality review reports and performance improvement projects from states that contract with managed care plans to deliver services for CHIP and Medicaid enrollees. States are required to annually review their managed care plans to evaluate the quality, timeliness, and access to services that the plans provide to enrollees, and HHS must include this information in its annual quality report. For the most recent annual quality report, 40 of the 42 states that contract with managed care plans to deliver services to CHIP and Medicaid enrollees submitted external quality review reports. Based on its review of these reports, HHS found that the most frequently reported performance measures from states' external quality reports—which included well-child care, primary care access, childhood immunization rates, and prenatal/postpartum care—mirrored states' most frequently reported Child Core Set measures in fiscal year 2013. In terms of HHS's review of states' performance improvement projects, 38 of the 40 states that submitted external quality review reports included at least one project targeted to improve the quality of care for children and pregnant women enrolled in managed care; for example, by implementing projects related to behavioral health and improving childhood immunization rates for children, and prenatal and postpartum care for pregnant women.
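As a rough illustration of the calculation described above, a performance rate computed as the mean of a measure across only the states that reported it, the following sketch uses hypothetical state names and rates; it is not drawn from CARTS or any HHS data.

```python
# Minimal sketch: a mean "performance rate" for one Child Core Set measure,
# averaged across only the states that reported it. State names and rates
# below are hypothetical placeholders, not actual CARTS submissions.

reported_rates = {
    "State A": 48.0,
    "State B": None,   # did not report this measure
    "State C": 41.5,
    "State D": 52.3,
}

values = [r for r in reported_rates.values() if r is not None]
mean_rate = sum(values) / len(values)
print(f"Mean rate across {len(values)} reporting states: {mean_rate:.1f} percent")
```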
Our prior work has identified important considerations related to cost, coverage, and access when determining the ongoing need for CHIP, many of which were echoed by officials from the 10 states we reviewed. With regard to cost, our prior work comparing CHIP plans to states' benchmark plans, which were the models for health plans available under health insurance exchanges established under PPACA, found that costs—defined as deductibles, copayments, coinsurance, and premiums—were almost always less for CHIP plans. CHIP plans we reviewed typically did not require the payment of deductibles, while all five states' benchmark plans did. The cost difference in copayments between CHIP plans and benchmark plans was considerable for physician visits, prescription drugs, and outpatient therapies. For example, an office visit to a specialist in Colorado would cost a CHIP enrollee $2 to $10 per visit, depending on their income, compared to $50 per visit for benchmark plan enrollees. Families could face higher dental costs in states where dental coverage through the exchange is optional and offered as a stand-alone dental plan (SADP), as opposed to CHIP plans, where dental benefits are included. Officials from five selected states also expressed concerns about the higher costs of qualified health plan (QHP) coverage and the implications this would have for families. (See GAO-14-40. The five states evaluated in our prior work were Colorado, Illinois, Kansas, New York, and Utah. These findings were subsequently discussed in a hearing before the Subcommittee on Health, Committee on Energy and Commerce, House of Representatives, on December 3, 2014; see GAO, Children's Health Insurance: Cost, Coverage, and Access Considerations for Extending Federal Funding, GAO-15-268T (Washington, D.C.: Dec. 3, 2014). We are currently examining how CHIP coverage and consumer costs compare to selected QHPs that were available on the exchanges in these five states in 2014.) Based on a review of QHPs available on the state's exchange in 2014, Nevada officials estimated that the average annual premium for a child in a family with an income of 168 percent of the FPL was more than two and a half times higher than the $200 premium for coverage in a CHIP plan. This price difference does not account for differences in co-pays, which the state does not charge under CHIP. The extent to which QHPs in the state apply co-pays to covered services could increase this price differential further. As such, Nevada officials were concerned that absent CHIP, families would not purchase QHPs due to their higher cost. Due to the additional premiums and cost-sharing associated with SADPs, New Hampshire officials expressed concern that families will forgo dental care if they must purchase a SADP. The officials noted that cost-sharing particularly affects families with incomes from 185 to 250 percent of the FPL, which account for 75 percent of the state's CHIP population. We also previously reported that coverage is a relevant consideration, and that separate CHIP and benchmark plans were generally similar in terms of their coverage of selected services and the services on which they imposed limits, with some variation. For example, the plans we reviewed were similar in that they typically did not impose any limits on ambulatory patient services, emergency care, preventive care, or prescription drugs; but commonly imposed limits on outpatient therapies, and pediatric dental, vision, and hearing services.
Officials from several selected states pointed out that CHIP coverage was more comprehensive than QHPs for certain services, particularly for services needed by children with special health care needs. Alaska and Pennsylvania officials noted that coverage of services—including orthodontics, vision, audiology, outpatient therapies, language disorders, and durable medical equipment—was more comprehensive in CHIP when compared with QHPs in their states. Rhode Island officials highlighted the state's coverage of comprehensive pediatric dental services and any medically necessary services deemed warranted as a result of the EPSDT benefit to which all CHIP-eligible children in the state are entitled. According to the state officials, these same services are either unavailable or unaffordable through QHPs in the state. Arizona officials specified that coverage of certain enabling services, such as non-emergency medical transportation, family support services, and behavioral health services, is included in the state's CHIP plan, but may not be offered in QHPs. With regard to access, our work found that CHIP enrollees generally reported positive experiences in their ability to obtain care, comparable to those with private insurance, with some exceptions, including lower utilization of dental and orthodontia services. Some of the states we reviewed also raised concerns related to access to care if CHIP funding is not reauthorized. For example, Nevada officials raised concerns about the ability of certain populations—specifically, children of undocumented parents—to access care if CHIP is no longer available. Nevada officials stated that these children could lose CHIP coverage since a significant portion of them have parents who may not file federal income tax returns that would expose them to tax penalties for failing to enroll their children in alternative health coverage. In addition, an Alaska official noted the need for further work on the comparability of benefits between QHPs and CHIP to ensure that the former could be an adequate substitute, and that children moving to QHPs would not experience decreased access to health care. The official noted that comparability across benefit packages is particularly important for children in households whose income changes would result in movement between CHIP and QHPs. We provided a draft of this report to HHS for comment. The department provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. As of February 2015, 42 states operated separate CHIP programs (2 states had a separate CHIP program only, and 40 states covered CHIP children through both a separate program and an expansion of their Medicaid program).
The other 9 states covered CHIP children through an expansion of their Medicaid program, which we refer to as a "CHIP Medicaid expansion." Minnesota and New Mexico have CHIP income eligibility levels that vary by age group; therefore, we reported the highest income eligibility level reported for these states—which apply to ages 0 to 1 year in Minnesota and ages 0 to 5 years in New Mexico. These state-based marketplaces use the federally facilitated marketplace's information technology platform for applicants to apply and enroll in their respective states. In addition to the contact named above, Susan T. Anthony, Assistant Director; Sandra George; Seta Hovagimian; Drew Long; JoAnn Martinez-Shriver; Vikki Porter; Lisa Rogers; Eden Savino; Laurie F. Thurber; and Kate Tussey made key contributions to this report. Children's Health Insurance: Cost, Coverage, and Access Considerations for Extending Federal Funding. GAO-15-268T. Washington, D.C.: December 3, 2014. Health Care Transparency: Actions Needed to Improve Cost and Quality. GAO-15-11. Washington, D.C.: October 20, 2014. Children's Health Insurance: Information on Coverage of Services, Costs to Consumers, and Access to Care in CHIP and Other Sources of Insurance. GAO-14-40. Washington, D.C.: November 21, 2013. Children's Health Insurance: Opportunities Exist for Improved Access to Affordable Insurance. GAO-12-648. Washington, D.C.: June 22, 2012. Medicaid and CHIP: Most Physicians Serve Covered Children but Have Difficulty Referring Them for Specialty Care. GAO-11-624. Washington, D.C.: June 30, 2011. Medicaid and CHIP: Given the Association between Parent and Child Insurance Status, New Expansions May Benefit Families. GAO-11-264. Washington, D.C.: February 4, 2011. State Children's Health Insurance Program: CMS Should Improve Efforts to Assess whether SCHIP Is Substituting for Private Insurance. GAO-09-252. Washington, D.C.: February 20, 2009. Health Insurance For Children: Private Insurance Coverage Continues to Deteriorate. GAO/HEHS-96-129. Washington, D.C.: June 17, 1996. CHIP is a joint federal-state program that finances health insurance for over 8 million children. Since the program's inception, the percentage of uninsured children nationwide has decreased by half, from 13.9 percent in 1997 to 6.6 percent in the first three months of 2014. This year, Congress will decide whether to extend CHIP funding beyond 2015. GAO was asked to provide information on the effect of CHIP on children's coverage, and what key issues may be considered in determining the ongoing need for CHIP. In this report, GAO examines (1) what assessments of CHIP suggest about its effect on children's health care coverage and access; and (2) what key issues identified by GAO's work the Congress may wish to consider in determining whether to extend CHIP funding. For the assessments of CHIP's effect, GAO reviewed reports on CHIP, including a mandated evaluation and annual HHS reports on quality, which publish data that states report on the Child Core Set measures (quality measures identified by HHS that states can use to monitor health care provided to children in CHIP and Medicaid). GAO also reviewed relevant federal statutes and regulations. To identify key issues that the Congress may wish to consider, GAO reviewed its own relevant reports and testimony; reviewed letters from state governors regarding CHIP; and interviewed CHIP officials in 10 states, which were selected based on variation in location, program size, and design.
HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate. Assessments of national data GAO reviewed identify positive effects of the State Children's Health Insurance Program (CHIP), and the quality measures reported by states help identify areas needing improvement. A mandated evaluation of CHIP published in 2014 noted that CHIP enrollees (1) had substantially better access to care, service use, and preventive care when compared with uninsured children; and (2) experienced comparable access and service use when compared with privately insured children. These findings are generally consistent with prior GAO work, which used national survey data to compare CHIP enrollees' access and service use with children who were uninsured or privately insured. When comparing CHIP enrollees with privately insured children, the mandated evaluation and prior GAO work differed regarding the utilization of certain services, such as emergency room use and dental services, which may be due to differences in when the data were collected and the particular measures that were used. The Department of Health and Human Services (HHS) also publishes data on quality measures that states voluntarily report annually. These Child Core Set measures show mixed results regarding service utilization among CHIP and Medicaid enrollees. For example, states reported that nearly all children aged 12 to 24 months enrolled in CHIP or Medicaid had at least one primary care physician visit during fiscal year 2013. However, states reported that far fewer children obtained dental prevention or treatment services, with a mean of 46 percent of children receiving a preventive dental service, and a mean of 25 percent receiving dental treatment services. HHS officials said that they use these data to help identify areas for improvement in the care provided in CHIP and Medicaid. GAO's prior work has identified important issues related to cost, coverage, and access that Congress may wish to consider when determining the ongoing need for CHIP, many of which were similar to issues raised by officials from the 10 states GAO reviewed. With regard to cost, GAO's prior work found that costs—defined as deductibles, copayments, coinsurance, and premiums—were almost always less for selected CHIP plans when compared with states' benchmark health plans, which were the models for health plans available in health insurance exchanges established under the Patient Protection and Affordable Care Act. Officials in five states expressed concerns about the higher cost of exchange plans compared with CHIP and the implications for families' finances. With regard to coverage, GAO previously reported that selected CHIP and state benchmark plans were generally similar in terms of their coverage of selected services and the services on which they imposed limits. However, officials from several of the 10 states pointed out that for many services needed by children with special health care needs, CHIP coverage was more comprehensive than exchange plans. With regard to access, several states raised concerns about negative implications for children's coverage if CHIP funding is not reauthorized, including concerns that their states would lose gains made in covering children, who would also lose access to providers and dental care.
IRS is currently replacing its antiquated tax administration and financial systems. This effort, as we have reported numerous times, has suffered delays and cost overruns for a number of reasons, including inadequate development and management of requirements. The IRS tax administration system, which collects approximately $2 trillion in annual revenues, is critically dependent on a collection of obsolete computer systems. Congress and IRS designed the Business Systems Modernization (BSM) program to bring IRS tax administration systems to a level equivalent to private and public sector best practices, while managing the risks inherent in one of the largest, most visible, and sensitive modernization programs under way. Over the past 7 years, IRS has been appropriated approximately $1.9 billion for BSM (see fig. 1). BSM is critical to supporting IRS's taxpayer service and enforcement goals. For example, BSM includes projects to allow taxpayers to file and retrieve information electronically and to help reduce the backlog of collection cases. It also provides IRS with the reliable and timely financial management information it routinely needs to account for the nation's largest revenue stream. BSM has had some recent successes with its modernization efforts. During 2004, IRS implemented initial versions of (1) Modernized e-File (MeF), which provides electronic filing for large corporations and tax-exempt organizations; (2) e-Services, which created a Web portal and other electronic services to promote the goal of conducting most IRS transactions with taxpayers and tax practitioners electronically; (3) Customer Account Data Engine (CADE), which will replace the current system that contains the agency's repository of taxpayer information; and (4) the Integrated Financial System, which replaced aspects of IRS's core financial systems and is ultimately intended to operate as its new accounting system of record. However, despite these successes, IRS has had difficulty developing and managing requirements for its modernization efforts over the years. We reported in 1995 that IRS did not have the requisite software development capability to successfully complete a major modernization effort and that the success of modernization would depend on whether IRS would promptly address the weaknesses in several software development areas, including requirements management. In 1998, we assessed IRS's systems life cycle document and reported a lack of sufficient information to document how business requirements were to be specified. More recently, in February and November of 2004, we reported in testimony and a report that cost overruns in various BSM projects, including CADE, MeF, and e-Services, were due in part to inadequate definition of requirements for their new systems, leading to incorporation of additional requirements late in the system's life cycle and at a higher cost than if they had been included in the initial requirements baseline. We continue to highlight management control weaknesses such as these in our annual expenditure plan reviews. Other organizations that have assessed BSM projects have also found that IRS has not developed and managed requirements sufficiently on various projects. In 2001, the Treasury Inspector General for Tax Administration (TIGTA) reviewed key systems development practices of four BSM projects and reported that weaknesses in several process areas, including requirements management, were responsible for cost increases and schedule delays.
TIGTA noted that these weaknesses raised the risk that systems would be developed that would not meet the needs of the businesses they were intended to support and recommended that BSM strengthen and/or implement aspects of these key systems development practices. In 2003, an independent technical assessment of CADE noted significant breakdowns in developing and managing requirements, which resulted in the inability of CADE to meet its original schedule. The assessment further stated that IRS focused primarily on the high-level business requirements and paid less attention to the development of specific, testable requirements developed later in the development life cycle, and that responsibility for developing and managing the requirements was distributed among its various organizational components, instead of being concentrated in a centralized authority. BSM has acknowledged that it has weaknesses in developing and managing requirements; since 2004, requirements management has been one of its high-priority initiatives. To demonstrate its commitment to improving the development and management of requirements, BSM created a requirements management office (RMO) in October 2004. This office is to address issues related to (1) lack of quality and completeness of modernization requirements, (2) lack of alignment of modernization requirements with business strategy and needs, (3) risks incurred by projects transitioning to development without a sufficient requirements baseline, and (4) lack of visibility into a fully traceable set of modernization requirements. During 2005, the RMO created a Concept of Operations that showed, at a high level, the RMO's plans to address requirements practices, and, in November 2005, it obtained contractor support to develop new policies and procedures. According to the Software Engineering Institute's (SEI) Capability Maturity Model Integration (CMMI), the requirements for a system describe the functionality needed to meet user needs and perform as intended in the operational environment. A disciplined process for developing and managing requirements can help reduce the risks of developing or acquiring a system. A well-defined and managed requirements baseline can, in addition, improve understanding among stakeholders and increase stakeholder buy-in and acceptance of the resulting system. The practices underlying requirements development and management include eliciting, documenting, verifying and validating, and managing the requirements through the systems life cycle (see fig. 2). This set of activities translates customer needs from statements of high-level business requirements into validated, testable systems requirements. The requirements development process starts with project teams eliciting, or gathering, requirements from stakeholders or participants involved in the project (e.g., customers and users). Since the usefulness of the system to its users and stakeholders is critically dependent on the accuracy and completeness of the requirements, all user groups and stakeholders should be identified and involved in defining requirements. In addition to gathering requirements from users and other stakeholders, analysis and/or research can be used to identify additional requirements that balance stakeholder needs against constraints and ensure that the requirements can be met in the proposed operational environment. After requirements have been elicited, they are analyzed in detail; documented as the business, or high-level, requirements; and agreed to by all stakeholders.
Stakeholder agreement is an important part of this activity and is needed to demonstrate that the requirements accurately define intended uses. The business requirements should then be decomposed into detailed system requirements, which are analyzed to ensure that they can be implemented in the expected operational environment and that they can satisfy the objectives of higher-level requirements. The final requirements are approved by all stakeholders and documented as the requirements baseline. Once the baseline is established, it is placed under configuration management (CM) control. Once the requirements baseline has been developed, the requirements are analyzed and broken down into more specific system-level requirements and eventually into the code that makes up the system. The verification process ensures that the system-level requirements and the resulting code are an accurate representation of stakeholder needs. This process includes checking selected work products, such as software code, against the initial baseline requirements to ensure that the lower-level items fully satisfy the higher-level requirements. It is an inherently incremental process, occurring throughout the development of the product. This agreement between work products, such as code, and the baseline requirements is verified by conducting peer reviews. Peer reviews can also be used to identify action items that need to be addressed. Without such reviews, an organization is taking a risk that substantial defects will not be detected until late in the development and/or testing phases, or even after the system is implemented. While the system is being developed, each component must be tested to ensure that its outputs are accurate. Testing (e.g., unit, system integration, and user acceptance) is the process of executing a program with the intent of finding errors. Clear, complete, and well-documented requirements are needed in order to design and implement an effective testing program. Linking the testing activities back to the requirements assures the organization that, once testing activities are successfully completed, all requirements have been addressed and will be met by the system. Without such assurance, it is possible for a requirement to be missed in development and the resulting lack of functionality not noticed until late in testing, or even after deployment. Requirements, once developed and approved, also need to be managed throughout the system life cycle. Two key areas of requirements management are addressing changes to requirements and establishing and maintaining bidirectional traceability from high-level requirements through detailed work products to test cases and scenarios. As mentioned earlier, once a set of high-level requirements is documented, verified, and approved, it is placed under configuration control. From this point, changes to the requirements are evaluated and validated as part of the change control process. Change control includes reviewing and assessing proposed changes to the requirements to determine the reasons for the changes, determining if these changes are occurring due to flaws in the requirements development process, and ensuring that any effects of the change on other requirements as well as on the cost, schedule, and performance goals of the project are determined and assessed. Establishing and maintaining traceability from initial requirements to work products and the resulting system is also important.
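Before turning to the traceability matrix itself, the decomposition, baselining, and configuration-control steps described above can be pictured with a small sketch. This is purely illustrative: the requirement identifiers, field names, and behavior are hypothetical and do not represent IRS or BSM tooling.

```python
# Illustrative sketch only: a requirements baseline placed under simple
# configuration management (CM) control. All identifiers are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    req_id: str                       # e.g., "BR-1" (business) or "SR-1.1" (system)
    text: str
    parent_id: Optional[str] = None   # a system requirement traces to its business requirement

class Baseline:
    """Once approved, requirements may change only through the change control process."""
    def __init__(self):
        self._reqs = {}
        self._frozen = False

    def add(self, req: Requirement) -> None:
        if self._frozen:
            raise RuntimeError("Baseline is under CM control; submit a change request instead.")
        self._reqs[req.req_id] = req

    def approve(self) -> None:
        """Stakeholder approval freezes the baseline and places it under CM control."""
        self._frozen = True

baseline = Baseline()
baseline.add(Requirement("BR-1", "Accept electronically filed returns."))
baseline.add(Requirement("SR-1.1", "Validate the return schema on receipt.", parent_id="BR-1"))
baseline.approve()
# Any further direct addition now raises an error, forcing the change through change control.
```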
A requirements traceability matrix demonstrates forward and backward (bidirectional) traceability from business requirements to detailed system requirements all the way through to test cases. BSM does not yet have adequate policies and procedures in place to guide its systems modernization projects in developing and managing requirements. In January 2006, the RMO developed a set of draft policies that address key areas of requirements development and management; these policies are to serve as interim guidance while the final policies and processes are being developed. At the conclusion of our review, the RMO provided us with the draft policies and a high-level plan that includes milestones for completing these policies. Since critical BSM projects continue to be pursued and completion of the policies and procedures is not expected until March 2007, it is critical that BSM immediately implement the draft policies and continue to develop the final policies. BSM does not have comprehensive, detailed policies and procedures for requirements management and development activities that include requirements elicitation, documentation, verification and validation, and management. During our review, BSM officials were unable to provide us with detailed policies and procedures and agreed that they do not have such documents. Project teams were not consistent in their understanding of which guidance they should use for the development and management of requirements; some project team members mentioned BSM's Enterprise Life Cycle (ELC); others said they were waiting for guidance from the RMO. Our review of the ELC showed that it did not provide the procedures project managers would need to properly perform the steps in the requirements development and management process. BSM program officials agreed that the ELC did not provide the needed guidance. In December 2005, the RMO completed an analysis of requirements development and management areas that need improvement. The RMO found, as we did, that BSM lacks detailed guidance; its recommendations included developing process handbooks for aspects of requirements elicitation, documentation, verification and validation, and management. Subsequently, in January 2006, BSM officials developed draft guidance that covers aspects of requirements development and management. However, this guidance does not fully address requirements elicitation, documentation, verification and validation, and management. At that time, BSM also provided us with a high-level plan that contains interim milestones and establishes a March 2007 completion date for the final set of policies and procedures. BSM officials told us that these draft policies are to serve as interim guidance while the remaining policies and procedures are being developed. In addition, IRS also uses its governance processes, particularly the milestone exit reviews, to find and mitigate issues and problems with requirements development and management on existing projects. Finally, the RMO is allocating resources to key projects—such as the Filing and Payment Compliance (F&PC) project, version 1.2—to assist them in developing requirements. Without a formal set of documents that detail organizational policies and associated procedures, employees and contractors will rely on their individual knowledge and expertise to complete requirements development and management activities. This raises the risk of cost overruns, schedule delays, and reduction of functionality.
Since critical BSM projects are already under way, and completion of the policies and procedures is not set until March 2007, it is urgent that BSM immediately implement the draft policies. Until BSM develops and implements policies and procedures that fully address the key areas of requirements development and management, including elicitation, documentation, tracking of cost and schedule impacts associated with requirements changes, and establishing and maintaining full bidirectional traceability, ongoing projects will continue to run greater risk of cost and schedule overruns and poor system performance. As a result of the lack of policies and procedures, BSM projects varied in the extent to which they followed disciplined requirements practices. All three projects we reviewed—MeF release 3.2 (to be deployed March 2006), F&PC release 1.1 (deployed January 2006), and CADE release 1.1 (deployed July 2004)—performed some of the practices associated with sound requirements development and management. For example, all three projects had a change management process in place that requires approvals and impact assessments to be completed when changes are made to requirements. However, none of the three BSM project releases we reviewed consistently performed all of the practices needed for effective requirements management. Specifically: Project teams did not have a clear, documented, and consistent method of eliciting requirements. Project teams did not adequately document all requirements. Project teams did not effectively verify requirements. Project teams did not demonstrate adequate management of requirements. Based on stakeholder information such as customer expectations, constraints, and interfaces for a system, the requirements elicitation team discovers, defines, refines, and documents business-level requirements. Due to the importance of this activity, plans or strategies should be in place to guide project officials in defining elicitation-related activities and in outlining how the requirements will be gathered (e.g., interviewing the users or analyzing the current or expected business processes). BSM project teams did not have a clear, consistent, and documented method of eliciting requirements for the projects. For example, although the teams identified stakeholders in their project plans, only MeF 3.2 provided evidence that working group meetings were conducted with stakeholders to understand their needs and to identify their problems and expectations, and that strategies or plans were developed for eliciting requirements. CADE 1.1 project team members could not describe how they elicited requirements or provide a requirements plan that documented elicitation procedures or strategies. An F&PC project team member stated that, for release 1.1, the project did not have a fully documented process for elicitation; however, the team member stated that the team had held workshops and obtained resources and assistance from the RMO to help mitigate the lack of a process. The RMO used lessons learned from this effort to develop a new requirements elicitation process, which it expects will assist F&PC in elicitation for its next release. BSM project and program officials agreed that requirements elicitation processes could be improved and stated that they were planning to address some of the problems we found.
For example, when we asked project officials about the policies and procedures underlying their current requirements elicitation activities, some stated that they were waiting for new policies to be issued by the RMO, and others cited the ELC as guidance. As mentioned earlier, the ELC does not provide the information needed for the requirements elicitation process. Furthermore, F&PC officials could not state which sections in the ELC described the requirements elicitation process. BSM program management and RMO officials acknowledged the lack of policies and procedures and stated that the RMO has since developed new guidance for eliciting requirements that will be piloted on F&PC version 1.2, which is currently entering the requirements development phase. BSM project teams performed elicitation processes in a nonstandard manner due to the lack of policies, procedures, and guidance. Without standardized policies and procedures to guide this key part of requirements development, BSM program officials cannot ensure that its systems development projects have collected and documented all the necessary requirements, which could result in systems being developed that do not meet user needs. After collecting and documenting high-level requirements from customers, users, and other stakeholders, the requirements team should analyze these high-level requirements against the conceptual (or expected) operational environment to balance user needs and constraints and to ensure that the system developed will perform as intended. The resulting lower-level requirements should also be analyzed to make sure they can be performed in the expected environment and that they satisfy the objectives of the higher-level requirements. The final requirements are documented in the requirements baseline. The BSM projects we reviewed did not complete all of the activities needed to adequately document requirements. Although project teams provided evidence that they created a set of high-level requirements and obtained approvals from stakeholders on this set of requirements, two of the three projects did not provide evidence that requirements were thoroughly analyzed and decomposed to lower-level system requirements. For example, part of this analysis would link all lower-level systems requirements to the original higher-level business requirements. Only one of the project teams—F&PC—provided documentation showing the necessary linking or mapping of lower-level system requirements to the business requirements. MeF and CADE provided documentation; however, their documentation did not fully demonstrate the linking of system-level requirements to the business requirements. A MeF project official agreed that full linkage of system-level requirements to business requirements should be implemented. The MeF official stated that they planned to implement this in their next version—version 4.0. In addition, a BSM program official indicated that additional project guidance on requirements documentation would be part of the RMO’s deliverables and would help to address this issue. Without feasible and clearly defined requirements, projects run the risk of cost overruns, schedule delays, and deployment of systems with limited functionality. For example, incomplete definition of requirements has been cited as one reason for schedule delays and cost overruns for both CADE and MeF. 
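The linkage discussed above, in which every lower-level system requirement maps back to an approved business requirement, lends itself to a simple automated check. The following sketch is illustrative only, with hypothetical requirement identifiers; it is not how BSM projects actually store or trace their requirements.

```python
# Illustrative sketch: flag system requirements that do not trace back to any
# business requirement, and business requirements with no derived system
# requirements. All IDs are hypothetical.

business_reqs = {"BR-1", "BR-2"}

# Each system requirement names the business requirement it is derived from.
system_reqs = {
    "SR-1.1": "BR-1",
    "SR-1.2": "BR-1",
    "SR-9.9": "BR-7",   # orphan: points to a business requirement that does not exist
}

orphan_system_reqs = {sr for sr, br in system_reqs.items() if br not in business_reqs}
uncovered_business_reqs = business_reqs - set(system_reqs.values())

print("System requirements with no valid business parent:", sorted(orphan_system_reqs))
print("Business requirements with no derived system requirements:", sorted(uncovered_business_reqs))
```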
Once requirements are fully documented, software code and other work products that will guide development and testing activities need to be verified using peer review techniques against the original requirements. In addition, these products should be validated through testing to ensure that they will operate effectively in the intended environment. Requirements verification ensures that the lower-level requirements, software code, and other work products that will guide development and testing activities are an accurate representation of stakeholder needs. Peer reviews are an important part of the verification process and are a proven mechanism for effective defect removal. During peer reviews, teams of peers examine code and other work products to identify defects, determine the causes of the defects, and make recommendations that address changes needed to help ensure that the system will meet stakeholder and developer needs. Peer reviews should follow a structured, formalized process; peer review events should be planned in advance, with items, such as code and other work products, selected for evaluation; and the results of the sessions should be incorporated into peer review reports that project teams are expected to address before moving further into development activities. The BSM project teams did not provide evidence that work products had been verified against requirements through the use of a formalized peer review process, and project officials did not follow recommended practices for conducting peer reviews. BSM project team members stated that they conducted customer technical reviews and milestone exit reviews that they considered to be peer reviews; however, these kinds of reviews do not meet the criteria for peer reviews. They were not structured, did not select code and other items in advance to be evaluated, and did not produce formal peer review reports with action items that projects were required to address. Requirements validation is the process of demonstrating that a product fulfills its intended use in its environment. It differs from the verification activities previously described, in that validation determines that the product will fulfill its intended use, while verification ensures that work products properly reflect the baseline requirements. Validation includes tests conducted on the product during development to prove that the product performs its intended functions correctly. In a disciplined software development process, planning for validation activities begins as requirements are developed; testing activities are critical to determining that a system not only operates effectively but addresses all baseline requirements. To complete validation activities, testing is conducted at several levels, each of which validates that the system will operate effectively at a different level. For example, unit testing validates individual sections of code, and system integration testing ensures that the system as a whole can operate effectively in its environment. User acceptance testing allows the user community to determine whether the system, as developed, can be used to effectively support their work. It also validates that the system meets user expectations. An effective testing process confirms the functionality and performance of the product prior to delivery. It is a crucial process and needs to be well planned, well structured, well documented, and carried out in a controlled and managed way.
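One way to tie the testing activities just described back to the baseline is to tag each test case with the requirement it validates, so that requirements with no linked test can be listed before deployment. The sketch below uses Python's standard unittest module and hypothetical requirement IDs; it illustrates the idea and does not represent BSM's actual test tooling.

```python
# Illustrative sketch: test cases tagged with the (hypothetical) requirement
# IDs they validate, so coverage of the baseline can be checked after a run.
import unittest

BASELINE = {"BR-1", "BR-2"}   # hypothetical baseline requirement IDs
TESTED = set()                # requirement IDs that have at least one linked test case

def validates(req_id):
    """Decorator that records which baseline requirement a test case validates."""
    def wrap(test_func):
        TESTED.add(req_id)
        return test_func
    return wrap

class ReturnProcessingTests(unittest.TestCase):
    @validates("BR-1")
    def test_schema_is_checked_on_receipt(self):
        # Stand-in assertion; a real test would exercise the validation logic.
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main(exit=False)
    print("Baseline requirements with no linked test case:", sorted(BASELINE - TESTED))
```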
The BSM projects provided evidence of validation activities, such as test plans and test results. CADE 1.1 officials provided both test plans and test reports. MeF release 3.2 and F&PC release 1.1 are still in the testing phase; they provided available test plans but do not yet have test reports. Despite the existence of test plans and reports, requirements are not fully documented or fully traced. In addition, while the ELC provides guidance on testing that discusses test planning, activities, and test responsibilities, program officials say that this guidance is limited. BSM's Enterprise Services organization has initiated an effort to review, revise, and enhance test procedures across BSM. Therefore, until BSM ensures full documentation and traceability of requirements, questions about the completeness of its testing will remain. Finally, requirements must be managed through the system development life cycle. We found that the three projects did not fully demonstrate adequate management of their requirements. Although the projects had a formal change control process in place to analyze and manage changes to requirements, associated costs and schedule changes resulting from requirements changes were not always tracked or updated. In addition, projects' documentation did not demonstrate adequate traceability of requirements from the business requirements (high-level requirements) to system requirements (low-level requirements) to test cases. Managing changes to the original requirements is a formal process to identify, evaluate, track, report, and approve these changes. As work products are developed and more is learned about the system that is being developed, information is occasionally found that requires a change to the original requirements. Modifications to project scope or design can also result in requirements changes. Therefore, projects need to manage these changes to requirements in a structured way. The BSM project teams used a change management process to manage changes to requirements that included documenting the rationale for changes, developing assessments of the impact of the change, and obtaining approvals by the Configuration Change Board. However, only the MeF and F&PC teams provided evidence that their cost and schedule baselines were updated when changes to requirements impacted cost and/or schedule. Specifically, CADE officials did not provide any evidence to show how they updated and tracked cost changes resulting from changes to requirements, nor did they provide evidence that the work breakdown structure was updated to reflect schedule changes. F&PC officials provided evidence for tracking changes to the cost and schedule. MeF officials provided a document that tracked the cost implications of changes to requirements and the work breakdown structure to reflect schedule changes. A BSM project official indicated that the project was implementing cost and schedule tracking on its current releases. However, it was not clear whether BSM was doing this consistently or whether it would provide appropriate guidance for tracking cost and schedule. Project teams that do not effectively track cost and schedule changes as a result of changes to requirements will not be able to effectively mitigate the potential impact of these changes to overall cost, schedule, and performance goals, thus raising the risk of cost overruns, schedule delays, and deferral of functionality.
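A minimal sketch of the kind of change-control record discussed above, in which an approved change to requirements carries an assessed cost and schedule impact that is then applied to the project's baselines. The field names and figures are hypothetical and are not drawn from BSM's change-control system.

```python
# Illustrative sketch: an approved requirements change updates the project's
# cost and schedule baselines by its assessed impact. All values hypothetical.
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    change_id: str
    description: str
    cost_impact: float         # dollars added (or saved, if negative)
    schedule_impact_days: int  # days added to the schedule
    approved: bool = False

@dataclass
class ProjectBaseline:
    cost: float
    schedule_days: int

    def apply(self, cr: ChangeRequest) -> None:
        if not cr.approved:
            raise ValueError(f"{cr.change_id} has not been approved by the change board.")
        self.cost += cr.cost_impact
        self.schedule_days += cr.schedule_impact_days

baseline = ProjectBaseline(cost=1_000_000.0, schedule_days=180)
cr = ChangeRequest("CR-042", "Add a late-identified interface requirement",
                   cost_impact=75_000.0, schedule_impact_days=20, approved=True)
baseline.apply(cr)
print(f"Revised baseline: ${baseline.cost:,.0f}, {baseline.schedule_days} days")
```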
Another key element of requirements management is establishing and maintaining the traceability of requirements. Traceability of requirements means tracking each requirement from the inception of the project and agreement on a specific set of business requirements through development of the lower-level system requirements, detailed design, code implementation, and test cases necessary for validating the requirements. Tracing a requirement throughout the development cycle provides evidence that the requirements are met in the developed system and ensures that the product or system will work as intended. Requirements must be traceable forward and backward through the life cycle. Each business requirement must be traceable to associated system requirements and test cases, as illustrated in the sketch that follows this discussion. Without adequate traceability, errors in functionality could occur and not be found until the testing phase, when problems are more costly to fix and time frames for fixing problems without causing a schedule slip for deployment are limited.

Of the three projects, only the F&PC team showed evidence of full traceability of the requirements from high-level requirements to low-level requirements. MeF and CADE documentation did not demonstrate clear traceability from the business requirements to lower-level system requirements, coding, and test cases. MeF program officials acknowledged weaknesses in this area and stated that they planned to develop full bidirectional traceability to the business-level requirements as part of MeF release 4.0. According to project officials, one reason they do not have full bidirectional traceability is the lack of detailed procedures and guidance for traceability of requirements. Until recently, BSM projects were not required to develop and use a traceability matrix. While interim guidance issued by BSM does require the use of traceability matrices and use of its configuration management repository to manage requirements, the guidance lacks the detail needed to ensure that projects meet these criteria. BSM program officials agreed that this was an area that needed additional guidance. The RMO is currently reviewing new guidance on how to improve requirements traceability. Without adequate traceability of requirements, system requirements can be missed during development and the agency cannot be assured that validation activities fully demonstrate that all the agreed-upon requirements have been developed, fully tested, and will work as intended.

BSM lacks policies and procedures to develop and manage requirements for its systems modernization projects. BSM has acknowledged this deficiency since late 2004, when it listed requirements management as one of its high-priority initiatives and created an RMO. The office has now developed draft policies that cover aspects of eliciting, documenting, verifying and validating, and managing requirements. These draft policies are to serve as guidance to project teams as BSM projects are pursued. It is critical that BSM implement these draft policies immediately and continue to develop the remaining policies. The three BSM development projects that we reviewed showed significant differences in how they implemented practices for developing and managing requirements.
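To make the bidirectional traceability described above concrete, the following is a minimal sketch of the kind of check a traceability matrix supports. The requirement and test-case identifiers (BR-1, SR-1.1, TC-7, and so on) are hypothetical illustrations, not items drawn from the BSM projects: every business requirement should map forward to at least one system requirement and test case, and every test case should map backward to a business requirement.

```python
# Minimal illustration of a bidirectional requirements traceability check.
# All identifiers (BR-*, SR-*, TC-*) are hypothetical, not actual BSM
# project requirements.

# Forward links: business requirement -> system requirements -> test cases
business_to_system = {
    "BR-1": ["SR-1.1", "SR-1.2"],
    "BR-2": ["SR-2.1"],
}
system_to_tests = {
    "SR-1.1": ["TC-7"],
    "SR-1.2": ["TC-8", "TC-9"],
    "SR-2.1": [],  # gap: no test case validates SR-2.1
}

def forward_gaps():
    """List business or system requirements with no downstream coverage."""
    gaps = []
    for br, srs in business_to_system.items():
        if not srs:
            gaps.append(f"{br} has no system requirements")
        for sr in srs:
            if not system_to_tests.get(sr):
                gaps.append(f"{sr} (under {br}) has no test case")
    return gaps

def backward_gaps(all_test_cases):
    """List test cases that cannot be traced back to any requirement."""
    traced = {tc for tcs in system_to_tests.values() for tc in tcs}
    return [tc for tc in all_test_cases if tc not in traced]

if __name__ == "__main__":
    print("Forward gaps:", forward_gaps())
    print("Backward gaps:", backward_gaps(["TC-7", "TC-8", "TC-9", "TC-99"]))
```

The gaps such a check reports correspond to the weaknesses described above: system requirements that are never tested, and test cases that validate nothing the stakeholders agreed to.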
Until BSM and the RMO complete the development of policies and procedures to ensure disciplined requirements development and management practices, projects will not have sufficient guidance to ensure implementation of these practices, which will impair their ability to effectively manage the development and acquisition of critical systems and increase the risk of cost overruns, schedule delays, and deferral of functionality.

To improve the requirements development and management policies and practices of the IRS's BSM, we recommend that the Commissioner of Internal Revenue direct the Associate Chief Information Officer for BSM to take the following two actions:

1. Ensure that BSM completes the delivery of policies and procedures for requirements development and management as planned. The policies and procedures should fully describe the processes, include a minimum set of activities required for each project, and provide detailed procedures for each of the key areas of requirements elicitation, documentation, verification and validation, and management. As part of this effort, the policies and procedures should specifically include the following:
- A standardized process for the elicitation of requirements that ensures that projects fully investigate the requirements needed for a specific system, including gathering requirements from all relevant users and stakeholders.
- A standardized process for the documentation of requirements that ensures full documentation of the baseline requirements.
- A process for ensuring that formal peer reviews are planned and completed for key products.
- Guidance on tracking the cost and schedule impacts of changes to requirements for all projects.
- Guidance on establishing and maintaining full bidirectional requirements traceability.

2. In addition, because BSM has ongoing projects that are developing and managing requirements and the development of new policies and procedures is not scheduled to be complete until March 2007, immediately implement the draft policies while the final policies and procedures are being developed.

In providing written comments on a draft of this report, the Commissioner of Internal Revenue agreed with our findings and stated that the report provided a sound and balanced representation of the progress IRS has made to date as well as the work that remains to be completed. The Commissioner also described the actions that IRS is taking to implement our recommendations, including establishing a schedule to complete the development of policies that address the areas of requirements elicitation, documentation, verification and validation, and management. The Commissioner's written comments are reprinted in appendix III.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. Copies are also available at no charge on the GAO Web site at http://www.gao.gov. Should you or your offices have questions on matters discussed in this report, please contact David A.
Powner at (202) 512-9286 or pownerd@gao.gov or Keith A. Rhodes at (202) 512-6412 or rhodesk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

The objectives of our review were to assess (1) whether the Requirements Management Office (RMO) has established adequate requirements development and management policies and procedures and (2) whether the Business Systems Modernization (BSM) program has effectively used requirements development and management practices for key systems development efforts. To assess the adequacy of BSM's requirements development and management policies and procedures, including IRS's Enterprise Life Cycle (ELC), we compared them against criteria based on industry standards and best practices, including the Software Engineering Institute's (SEI) Capability Maturity Model Integration (CMMI) version 1.1. We also reviewed draft policies and procedures provided by the RMO in February 2006 and compared them against these criteria. In addition, we interviewed appropriate BSM officials to discuss the creation and goals of the RMO and whether there were BSM requirements development and management policies and procedures in place.

To assess whether BSM project teams effectively used requirements development and management practices on their systems acquisitions, we selected three BSM projects to review: (1) Modernized e-File (MeF) release 3.2, which is to be deployed in March 2006; (2) Filing and Payment Compliance (F&PC) release 1.1, which was deployed in January 2006; and (3) Customer Account Data Engine (CADE) Individual Master File release 1.1, which was deployed in July 2004. We selected these investments because they were (1) important to the goals and mission of the agency, (2) large-scale development efforts with significant costs, and (3) at different points in their development life cycles. To evaluate whether each of the three projects had effectively used requirements development and management practices for key systems development efforts, we compared each project's documentation and processes against criteria based on industry standards and best practices, including SEI's CMMI version 1.1. The documentation reviewed for each of the projects included requirements management plans, traceability matrices, testing plans, baseline requirements, and other items. We also interviewed the program officials from each of these three projects to further clarify issues on their requirements development and management activities. Our work was performed from June 2005 to February 2006 in Washington, D.C., in accordance with generally accepted government auditing standards.

The following are descriptions of the three projects we selected to review: Modernized e-File (MeF) release 3.2, Filing and Payment Compliance (F&PC) release 1.1, and Customer Account Data Engine (CADE) release 1.1. In fiscal year 2004, IRS introduced the Modernized e-File (MeF) system, which allows e-filing for tax-exempt organizations and large corporations and reduces the time to process their tax forms. The goal for MeF is to replace the current e-filing technology with a modernized, Internet-based electronic filing system. MeF is also expected to result in an increase in the use of electronic filing because it is efficient and easy to access, use, and maintain.
Projected benefits of the MeF program are as follows:
- Reduction in IRS's effort associated with receiving, processing, manually entering data, and resolving data entry errors from paper returns;
- Reduction in system maintenance costs;
- Savings in time and money for taxpayers and tax practitioners, who no longer need to copy, assemble, and mail a return; and
- Sharing of tax and information return data electronically with state agencies.

The MeF project is projected to provide the capability for Internet-based filing of 330 different IRS forms. Table 2 describes the MeF releases deployed and their functionality, and table 3 describes MeF financial data.

The Filing and Payment Compliance (F&PC) project is intended to improve technologies and processes that support IRS's compliance activities. According to IRS, its collection operations rely on 30-year-old technology and processes that are no longer compatible with the realities of today's taxpayer environment. F&PC plans to provide support for detecting, scoring, and working nonfiler and payment delinquency cases. It is to use advanced software to analyze tax collection cases and divide them into cases that require IRS involvement and those that can be handled by private collection agencies. Case attributes are to be identified, segmented, and prioritized to select the individual taxpayer cases that have a greater probability of paying the tax liabilities in full or through installment agreements. The F&PC project is also to serve as an inventory management system to assign, exchange, monitor, control, and update delinquent taxpayer accounts between the IRS Authoritative Data Source and the private collection agencies with which IRS will contract. The F&PC project is expected to increase collection case closures by 10 million annually by 2014, voluntary taxpayer compliance, and IRS's capacity to resolve the buildup of delinquent taxpayer cases. IRS intends to deliver an initial limited private debt collection capability in January 2006. Full implementation of this aspect of F&PC is projected to be completed by January 2008, with additional functionality to follow in later years. Table 4 describes the F&PC releases deployed and their functionality, and table 5 describes F&PC financial data.

The Customer Account Data Engine (CADE), intended to replace IRS's antiquated tax administration system, is BSM's highest priority project and is intended to house tax information for more than 200 million individual and business taxpayers. The CADE databases and related applications are also to enable the implementation of other systems that will improve customer service and compliance and allow the online posting and updating of taxpayer account and return data. The CADE project is intended to generate refund notices, detect potential fraudulent transactions, and replace the group of IRS tax master files with a single database—the Tax Account Data Store; accept, validate, and store taxpayer return and account data, along with financial account activity data, such as tax payments, liabilities, and installment agreements; and enable future business application systems. In July 2004 and January 2005, IRS implemented the initial releases of CADE, which have been used to process Form 1040EZ returns. CADE posted more than 1.4 million returns for filing season 2005 and generated more than $427 million in refunds.
In 2006, CADE is expected to expand the number and types of returns it processes beyond the Form 1040EZ. IRS is also projecting that CADE will process 33 million returns during 2007. Table 6 describes the CADE releases deployed and their functionality, and table 7 describes CADE financial data.

In addition to those named above, Neil Doherty, Nancy Glover, George Kovachick, Tonia Johnson, Tammi Nguyen, Madhav Panwar, and Rona Stillman made key contributions to this report.

The Internal Revenue Service's (IRS) effort to modernize its tax administrative and financial systems--Business Systems Modernization (BSM)--has suffered delays and cost overruns due to a number of factors, including inadequate development and management of requirements. Recognizing these deficiencies, IRS created a Requirements Management Office (RMO) to establish policies and procedures for managing requirements. GAO's objectives were to assess (1) whether the office has established adequate requirements development and management policies and procedures and (2) whether BSM has effectively used requirements development and management practices for key systems development efforts. BSM does not yet have adequate policies and procedures in place to guide its systems modernization projects in developing and managing requirements. In January 2006, the RMO developed a set of draft policies that address some key areas of requirements development and management; these policies are to serve as interim guidance while the final policies and processes are being developed. At the conclusion of GAO's review, the RMO also provided a high-level plan that includes milestones for completing these policies. Since critical BSM projects continue to be pursued and completion of the policies and procedures is not expected until March 2007, it is critical that BSM immediately implement the draft policies and continue to develop the final policies. As a result of the lack of policies and procedures, the one ongoing project--Modernized e-File (MeF)--and the two completed projects--Filing and Payment Compliance (F&PC) and Customer Account Data Engine (CADE)--that GAO reviewed did not consistently follow disciplined practices for systems development and management. For example, all three projects had a key element of managing requirements--a change management process that requires approvals and impact assessments to be completed when there are changes to requirements--but none met all of the practices needed for effective requirements management. In addition, two projects did not have a clear, consistent way to elicit (gather) requirements, two did not have fully documented requirements, and two could not produce fully traceable requirements (i.e., the requirements could not be tracked through development and testing), which is another key element of managing requirements. Unless IRS takes the steps needed to develop and institutionalize disciplined requirements development and management processes and implements draft policies in the interim to cover key areas of requirements development and management, it will continue to face risks, including cost overruns, schedule delays, and performance shortfalls.
A working capital fund relies on sales revenue rather than direct appropriations to finance its continuing operations. A working capital fund is intended to (1) generate sufficient resources to cover the full costs of its operations and (2) operate on a break-even basis over time—that is, neither make a gain nor incur a loss. Customers use appropriated funds, primarily Operation and Maintenance appropriations, to finance orders placed with the working capital fund. DOD estimates that in fiscal year 2006, the Defense Working Capital Fund—which consists of the Army, Navy, Air Force, Defense-wide, and Defense Commissary Agency working capital funds—will have revenue of about $105 billion. The Defense Working Capital Fund finances the operations of three fundamentally different types of support organizations: (1) stock fund activities, which provide spare parts and other items to military units and other customers; (2) industrial activities, which provide depot maintenance, research and development, ordnance, and other services to their customers; and (3) other service activities, which provide various services such as accounting (Defense Finance and Accounting Service) and computer services (Defense Information Systems Agency). Because carryover is primarily associated with industrial operations, this report discusses the results of our review of Defense Working Capital Fund industrial operations.

Carryover is the dollar value of work that has been ordered and funded (obligated) by customers but not completed by working capital fund activities at the end of the fiscal year. Carryover consists of both the unfinished portion of work that has been started and requested work that has not yet commenced. Some carryover is necessary at fiscal year end if working capital funds are to operate efficiently and effectively. For example, if customers do not receive new appropriations at the beginning of the fiscal year, carryover is necessary to ensure that the working capital fund activities have enough work to ensure a smooth transition between fiscal years. Too little carryover could result in some personnel not having work to perform at the beginning of the fiscal year. On the other hand, too much carryover could result in an activity group receiving funds from customers in one fiscal year but not performing the work until well into the next fiscal year or subsequent years. By minimizing the amount of carryover, DOD can use its resources in the most effective manner and minimize the "banking" of funds for work and programs to be performed in subsequent years.

In 1996, DOD established a 3-month carryover standard for all working capital fund activities except for the contract portion of the Air Force depot maintenance activity group. In May 2001, we reported that DOD did not have a basis for its carryover standard and recommended that DOD determine the appropriate carryover standard for the depot maintenance, ordnance, and research and development activity groups. According to Office of the Under Secretary of Defense (Comptroller) officials, DOD provided verbal guidance concerning its new carryover policy for working capital fund activities in December 2002. Subsequently, DOD included its revised carryover policy in its DOD Financial Management Regulation 7000.14-R, Volume 2B, Chapter 9, dated June 2004, which eliminated the 3-month standard for allowable carryover.
Under the new policy, the allowable amount of carryover is to be based on the outlay rate of the customers' appropriations financing the work. This meant that, in determining allowable carryover, the first-year outlay rate would be used for new orders received in the current year (the first year of the work order). According to the DOD regulation, this new metric allows for an analytically based approach that holds working capital fund activities to the same standard as general fund execution and allows for more meaningful budget execution analysis. Further, based on our work on Army depot maintenance operations, we recommended in our June 2005 report that DOD clarify its written guidance for calculating the actual amount of carryover as well as the allowable amount of carryover. On June 29, 2005, DOD issued clarifying guidance on carryover. The guidance specified that (1) the actual amount of carryover associated with current and prior year orders is required to be the amount reported to Congress and within DOD, (2) the allowable amount of carryover is to be calculated based on current year customer orders received and the first-year outlay rate for the appropriations financing those orders for all activity groups except shipyards, and (3) shipyards are authorized to use 2 years of customer orders in the calculation of the allowable amount of carryover and to use the first- and second-year outlay rates for the appropriations financing those orders.

The military services have not consistently implemented DOD's 2002 revised policy in calculating carryover. Instead, since DOD changed its carryover policy in December 2002, the military services have used different methodologies for calculating the reported actual and allowable amounts of carryover. Specifically, (1) the military services did not consistently calculate the allowable amount of carryover that was reported in their fiscal year 2004, 2005, and 2006 budgets because they used different tables (both provided by DOD) that contained different outlay rates for the same appropriation; (2) the Air Force did not follow DOD's regulation on calculating carryover, which affected the amount of allowable carryover and actual carryover by tens of millions of dollars and whether the actual amount of carryover exceeded the allowable amount as reported in the fiscal year 2004, 2005, and 2006 budgets; and (3) the Army depot maintenance and ordnance activity groups' actual carryover was understated in fiscal years 2002 and 2003 because carryover associated with prior year orders was not included in the carryover calculation as required. Further, while the Navy generally followed DOD's policy for calculating carryover, the Navy consolidated the reporting of carryover information for research and development activities. As a result, the Navy budgets no longer provide information to show whether any of the five research and development subactivity groups individually exceeds the carryover ceiling. This information had been provided in the Navy budgets prior to the change in the carryover policy in December 2002. The primary reason for these inconsistencies is that DOD's December 2002 guidance was verbal and DOD did not issue detailed written procedures for calculating carryover and the allowable amount of carryover until June 2004. Afterwards, DOD issued clarifying written guidance in June 2005, January 2006, and February 2006.
As a result, year-end carryover data provided to decision makers who review and use these data for budgeting—Office of the Under Secretary of Defense (Comptroller) and congressional decision makers—are erroneous and not comparable across the three military services. The military services used different outlay rate tables that provided different outlay rates for the same appropriation when calculating the allowable amount of carryover. The outlay rate tables came from two sources—the Office of the Under Secretary of Defense (Comptroller), Revolving Funds Directorate, and the Financial Summary Tables published by Office of the Under Secretary of Defense (Comptroller), Directorate for Program and Financial Control. Because the outlay rates in these documents sometimes differ, this could affect whether an activity group exceeded the carryover ceiling or not. Under the new carryover policy, the allowable amount of carryover is to be based on the outlay rates of the customers’ appropriations financing the work. In implementing this policy, it is important for the services to use the same outlay rate tables so that their calculations on the allowable amount of carryover are consistent. However, when DOD changed the carryover policy in December 2002, DOD did not instruct the services, in writing, on which outlay rate tables should be used to calculate the allowable amount of carryover. Table 1 shows which outlay rate source each of the military services used. Table 2 shows the differences between the outlay rates for selected appropriations in the tables provided by the Office of the Under Secretary of Defense (Comptroller) and the DOD Financial Summary Tables that were used to calculate the allowable amount of carryover, which is included in the fiscal year 2005 budget. Some of the differences are large while others are small. These outlay rates, along with the amount of appropriations financing orders, are used to determine the allowable carryover (ceiling). Because the dollar amount of appropriations financing orders is sometimes in the hundreds of millions of dollars, even a small rate difference could result in significantly more or less allowable carryover. For example, the Navy estimated that the naval aviation depots would receive $694 million for new Operation and Maintenance, Navy orders in fiscal year 2005. Using the outlay rate provided in the DOD Financial Summary Tables, the Navy would be allowed to carry over about $146 million. In contrast, using the outlay rate table provided by the Office of the Under Secretary of Defense (Comptroller), the Navy would be allowed to carry over $180 million—about a $34 million difference for just this one appropriation financing orders received by the naval aviation depots. In addition to using different outlay rate tables, there appeared to be uncertainties regarding which year’s outlay rates to use. For the fiscal year 2006 budget, the Army and Navy used the DOD Financial Summary Tables to determine the appropriation outlay rates used in calculating the allowable amount of carryover. These tables contain different appropriation outlay rates for each fiscal year. Military service officials stated that DOD had not provided any written guidance on whether the services should use the fiscal year 2004 or 2005 outlay rates or both when determining the allowable amount of carryover in preparing the fiscal year 2006 budget. An excerpt of the outlay rates from the DOD Financial Summary Tables dated February 2004 follows. 
The Navy used the fiscal year 2005 outlay rates for calculating the allowable amount of carryover for fiscal years 2004, 2005, 2006, and 2007—the fiscal years that are included in the fiscal year 2006 budget. The Army used the same document but instead used the fiscal year 2004 outlay rates for calculating the allowable carryover for fiscal year 2004. The Army used the fiscal year 2005 rates for calculating the allowable carryover for fiscal years 2005, 2006, and 2007. While this might appear to be a small matter because the rates are generally the same or almost the same from one fiscal year to the next, using the different rates (2004 versus 2005) for calculating the allowable carryover for the Army industrial operations activity group in fiscal year 2004 results in a different outcome. Based on its calculations, the Army reported that its actual carryover was $141 million below the ceiling for fiscal year 2004. However, using the fiscal year 2005 rates (the rates that the Navy used) would show the Army exceeded the ceiling by about $275 million—a swing of $416 million. This difference is attributable to the outlay rate for the Operation and Maintenance, Army appropriation being 52.03 percent for fiscal year 2004 but 68.8 percent for fiscal year 2005. According to Army officials, the outlay rate varied significantly for these 2 fiscal years because of the supplemental appropriations received during fiscal year 2004.

Based on our work involving the Army depot maintenance activity group, we recommended in our June 2005 report that DOD clarify its written guidance for calculating the actual amount of carryover as well as the allowable amount of carryover. DOD concurred with our recommendations, and on June 29, 2005, it issued clarifying guidance on carryover. Among other things, the guidance specified that (1) the allowable amount of carryover is to be calculated based on current year customer orders received and the first-year outlay rates for the appropriations financing those orders for all activity groups except shipyards and (2) the outlay rates are to be based on historic outlay rates in the DOD Financial Summary Tables. DOD's guidance clarifies which source document should be used to identify the outlay rates. However, it does not address which fiscal year's rates within the DOD Financial Summary Tables are to be used.

During our current review, we informed Office of the Under Secretary of Defense (Comptroller) officials that the services did not always comply with DOD's policy on calculating the allowable amount of carryover. Specifically, the services (1) did not always use the correct outlay rate tables in determining the amount of allowable carryover and (2) used different outlay rates contained in the DOD Financial Summary Tables for calculating the allowable amount of carryover for specific fiscal years. In responding to our discussions, DOD took two actions. First, DOD included carryover guidance in its January 17, 2006, memorandum on the fiscal year 2007 budget justification book material for Congress. This guidance specifies that the services are to use the fiscal year 2006 DOD Financial Summary Tables to calculate carryover. The guidance further specifies that the services must use the rates in the DOD Financial Summary Tables unless a waiver is approved in writing by the Office of the Under Secretary of Defense (Comptroller), Director for Revolving Funds.
Second, in February 2006, the Office of the Under Secretary of Defense (Comptroller) provided additional guidance to the services for the fiscal year 2007 budget specifying that the (1) fiscal year 2005 outlay rates in the DOD Financial Summary Tables will be used for calculating the allowable amount of carryover for fiscal year 2005 and (2) fiscal year 2006 outlay rates in the DOD Financial Summary Tables will be used for calculating the allowable amount of carryover for fiscal years 2006 and 2007. In reviewing the Air Force carryover figures shown in the fiscal year 2004, 2005, and 2006 budgets to Congress, we found a number of problems with how the Air Force calculated the reported actual as well as the allowable amount of carryover for the depot maintenance activity group. These problems significantly affected the determination of allowable carryover and whether the Air Force depot maintenance activity group exceeded that ceiling. With one exception, the Air Force took action and corrected the problems when preparing the fiscal year 2007 budget. These problems are discussed below. The Air Force used the fiscal year 2001 outlay rates provided by the Office of the Under Secretary of Defense (Comptroller) to determine the allowable amount of carryover in the fiscal year 2004 budget. This was the appropriate outlay rate table to use for that budget. However, even though the Office of the Under Secretary of Defense (Comptroller) provided updated outlay rates for the next fiscal year, the Air Force did not use the updated outlay rates when calculating its allowable carryover in the fiscal year 2005 budget. Instead, the Air Force continued to use the fiscal year 2001 outlay rates. Moreover, the Air Force continued to use the fiscal year 2001 outlay rates to calculate the allowable carryover in the fiscal year 2006 budget instead of using the updated outlay rates published by DOD. The Air Force used all orders received (both prior year and current year orders) in calculating the allowable amount of carryover in the fiscal year 2004, 2005, and 2006 budgets. For example, in calculating the allowable carryover for fiscal year 2004, the Air Force included about $1.8 billion of prior year orders in the calculation. DOD carryover policy states that only current year orders should be used in determining the allowable carryover. The Air Force method of including all orders allowed too much carryover. The Air Force excluded orders received from the U.S. Transportation Command when calculating the amount of actual carryover in the fiscal year 2004, 2005, and 2006 budgets. DOD Financial Management Regulation 7000.14-R, Volume 2B, Chapter 9, permits excluding some orders financed with non-DOD funds, such as orders received from foreign countries, but does not permit excluding U.S. Transportation Command orders. For example, because the Air Force excluded about $214 million of U.S. Transportation Command orders when calculating its actual carryover for fiscal year 2004, its carryover was understated. The Air Force’s fiscal year 2006 budget to Congress expressed carryover in equivalent months of work (this is the old method of reporting carryover) rather than in terms of the allowable and actual carryover dollar amounts as required by DOD Financial Management Regulation 7000.14-R, Volume 2B, Chapter 9. The problems cited above had a significant impact on the amount of allowable carryover and actual carryover and whether the actual carryover exceeded the allowable amount, as shown in table 4. 
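The arithmetic behind these ceilings helps explain why the choice of outlay rate table and the set of orders included matter so much. The report's figures are consistent with the allowable carryover being computed as new orders multiplied by the share of the appropriation not expected to be outlaid in the first year, that is, orders x (1 - first-year outlay rate); the sketch below applies that relationship, which is an inference from the examples above rather than a quotation of the DOD formula, to the naval aviation depot figures discussed earlier.

```python
# Illustrative sketch of the carryover-ceiling arithmetic inferred from the
# report's examples: allowable carryover = new orders x (1 - first-year
# outlay rate). The exact DOD formula may differ; the two rates below are
# backed out from the naval aviation depot example ($694 million of
# estimated Operation and Maintenance, Navy orders for fiscal year 2005).

def allowable_carryover(new_orders, first_year_outlay_rate):
    """Dollars of customer orders that may remain unfinished at year end."""
    return new_orders * (1.0 - first_year_outlay_rate)

orders = 694.0  # millions of dollars of new orders

# Roughly 79 percent outlay (DOD Financial Summary Tables) yields about
# $146 million; roughly 74 percent (Comptroller outlay table) yields about
# $180 million, the $34 million difference discussed in the report.
for label, rate in [("Financial Summary Tables", 0.79),
                    ("Comptroller outlay table", 0.74)]:
    print(f"{label}: ceiling of about ${allowable_carryover(orders, rate):.0f} million")
```

Because the ceiling scales directly with the dollar value of orders financed by an appropriation, which can run to hundreds of millions of dollars, a difference of even a few percentage points between outlay rate tables, or the inclusion of prior year orders that should be excluded, shifts the ceiling by tens of millions of dollars, the pattern seen in both the Navy and Air Force examples.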
When we discussed the carryover calculations with Air Force officials, they agreed that the Air Force had not calculated either the allowable amount of carryover or the actual amount of carryover correctly in the fiscal year 2004, 2005, and 2006 budgets. They informed us that in preparing the carryover information contained in the fiscal year 2004 budget, DOD budget analysts who review the budget information, including the carryover information, did not raise questions about the Air Force carryover calculations. Accordingly, the Air Force continued to use the same methodology for calculating the allowable carryover and actual carryover that was included in the fiscal year 2005 and 2006 budgets to Congress. Based on our discussions, these officials informed us that the Air Force would be developing the carryover figures that will be used in the fiscal year 2007 budget in accordance with DOD policy.

In reviewing the Air Force fiscal year 2005 carryover calculations included in the fiscal year 2007 budget, we determined that the Air Force was complying with DOD's carryover policy with one exception. For orders financed with the Air Force working capital fund supply account, the Air Force used a 61 percent outlay rate to calculate its allowable carryover instead of the 73.5 percent outlay rate for the Air Force operation and maintenance appropriation contained in the DOD Financial Summary Tables and required by the Office of the Under Secretary of Defense (Comptroller), Revolving Fund Directorate. Using the 61 percent figure, the Air Force reported that its actual carryover for fiscal year 2005 was about $193 million under the carryover ceiling. However, if the Air Force had used the 73.5 percent outlay rate, our analysis shows that the fiscal year 2005 actual carryover would have exceeded the carryover ceiling by about $148 million. When we discussed the outlay rate difference with the Air Force, officials stated that they used the 61 percent figure because the rate was more consistent with the actual outlay rate of the Air Force working capital fund supply account. However, the Air Force could not provide us with documentation supporting how it arrived at the 61 percent figure. On February 7, 2006, the Air Force requested that the Office of the Under Secretary of Defense (Comptroller) allow it to use the 61 percent figure in developing the carryover ceilings contained in the fiscal year 2007 budget. The Office of the Under Secretary of Defense (Comptroller) approved the Air Force's request on March 6, 2006.

In June 2005, we reported that the Army understated the reported actual carryover for the depot maintenance activity group for fiscal years 2002 and 2003 because it interpreted DOD's 2002 carryover guidance as requiring only the inclusion of customer orders received in the current year when calculating actual carryover. During this review, we found that this same problem affected the reported actual carryover for the ordnance activity group. Thus, the Army did not include customer orders received in prior years and the carryover related to those orders. The Army corrected this problem and included all carryover when it prepared its fiscal year 2006 budget. Table 5 provides information on the actual amount of carryover reported to Congress for fiscal years 2002 and 2003 and the amount of carryover not included. Army officials at headquarters acknowledged that the reported actual carryover did not include carryover related to prior year orders.
Although DOD changed its carryover policy in December 2002, it did not issue detailed written procedures for calculating actual carryover until June 2004. Army headquarters officials stated that prior to the issuance of the written guidance in June 2004, the new carryover calculation was based on verbal instructions that the Army received from the Office of the Under Secretary of Defense (Comptroller). The Army said they interpreted the new guidance to include only actual carryover on orders received in the current year and instructed the Army Materiel Command to calculate carryover accordingly. When DOD issued the revised DOD regulation in June 2004, Army officials said they realized that they were not calculating reported actual carryover correctly and changed their methodology in developing the fiscal year 2006 budget so that the actual carryover calculation would include prior year orders and be in accordance with DOD’s written guidance. In analyzing the Navy’s actual carryover figures for the naval shipyards, aviation depots, and research and development activity groups shown in the fiscal year 2004, 2005, 2006, and 2007 budgets to Congress, we found that the Navy generally followed DOD’s policy on calculating the actual amount of carryover as well as the allowable amount of carryover. Our analysis of the Navy budgets submitted to Congress shows that the naval aviation depots have consistently exceeded the carryover ceilings as shown in table 6. According to Navy budget documents and officials, the reasons why the actual reported carryover exceeded the ceiling for the aviation depots were (1) the lack of material to repair the components being fixed; (2) the increased deterioration of components, leading to longer repair cycles; and (3) the large dollar amount of orders financed with supplemental appropriations for fiscal year 2003. While the budgets show that the Navy research and development activity group did not exceed the ceiling for any of the 4 years, the budgets no longer provide information that shows if any of the five subactivity groups individually exceeded the carryover ceiling, as the Navy budgets did prior to the change in the carryover policy in December 2002. Prior to the Office of the Under Secretary of Defense (Comptroller) changing its carryover policy in December 2002, the Navy Working Capital Fund budget provided carryover information, such as the dollar amount of carryover and the number of months of carryover for each of the subactivity groups. An analysis of the budget documents would show if any of the subactivity groups exceeded the 3-month carryover standard. After DOD changed the carryover policy in December 2002, the Navy changed the level of reporting carryover information to be at the aggregate level and no longer provided carryover information at the subactivity group level. Our analysis of Navy reports showed that the Naval Air Warfare Center exceeded the ceiling for fiscal years 2003, 2004, and 2005, and the Naval Surface Warfare Center exceeded the ceiling for fiscal year 2002, as shown in table 7. According to the Navy, there are two reasons why carryover should be reported at the activity group level and not at the subactivity group level. First, the Office of the Under Secretary of Defense (Comptroller) required the research and development activities to use a higher outlay rate for orders financed with procurement appropriations than the official published procurement outlay rates. 
Using a higher procurement outlay rate for calculating the carryover ceiling lowers the amount of allowable carryover. Because of this higher rate, a Navy official stated that the carryover should be reported at the aggregate level since subactivity groups reporting under the ceiling will offset those subactivity groups reporting over the ceiling. Second, the new methodology did not allow the exclusion of intrafund orders from the carryover calculation. These are orders placed by one research and development activity with another research and development activity. Since intrafund orders were no longer allowed to be excluded from the carryover calculation, this resulted in the double counting of actual carryover associated with these intrafund orders. Because the above two reasons reduce the carryover ceiling and increase the actual amount of carryover, the Navy reports the carryover information at the activity group level. However, we believe that the carryover associated with the research and development activity group should be reported at the subactivity group level for several reasons. First, according to the fiscal year 2007 budget, the research and development activity group is the largest Navy activity group—it received about $10 billion of new orders and carried over about $3.5 billion for fiscal year 2005. By comparison, for fiscal year 2005, the Navy shipyards received about $1.8 billion of new orders and carried over about $636 million, and the aviation depots also received about $1.8 billion and carried over about $470 million. Further, the dollar amount of new orders received by three research and development subactivity groups (Naval Surface Warfare Center--$3.4 billion, Naval Air Warfare Center--$2.7 billion, and Space and Naval Warfare Systems Centers--$2.2 billion) exceeded the amount of new orders received by the shipyards and aviation depots for fiscal year 2005. Because of the dollar magnitude of research and development subactivity groups, we believe carryover reporting at the subactivity group level is needed for Congress and DOD to maintain oversight. Second, concerning the Navy’s comment on using a higher outlay rate for calculating the carryover ceiling, we agree with the Office of the Under Secretary of Defense (Comptroller) that the Navy should use a higher rate. We also believe that the Navy should report carryover information at the subactivity group level from a disclosure standpoint. Otherwise, subactivity groups reporting under the ceiling will offset those subactivity groups reporting over the ceiling and this information would not be available in the budgets to Congress. In the December 2002 management initiative decision, the Office of the Under Secretary of Defense (Comptroller) stated that research and development activities could achieve better results than the established outlay rates for orders financed with procurement appropriations because of the type of work performed by these activities. DOD further stated that 45 percent of the fiscal year 2002 carryover was linked to contractual efforts and 55 percent supported in-house requirements. DOD concluded that carryover linked to contractual obligations would disburse at the procurement appropriations rate. However, the amount supporting the in-house requirements would disburse at a higher rate because such requirements tend to be funded on an annualized basis. 
The Office of the Under Secretary of Defense (Comptroller) requested that the Navy examine the nature and scope of the procurement-funded work and report its recommendations by February 15, 2003, to the Comptroller. At the time DOD issued the management initiative decision in December 2002, the Navy reported carryover information to Congress at the subactivity group level. The Navy determined, based on work performed by one Warfare Center, that the outlay rate for Navy procurement appropriations should be 40 percent, which is higher than the actual outlay rate for these appropriations. However, the December 2002 management initiative decision did not discuss the Navy changing its level of reporting carryover information from the subactivity group level to the aggregate level. Third, concerning the Navy’s comments on intrafund orders, the Navy is correct in that the amount of actual carryover will be double counted. However, the effect of this is negated since the amount of allowable carryover is also double counted since both of these activities will include the intrafund order as a new order and include the new order in their calculations for determining the allowable amount of carryover. Furthermore, in May 2001 we reported that the Navy was not following DOD’s guidance on calculating carryover on intrafund orders. Specifically, Navy working capital fund activities reduced carryover for orders received from other working capital fund activities. However, Navy working capital fund activities categorized orders they sent to other working capital fund activities as contractual obligations and used these obligations to reduce reported year-end carryover. As a result, not only did the Navy eliminate the double counting of such orders, it eliminated all these orders from its calculations, thus understating the equivalent number of months of carryover work. Carryover is greatly affected by orders accepted late in the fiscal year that generally cannot be completed, and in some cases cannot even be started, prior to the end of the fiscal year. As a result, almost all orders accepted late in the fiscal year increase the amount of carryover. We analyzed 68 orders accepted in September 2003 and September 2004 by certain activity groups for the three military services. Our analysis identified four key factors contributing to orders generally being issued by customers late in the fiscal year and being accepted by the working capital fund activities during the last month of the fiscal year. These reasons included (1) funds provided to customers late in the fiscal year to finance existing requirements, (2) new work requirements identified at year end, (3) problems encountered in processing orders, and (4) work scheduled at year end. Further, our analysis showed that 39 of the 68 orders—over half of the orders reviewed—were not complete at the end of the next fiscal year, generating a second year of carryover. In addition to increasing carryover amounts, orders accepted by working capital fund activities late in the fiscal year, in which these activities do not perform the work until well into the next fiscal year or even subsequent years, may not (1) be the most effective use of DOD resources at that time and (2) have complied with all of the order acceptance provisions cited in the DOD Financial Management Regulation. As noted in our scope and methodology (app. 
I), the scope of our work for this review did not include determining whether there was a bona fide need for the work being ordered by customers. As shown in figure 1, our review of 68 fiscal year-end orders for 2003 and 2004 identified four key factors contributing to orders generally being issued by customers late in the fiscal year and being accepted by the working capital fund activities during the last month of the fiscal year. As depicted in figure 1, the factor contributing most frequently to orders being accepted by working capital fund activities late in the fiscal year—29 of the 68 orders (43 percent) we reviewed—is the late receipt of funds from customers to finance existing requirements. DOD customers stated that it is common for the military services to provide funds to them late in the fiscal year after the military services review their programs to identify funds that will not be obligated by year end. When these funds are identified, the military services realign the funds to programs that can use them. These funds are then used to finance orders placed with working capital fund activities at year end. Further, in fiscal years 2003 and 2004, the military services received supplemental appropriations from Congress to fund ongoing military operations. Some of these funds were distributed to DOD customers late in the fiscal year to finance repairs on DOD assets. The following examples illustrate situations when funds were provided to customers late in the fiscal year. On September 4, 2003, the Ogden Air Logistics Center accepted an order from the Air Force Ground Theater Air Control System program office totaling about $4.8 million financed with operation and maintenance funds that would have expired on September 30, 2003. This order provided for Ground Theater Air Control System hardware and software upgrades. According to program office officials, the Air Combat Command traditionally funds about 60 to 70 percent of its total software development requirements annually. However in August 2003, the Command provided the program office with funding to cover 100 percent of its fiscal year 2003 software requirements. Thus, the program office applied the funds to its next highest priority workload and issued the $4.8 million order. On September 27 and 29, 2003, the Space and Naval Warfare Systems Center in San Diego accepted two orders from the U.S. Pacific Fleet totaling approximately $4.15 million financed with operation and maintenance funds that would have expired on September 30, 2003. These two orders were to provide the technical and engineering support for the relocation of a Sea-Based Battle Laboratory from the USS Coronado to a new ashore headquarters activity. The Pacific Fleet identified this requirement in early fiscal year 2003; however, funds were not made available until the end of the fiscal year, when additional funds were identified from other programs. On September 26, 2003, the Red River Army Depot accepted an order from the Army Tank-automotive and Armaments Command totaling $17.9 million financed with operation and maintenance funds that would have expired on September 30, 2003. The order was for the repair and upgrade of 41 Bradley Fighting Vehicles needed to support the war effort in Iraq and Afghanistan. These vehicles were to be prepositioned in the theater of operation. 
According to a Tank-automotive and Armaments Command official, the order was issued late in fiscal year 2003 because the Army Materiel Command did not provide the command with funding until September 2003. An Army Materiel Command official noted that the effort was funded by a supplemental appropriation used to support war operations.

The second most significant factor that contributed to the year-end orders we reviewed—17 of the 68 orders (25 percent)—was the identification of new requirements at year end. Examples of DOD customers identifying requirements at year end include the following: (1) a Navy aviation depot, in performing scheduled maintenance, identified damage to aircraft beyond what was originally included in its statement of work; (2) an Army depot identified inspection requirements at year end to keep ammunition storage inspections current and to satisfy requisitions to support the soldiers in the field; (3) Navy aircraft repair requirements were moved from fiscal year 2004 to fiscal year 2003 to meet an earlier deployment schedule; (4) the Army identified new requirements at year end for repair of Army assets necessary to support ongoing military operations; and (5) the Navy identified the need for additional capabilities for several aircraft and also needed to perform emergency repairs on one of its aircraft carriers. Two examples of some of the reasons for new requirements being identified at year end follow.

On September 27 and 30, 2004, the Space and Naval Warfare Systems Center in Charleston accepted an order and an amendment from the Commander, Naval Air Force, U.S. Atlantic Fleet, totaling $425,000 financed with operation and maintenance funds that would have expired on September 30, 2004. A fleet official stated that the fleet had received a casualty report from the USS Harry S. Truman on September 24, 2004, indicating that repairs needed to be made to the ship's announcing system. An activity official stated that the order was accepted regardless of carryover concerns due to the urgency associated with a casualty report. Additionally, a fleet official noted that the time to complete the needed repairs was limited due to the ship's impending deployment.

On September 8, 2004, the Army Rock Island Arsenal accepted an order from the Tank-automotive and Armaments Command totaling about $1.4 million financed with operation and maintenance funds that would have expired on September 30, 2004. The order was for the reconditioning of chemical biological protective shelters. The shelters mount on high-mobility, multipurpose wheeled vehicles and provide an environmentally controlled work area that filters out nuclear, biological, and chemical agents. According to a logistics manager, in the fourth quarter of fiscal year 2004 the Tank-automotive and Armaments Command identified 11 shelters that needed reconditioning and issued an order to the Army Rock Island Arsenal for the work.

Further, we found that 12 of the 68 orders (18 percent) were accepted by working capital fund activities in the last month of the fiscal year due to problems encountered with processing the orders. These problems included (1) delays in processing forms through different activities and multiple nonintegrated systems, (2) data input errors that were not corrected until September, and (3) difficulties encountered in processing documents and related funding from non-DOD customers to working capital fund activities. Two examples follow.
In July 2003, the Air National Guard Headquarters prepared documentation that directed the Pennsylvania Air National Guard to send its ground mobile navigation radar to the Tobyhanna Army Depot to repair damage sustained by the radar system from multiple lightning strikes and power surges and to overhaul the system. The order was not accepted by the depot until September 26, 2003, about 3 months later. The delay in acceptance of the order was due to (1) the normal time required to process forms through six different activities using nonintegrated systems, (2) paperwork processing delays due to missing information, (3) confusion on how to process the workload in a new Army system implemented in July 2003, and (4) errors made in entering data into the Army system.

Due to delays in correcting input errors on an order, the Warner Robins Air Logistics Center did not accept a $2.8 million order from the F-15 program office, financed with operation and maintenance funds, until September 17, 2004. The order was for the maintenance of an Air National Guard F-15 aircraft. When the F-15 program office generated the order on June 10, 2004, it entered the program control and serial numbers into the project order system incorrectly. On September 17, 2004, the Center established a new order with the corrected information.

Finally, we found that 10 of the 68 orders (15 percent) were accepted by working capital fund activities in the last month of the fiscal year when DOD assets were scheduled for maintenance. According to Air Force and Navy officials, planning for the repair of major assets such as aircraft, ships, and engines begins several years prior to the date on which repairs will actually be performed. The assets are scheduled for maintenance based on routine cycles, such as the number of years since the last depot maintenance was performed. The services include funding requirements for these repairs in their annual budget submissions. Generally, in the quarter the assets are scheduled for maintenance, the major commands distribute the repair funds to their customers and the customers, in turn, issue orders to fund the repair. Two examples follow.

On September 16, 2003, the Oklahoma City Air Logistics Center accepted an order from the Air National Guard totaling about $7.2 million financed with operation and maintenance funds set to expire on September 30, 2003. The order was for the scheduled maintenance of the 39th Air National Guard KC-135E aircraft in fiscal year 2003. During fiscal year 2001, the Air National Guard determined that 39 KC-135E aircraft required maintenance in fiscal year 2003 in accordance with their 5-year maintenance schedule. In fiscal year 2001, the Air National Guard began planning and budgeting for the maintenance of these aircraft. The 39th aircraft arrived at the air logistics center in mid-September 2003 as planned. The Air National Guard issued the order in September 2003 once it determined that the work on this aircraft would be performed at the air logistics center instead of contracting out the workload.

On September 29, 2003, the Naval Air Warfare Center-Aircraft Division accepted an order from the U.S. Atlantic Fleet in the amount of approximately $2.4 million financed with operation and maintenance funds set to expire the next day. The order required repairs and/or replacement of deteriorated and worn components to support flight deck operations on the USS Harry S. Truman. This overhaul work was scheduled for fiscal year 2003.
A fleet official stated that the fleet did not perform an inspection of the ship to determine specific repair requirements until late in the fiscal year. Our further review of the 68 fiscal year-end orders for 2003 and 2004 disclosed that 39 of these orders—over half—were not completed within the next fiscal year, which resulted in carryover spanning 2 or more years. As we reported in June 2005, two reasons generally caused work to carry over into a second fiscal year. First, the depots received orders late in the fiscal year and were unable to complete the effort by year end, as discussed in the previous section; and second, some depots were unable to obtain the materials/parts needed in a timely manner to complete the work. In addition to these reasons, we found that some working capital fund activities were unable to complete work within 1 year because of delays caused by backlogged or other higher priority work and broken or unsafe repair equipment. These factors have resulted in orders being carried over for more than 1 fiscal year and increased the carryover balances for subsequent years. As a result, these orders may not have been the most effective use of DOD resources at that time and may not have complied with all of the order acceptance provisions cited in the DOD Financial Management Regulation. The DOD Financial Management Regulation 7000.14-R, Volume 11A, Chapters 2 and 3, prescribes regulations governing the use of orders placed with working capital fund activities. When a working capital fund activity accepts an order, the customer’s funds financing the order are obligated. The DOD regulation identifies a number of requirements that must be met before a working capital fund activity accepts an order. For example, work to be performed under the order shall be expected to begin within a reasonable time after the order is accepted by the performing DOD activity. As a minimum requirement, it should be documented that when an order is accepted, the work is expected to (1) begin without delay (usually within 90 days) and (2) be completed within the normal production period for the specific work ordered. Further, the regulation states that no project order shall be issued if commencement of work is contingent upon the occurrence of a future event. Our review of 68 orders accepted by the working capital fund activities at year end determined that work on some of these orders did not begin within 90 days or was not completed within the normal production period for the work being performed. The following examples illustrate orders that were accepted by working capital fund activities at year end and (1) may not have been the most effective use of DOD resources at that time and (2) may not have complied with all of the provisions contained in this regulation. On September 25, 2003, the Crane Army Ammunition Activity accepted an order totaling $1,885,000 that was financed with operation and maintenance funds for X-ray work to determine the safety and usability of 200,000 rounds of 40-millimeter high-explosive ammunition. However, due to problems with the X-ray inspection machine, the activity had to suspend work on the ammunition until the inspection machine was qualified as safe to use. According to the program engineer, work was delayed because imaging panels in the inspection machine were burning up and had to be replaced. Compounding this problem was a delay in the approval process for the safe operation of the machine. As a result, very little work was completed on this order over 3 fiscal years.
Specifically, $1,885,000 carried over into fiscal year 2004 and $1,881,105 carried over into fiscal year 2005 and again into fiscal year 2006. On September 14, 2004, the Ogden Air Logistics Center accepted an order totaling $3.4 million that was financed with operation and maintenance funds to build an F-16 radar test station on behalf of the Air National Guard. According to a center official, even though the depot did not have the material to build the station, it accepted the order late in the fiscal year. Thus, the entire $3.4 million order carried over into fiscal year 2005. The center official noted that during fiscal year 2005, the activity group ordered the material and began work assembling the station, but as of the end of fiscal year 2005 not all of the material had been received from contractors. Therefore, $277,898 carried over into fiscal year 2006. On September 29, 2003, the Sierra Army Depot accepted an order totaling $11,680,175 that was financed with operation and maintenance funds for the receipt, inspection, storage, and re-containerizing of 203 containers of gas and oil pipeline equipment returned from Iraq and Afghanistan. Because this order was received so late in the fiscal year, the entire amount of the order—$11,680,175—was carried over into fiscal year 2004. According to the mission director, this order was delayed because (1) some containers were not returned from the war zones in time for the depot to refurbish them and (2) the depot received other, higher priority workloads, such as installing armored plating on wheeled vehicles. As a result, over half of the dollar amount of the order—$6,847,529—carried over into fiscal year 2005 and $2,643,093 carried over into fiscal year 2006. On September 16, 2003, the Space and Naval Warfare Systems Center in Charleston accepted an order for $232,200 that was financed with operation and maintenance funds. The U.S. Naval Forces Central Command identified and funded this emergent requirement in August 2003 in support of the Combat Terrorism Initiative during the Iraq war. More specifically, this order was for technical and installation services for a new communications link between Bahrain and Dubai. An activity official stated that only minimal engineering services were initiated prior to the end of fiscal year 2003, so almost the entire dollar amount of the order—$230,000—carried over into fiscal year 2004. This official also stated that the center encountered delays when the government of Dubai would not allow the leased line into the country from Bahrain. This resulted in $207,000 being carried over into fiscal year 2005, and $12,000 being carried over into fiscal year 2006. On September 11, 2003, the Oklahoma City Air Logistics Center accepted an order totaling about $1.8 million that was financed with operation and maintenance funds for the analytical condition inspection of an F110-129 engine. According to an Oklahoma City Air Logistics Center official, the center brought the engine in for repair in September 2003 to ensure that the funds were obligated by fiscal year end. Otherwise, the funds would have expired and been unavailable for new workload. However, the center did not begin work on the engine until March 2004 due to a backlog of engines waiting for repair. Since the engine was accepted for repair in the last month of the fiscal year, almost the entire $1.8 million was carried over into fiscal year 2004.
Further, because of production delays and a failed serviceability test, the center carried funds into fiscal year 2005 and again into fiscal year 2006—more than 2 years after the order was accepted. The military services have provided erroneous carryover information to Congress and DOD decision makers because the services have not consistently applied DOD’s revised policy on carryover. Reliable and consistent carryover information is essential for Congress and DOD decision makers to perform their oversight, including reviewing DOD’s budget to determine if an activity group has too much or not enough carryover. To provide greater assurance that the military services provide reliable and consistent carryover information, the military services must be held accountable for the accuracy of reported carryover information and ensure the timely identification of unneeded customer funds. While DOD’s guidance on calculating carryover was not adequate when it revised its carryover policy in 2002, DOD began improving the guidance in 2004. However, DOD has not updated the Financial Management Regulation so that it includes comprehensive carryover guidance to the military services, and the services have not always complied with the carryover guidance in the past. Until this is done, Congress and DOD decision makers will be forced to make key budget decisions, such as whether to enhance or reduce customer budgets, based on unreliable information. In addition, DOD working capital fund activities’ acceptance of year-end orders (1) increases the amount of carryover and (2) in some cases, contributes to DOD working capital fund activities’ actual carryover amounts exceeding their allowable amounts by tens of millions of dollars. Excessive amounts of year-end carryover tie up customer funds that could be put to better near-term use and are subject to reductions by DOD and the congressional defense committees during the budget review process. In order to improve the business operations of the Department of Defense Working Capital Fund, we are making the following eight recommendations to the Secretary of Defense. We recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to take the following actions: Issue written instructions in its DOD Financial Management Regulation 7000.14-R specifying the outlay rates to be used by DOD working capital fund activities for calculating the allowable amount of carryover and continue to issue carryover guidance to the military services in its annual guidance on preparing budget justification book material for Congress. Review the carryover information provided in the military services’ annual budget submissions to help ensure the services are calculating their allowable and actual carryover amounts in accordance with DOD policy. Reiterate the requirements in the DOD Financial Management Regulation 7000.14-R to help ensure that working capital fund activities are in compliance with the regulations governing acceptance of orders, particularly at fiscal year end. We recommend that the Secretary of Defense direct the Secretary of the Air Force to take the following actions: Use the current outlay rate tables that are included in the DOD Financial Summary Tables when calculating the allowable carryover amounts for the Air Force depot maintenance activity group, consistent with DOD policy. 
Use only current year orders for calculating the allowable carryover amounts for the Air Force depot maintenance activity group, as required by DOD carryover policy. Include all orders when calculating the amount of actual carryover for the Air Force depot maintenance activity group, except those orders that are specifically excluded in DOD Financial Management Regulation 7000.14-R or are excluded by the Under Secretary of Defense (Comptroller) in writing. Include the allowable and actual dollar amounts of carryover for the Air Force depot maintenance activity group in the Air Force’s annual budget to Congress, as required by DOD Financial Management Regulation 7000.14-R. We recommend that the Secretary of Defense direct the Secretary of the Navy to include the allowable and actual amounts of carryover for each of the five Navy research and development subactivity groups in the Navy’s annual budget to Congress. DOD provided written comments on a draft of this report. DOD concurred with all eight of our recommendations. Regarding its plans for implementing the eight recommendations, DOD stated that it is in the process of updating its financial management regulations and issuing budget guidance for the fiscal year 2008/2009 President’s Budget, which will address calculating the allowable amount of carryover. Further, DOD stated that it performed a more rigorous review of the services’ carryover information in the fiscal year 2007 President’s Budget submission and that it will continue reviewing the services’ budgets to ensure that the services are calculating allowable and actual amounts of carryover in accordance with DOD policy. DOD also stated that it will direct the Navy to report carryover information for each of the five Navy research and development subactivity groups in the Navy’s annual budget to Congress. Finally, in preparation for the closeout of fiscal year 2006, DOD will reiterate the guidance in its Financial Management Regulation governing working capital fund acceptance of orders, which obligates customers’ funds, particularly at year end. DOD also commented that the inaccuracies we identified in reported carryover did not materially distort the evaluation of depot operations or projected workload levels. While we do not know how DOD defines materiality, we believe that the reporting inaccuracies affect the evaluation of depot operations from a workload standpoint because the inaccuracies understated the carryover balances for some activity groups by hundreds of millions of dollars. For example, as stated in our report, the Air Force reported that its fiscal year 2002 depot maintenance carryover was $87 million under the ceiling, but our calculation shows that the carryover exceeded the ceiling by $216 million, a difference of $303 million. In another case, the Army reported that its fiscal year 2003 depot maintenance carryover was $127 million over the ceiling, but our calculations show that it was over the ceiling by $322 million, a difference of $195 million. As a result of these understatements, the reported amount of work carried over from one year to the next was not reliable and could have affected DOD’s and the congressional defense committees’ review and evaluation of carryover during their annual budget review.
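The size of these reporting differences can be verified with a short calculation. The sketch below is illustrative only; it simply restates the figures cited above (positive values are amounts over the carryover ceiling, negative values are amounts under it) and is not a reproduction of DOD's or our carryover models.

```python
# Illustrative check of the reporting differences cited above (dollars in millions).
# Positive values are amounts over the carryover ceiling; negative values are under it.
cases = [
    # (activity group, fiscal year, amount reported by service, amount per GAO calculation)
    ("Air Force depot maintenance", 2002, -87, 216),
    ("Army depot maintenance", 2003, 127, 322),
]

for group, year, reported, recalculated in cases:
    difference = recalculated - reported
    print(f"{group}, FY{year}: reported {reported:+} vs. recalculated "
          f"{recalculated:+} -> difference of {difference} million dollars")

# Expected output: differences of 303 and 195 million dollars, matching the report.
```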
We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services; the Subcommittee on Defense, Senate Committee on Appropriations; the House Committee on Armed Services; the Subcommittee on Readiness, House Committee on Armed Services; and the Ranking Minority Member, Subcommittee on Defense, House Committee on Appropriations. We are also sending copies to the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and other interested parties. Copies will be made available to others upon request. Should you or your staff have any questions concerning this report, please contact McCoy Williams, Director, at (202) 512-9095 or williamsm1@gao.gov, or William M. Solis, Director, at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine if the military services’ carryover calculations were in compliance with the Department of Defense’s (DOD) new carryover policy, we obtained and analyzed the services’ calculations for the (1) reported year-end actual carryover balances for fiscal years 2002 through 2005 and (2) allowable amount of carryover for fiscal years 2002 through 2005. We recomputed the services’ calculations following DOD’s regulation on carryover and compared our results with the services’ carryover calculations. We met with officials from the Army, Navy, and Air Force to discuss (1) the methodology the services used to calculate carryover and (2) any differences between our calculations and theirs. We also met with officials from the Office of the Under Secretary of Defense (Comptroller) to discuss DOD’s new carryover policy, including the proper calculation for actual carryover and the allowable amount of carryover. To assess the reliability of the carryover data, we (1) reviewed and analyzed the factors used in calculating carryover and (2) interviewed officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. To determine if customers were submitting orders to working capital fund activities late in the fiscal year and, if so, the effect that this practice has had on carryover, we obtained data on orders accepted by working capital fund activities in September 2003 and September 2004. Initially, we obtained information on the 20 largest orders, by dollar value, that selected working capital fund activities accepted from customers in September 2003 and September 2004. We analyzed the information on the orders, which included the appropriation financing the order, the date the order was accepted by the working capital fund activity, and a description of the work to be performed. We then selected and analyzed 68 orders with large dollar amounts that working capital fund activities accepted in September. We also interviewed (1) working capital fund officials to determine the current status of the work being performed on the orders and (2) customers to determine the reasons why they sent the orders to the working capital fund activities late in the fiscal year. In performing our work on these orders, we did not review them to determine whether there was a bona fide need for the work being ordered by customers.
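A minimal sketch of the kind of screening described above follows. The field names, records, and selection threshold are hypothetical and are shown only to illustrate how the largest September orders could be identified from order-level data; they do not reproduce our actual workpapers.

```python
from datetime import date

# Hypothetical order records: (accepting activity, acceptance date, dollar amount).
orders = [
    ("Oklahoma City Air Logistics Center", date(2003, 9, 16), 7_200_000),
    ("Sierra Army Depot",                  date(2003, 9, 29), 11_680_175),
    ("Tobyhanna Army Depot",               date(2003, 7, 15), 450_000),
]

# Keep orders accepted in September (the last month of the fiscal year)
# and rank them by dollar value, largest first.
september_orders = [o for o in orders if o[1].month == 9]
top_orders = sorted(september_orders, key=lambda o: o[2], reverse=True)[:20]

for activity, accepted, amount in top_orders:
    print(f"{accepted}  ${amount:>12,.0f}  {activity}")
```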
We performed our work at the headquarters offices of the Under Secretary of Defense (Comptroller), the Assistant Secretary of the Army (Financial Management and Comptroller), the Assistant Secretary of the Navy (Financial Management and Comptroller), and the Assistant Secretary of the Air Force (Financial Management and Comptroller), Washington, D.C. In performing our work on reviewing the services’ carryover calculations, we obtained carryover information on the following Defense Working Capital Fund activity groups: (1) Army depot maintenance, (2) Army ordnance, (3) Army industrial operations, (4) Air Force depot maintenance, (5) Naval aviation depots, (6) Naval shipyards, and (7) Naval research and development. The Naval research and development activity group consists of the following five subgroups: Naval Air Warfare Center, Naval Surface Warfare Center, Naval Undersea Warfare Center, Naval Research Laboratory, and the Space and Naval Warfare Systems Command Center. In performing our work on reviewing individual orders, we obtained information from the following working capital fund activities and their customers that submitted the orders: Blue Grass Army Depot, Richmond, Kentucky; Crane Army Ammunition Activity, Crane, Indiana; Rock Island Arsenal, Rock Island, Illinois; Sierra Army Depot, Herlong, California; Red River Army Depot, Texarkana, Texas; and Tobyhanna Army Depot, Tobyhanna, Pennsylvania. The carryover information in this report is budget data obtained from official Army, Navy, and Air Force budget documents. We conducted our work from July 2005 through March 2006 in accordance with U.S. generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Defense or his designee. The Under Secretary of Defense (Comptroller) provided written comments, and these comments are presented in the Agency Comments and Our Evaluation section of this report and are reprinted in appendix II. Staff who made key contributions to this report were Richard Cambosos, Francine DelVecchio, Keith McDaniel, Clara Mejstrik, Greg Pugnetti, Chris Rice, and Hal Santarelli.

According to the Department of Defense’s (DOD) fiscal year 2006 budget estimates, working capital fund activity groups (depot maintenance, ordnance, and research and development) will have about $6.3 billion of funded work that will be carried over from fiscal year 2006 into fiscal year 2007. The congressional defense committees recognize that these activity groups need some carryover to ensure smooth work flow from one fiscal year to the next. However, the committees have previously raised concern that the amount of carryover may be more than is needed. GAO was asked to determine (1) if the military services’ carryover calculations were in compliance with DOD’s new carryover policy and (2) if customers were submitting orders to working capital fund activities late in the fiscal year and, if so, the effect this practice has had on carryover. The military services have not consistently implemented DOD’s revised policy in calculating carryover. Instead, the military services used different methodologies for calculating the reported actual amount of carryover and the allowable amount of carryover since DOD changed its carryover policy in December 2002. The military services did not consistently calculate the allowable amount of carryover that was reported in their fiscal year 2004, 2005, and 2006 budgets because they used different outlay rates for the same appropriation.
The Air Force did not follow DOD's regulation on calculating carryover for its depot maintenance activity group, which affected the amount of allowable carryover and actual carryover by tens of millions of dollars and whether the actual carryover exceeded the allowable amount as reported in the fiscal year 2004, 2005, and 2006 budgets. The Army depot maintenance and ordnance activity groups' actual carryover was understated in fiscal years 2002 and 2003 because carryover associated with prior year orders was not included. While the Navy generally followed DOD's policy for calculating carryover, the Navy consolidated the reporting of carryover information for research and development activities. The Navy budgets no longer provide information to show if any of the five research and development subactivity groups individually exceeded the carryover ceiling as the Navy budgets did prior to the change in the carryover policy. As a result, carryover data provided to decision makers who review and use the data for budgeting are erroneous and not comparable across the three military services. For example, the Air Force reported to Congress that the actual fiscal year 2002 carryover for depot maintenance was $87 million less than the ceiling. If the Air Force had followed DOD's policy, GAO's calculations show its carryover would have exceeded the ceiling by $216 million. Carryover is greatly affected by orders accepted by working capital fund activities late in the fiscal year that generally cannot be completed by fiscal year end, and in some cases cannot even be started prior to the end of the fiscal year. GAO's analysis of 68 fiscal year-end orders identified four key factors contributing to orders generally being issued by customers late in the fiscal year and being accepted by the working capital fund activities during the last month of the fiscal year. These reasons included (1) funds provided to the customer late in the fiscal year to finance existing requirements, (2) new work requirements identified at year end, (3) problems encountered in processing orders, and (4) work scheduled at year end. GAO's analysis showed that over half of the orders reviewed were not completed by the end of the next fiscal year, generating a second year of carryover on the same order. As a result, some orders may not have been the most effective use of DOD resources at that time and may not have complied with all of the order acceptance provisions cited in the DOD Financial Management Regulation.
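The outlay-rate method at the center of these findings can be illustrated with a simplified calculation. The sketch below assumes that the allowable amount of carryover is derived by applying the first-year outlay rate of the financing appropriation to new orders received during the year; the rates and order amounts shown are hypothetical, and the actual method prescribed in DOD Financial Management Regulation 7000.14-R contains additional rules and exclusions not shown here.

```python
# Simplified illustration of an outlay-rate-based carryover ceiling (dollars in millions).
# Assumption: the portion of a year's new orders not expected to be outlaid in the
# first year may be carried over; the rates and amounts below are hypothetical.
new_orders_by_appropriation = {
    "Operation and maintenance": {"orders": 1_200, "first_year_outlay_rate": 0.70},
    "Procurement":               {"orders": 400,   "first_year_outlay_rate": 0.35},
}

allowable_carryover = sum(
    data["orders"] * (1.0 - data["first_year_outlay_rate"])
    for data in new_orders_by_appropriation.values()
)

actual_carryover = 700  # hypothetical year-end balance of unfinished orders
print(f"Allowable carryover: ${allowable_carryover:,.0f} million")
print(f"Actual carryover:    ${actual_carryover:,.0f} million")
print("Over ceiling" if actual_carryover > allowable_carryover else "Within ceiling")
```

Because the ceiling depends directly on which outlay rates are applied, using different rates for the same appropriation—as the services did—produces allowable amounts that are not comparable.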
OMB was established under presidential reorganization authority in 1970, in large part to increase the attention given to management issues in the federal government. OMB is the lead agency for overseeing a framework of recently enacted financial, information resources, and performance planning and measurement reforms designed to improve the effectiveness and responsiveness of federal agencies. This framework contains as its core elements financial management improvement legislation, including the Chief Financial Officers (CFO) Act of 1990, the Government Management Reform Act of 1994, and the Federal Financial Management Improvement Act of 1996; information technology reforms, including the Paperwork Reduction Act (PRA) of 1995 and the Clinger-Cohen Act of 1996; and the Government Performance and Results Act of 1993 (the Results Act). The CFO Act also established the position of Deputy Director for Management (DDM) in OMB. In addition to serving as the government’s key official for financial management, the DDM is to coordinate and supervise a wide range of general management functions of OMB. These functions include those relating to managerial systems, such as the systematic measurement of performance; procurement policy; regulatory affairs; and other management functions, such as organizational studies, long-range planning, program evaluation, and productivity improvement. OMB is responsible for providing guidance and oversight for various other laws and executive orders as well. For example, the Federal Acquisition Streamlining Act (FASA) requires that executive agency heads set cost, performance, and schedule goals for major acquisition programs and that OMB report to Congress on agencies’ progress in meeting these goals. Executive Order 12866 directs OMB to coordinate the review of agencies’ rules and regulations to ensure that they impose the least burden, are consistent between agencies, focus on results over process, and are based on sound cost/benefit analysis. OMB also has been responsible since 1967, through its Circular A-76, for carrying out executive branch policy to rely on competition between the federal workforce and the private sector for providing commercial goods and services. OMB’s perennial challenge is to carry out its central management leadership responsibilities in a way that leverages opportunities of the budget process, while at the same time ensuring that management concerns receive appropriate attention in an environment driven by budget and policy decisions. Concern that OMB and its predecessor agency, the Bureau of the Budget, lacked the support and institutional capacity necessary to sustain management improvement efforts throughout the executive branch has prompted numerous calls for changes in the past. A recurring concern has been that management issues can become subordinated to—or overwhelmed by—the budget process. Some observers have advocated integrating the two functions, while others have proposed the creation of dedicated offices or a separate agency to provide governmentwide management leadership. Prior OMB reorganizations, reflecting these different points of view, have alternated between seeking to more directly integrate management into the budget review process and creating separate management offices. Previous congressional and OMB attempts to elevate the status of management by creating separate management units within OMB sought to ensure that an adequate level of effort was focused on management issues.
Underscoring its concern that management issues receive appropriate attention, Congress established the DDM position to provide top-level leadership to improve the management of the federal government. In 1994, OMB reorganized to integrate its budget analysis, management review, and policy development roles, in an initiative called “OMB 2000.” This reorganization was the most recent of a series of attempts to bolster OMB’s management capacity and influence. Under this structure, OMB’s Resource Management Offices (RMO) are responsible for examining agency budget, management, and policy issues. Linking management reforms to the budget has, at a minimum, provided the opportunity to include management issues as part of the president’s yearly budget reviews—a regularly established framework for making decisions. The RMOs are also expected to coordinate their activities with the statutory offices. In fiscal year 1997, OMB obligated $56 million and employed over 500 staff. During the past 3 years, OMB has focused increased attention on management issues, but there is much more that needs to be done. Today, we will highlight some of those issues that have been both of particular concern to this Committee and the subject of our recent work. OMB’s DDM and the Office of Federal Financial Management (OFFM), in concert with the CFO Council, have led governmentwide efforts to focus greater attention on financial management issues. OMB has played a pivotal role in fostering ongoing financial management reforms ranging from improved financial systems and reporting to new accounting standards. We are seeing positive results from OMB’s efforts. For example, eight agencies obtained unqualified opinions on their fiscal year 1997 audited financial statements, and OMB set a performance goal of assisting 21 of the 24 CFO Act agencies in obtaining unqualified and timely audit opinions on their annual financial statements for fiscal year 1999. In the 1997 Federal Financial Management Status Report and Five-Year Plan, OMB and the CFO Council discussed accomplishments and future plans in eight priority areas, such as improving financial management systems and implementing the Results Act. OMB also worked with the Department of the Treasury and GAO as part of the Federal Accounting Standards Advisory Board to create a comprehensive set of accounting and cost accounting standards that establish a framework for financial reporting and accountability. In addition, as we reported on March 31, 1998, the federal government prepared consolidated financial statements that have been subjected to an independent audit for the first time in the nation’s history. However, we were unable to form an opinion on the reliability of those statements because of serious deficiencies, which also raise questions about the government’s ability to adequately safeguard assets, ensure proper recording of transactions, and ensure compliance with laws and regulations. With a concerted effort, the federal government as a whole can continue to make progress toward generating reliable financial information on a regular basis. Annual financial statement audits are essential to ensuring the effectiveness of the improvements now under way. OMB’s Office of Federal Procurement Policy (OFPP) has worked to implement FASA and the Clinger-Cohen Act. OFPP has also been working to streamline the procurement process, promote efficiency, and encourage a more results-oriented approach to planning and monitoring contracts. OFPP is spearheading a multi-agency effort to revise parts of the Federal Acquisition Regulation.
For example, a major revision to Part 15 completed last year will contribute greatly to a more flexible, simplified, and efficient process for selecting contractors in competitively negotiated acquisitions. OFPP also developed best practices guides to help agencies draft statements of work, solicitations, and quality assurance plans, as well as to aid in awarding and administering performance-based service contracts. OFPP issued a best practices guide for multiple award task and delivery order contracting to encourage agencies to take advantage of new authorities under FASA. In addition, OMB has encouraged agencies to buy commercial products, conduct electronic commerce, and consolidate their ordering to take advantage of the buying power of the federal government. OMB’s efforts to improve capital decision-making are a third example of where OMB’s leadership efforts are yielding some results. For example, OMB has required agencies to submit 5-year capital spending plans and justifications—thus encouraging the use of flexible funding mechanisms—and also held the first OMB Director’s review on this issue. OMB added a new section to its fiscal year 1998 budget preparation instructions that outlined several broad principles for planning and monitoring acquisition and required agencies to develop baseline cost, schedule, and performance measurement goals. OMB has also implemented other policy and guidance changes to support new management decision-making requirements, and the Chief Information Officers (CIO) Council has adopted the establishment of sound capital planning and investment management practices as one of its strategic goals. The development of the “Raines’ Rules”—requiring agencies to satisfy a set of investment management criteria before funding major systems investments—can potentially serve to further underscore the link between information technology management and spending decisions. These investment management practices are also required under the PRA and the Clinger-Cohen Act. The extent to which the Raines’ Rules make a difference will depend on how well and how consistently they are applied. To address widespread weaknesses in federal information security, the CIO Council, under OMB’s leadership, has taken some significant actions, which include designating information security as one of six priority areas and establishing a Security Committee. The Committee, in turn, has developed a preliminary plan for addressing various aspects of the problem and taken steps to increase security awareness and improve federal incident-response capabilities. However, much more needs to be done to monitor agency performance in this area and to ensure that the various management, policy, technical, and legal aspects of information security are effectively addressed. Continuing reports of information security problems are disturbing because federal agencies rely on automated systems and related security controls to support virtually all of their critical operations and assets and to ensure the confidentiality of enormous amounts of sensitive data. Our recent audit of the government’s fiscal year 1997 financial statements identified serious information security weaknesses at all 24 CFO agencies. Moreover, we are finding that most agencies have not addressed enhancing information security in their fiscal year 1999 performance plans. On the Year 2000 computing challenge, OMB has called on agencies to develop contingency plans to ensure the continuity of their core business processes and supporting systems.
More recently, OMB provided additional guidance stating that these contingency plans can be carried out in accordance with GAO’s contingency planning guide. The establishment of the President’s Council on Year 2000 Conversion provides an opportunity for the executive branch to take further key implementation steps to avert disruptions to critical services, as we outlined in our recent report. To date, however, progress has been slow, and agencies’ schedules often leave no room for delay. Many major departments have already missed earlier deadlines. At the current pace, it is clear that not all mission critical systems will be fixed in time, and additional attention therefore needs to be given to those systems that serve the highest priorities. We also have found that improvements are needed in the process used to review and clear regulations. We have testified on the inadequacies of OMB’s efforts to meet congressional paperwork reduction goals. Also, OMB’s Office of Information and Regulatory Affairs (OIRA) does not attempt to set priorities for agencies’ regulations on the basis of risk (e.g., the number of lives saved or injuries avoided). Concerns have been raised by experts in regulatory issues that federal regulations are not sufficiently focused on the factors that pose the greatest risk and that, as a result, large amounts of money may be spent to accomplish only a slight reduction in risk. Using these same resources in other areas that pose higher risks could yield significantly greater payoffs. In addition, OMB has made only limited efforts to monitor or enforce compliance with its A-76 guidance or to evaluate the success of this process. Finally, OMB’s oversight role across the government can provide the basis for analyzing crosscutting program design, implementation, and organizational issues. We have pointed to the need to integrate the consideration of the various governmental tools used to achieve federal goals, such as loans, grants, tax expenditures, and regulations. Specifically, we recommended that OMB review tax expenditures with related spending programs during their budget reviews. In addition, our work has provided numerous examples of mission fragmentation and program overlap within federal missions, and we have suggested that OMB take the lead in ensuring that agency Results Act plans address fragmentation concerns. This effort may be hampered if efforts to resolve problems of program overlap and fragmentation involve organizational changes, because OMB lacks a centralized unit charged with raising and assessing government-organization issues. OMB has not had such a focal point since 1982 when it eliminated its Organization and Special Projects Division. Mr. Chairman, the record of OMB’s stewardship of management initiatives that we have highlighted today suggests that creating and sustaining attention to management improvement is a key to addressing the federal government’s longstanding problems. In the past, management issues often remained subordinated to budget concerns and timeframes, and the leverage the budget could offer to advance management efforts was not directly used to address management issues. The experience to date suggests that certain factors are associated with the successful implementation of management initiatives, regardless of the specific organizational arrangement. First, the commitment of top leadership is critical: performance planning and performance measurement issues gained considerable attention in the budget formulation process initially because of the clear commitment of OMB’s leadership.
However, top leadership’s focus can change over time, which can undermine the follow-through needed to move an initiative from policy development to successful implementation. Thus, although top leadership’s interest is an important impetus for the initiation of management policies, it alone is not sufficient to sustain these initiatives over time. Second, a strong linkage with the budget formulation process can be a key factor in gaining serious attention for management initiatives throughout government. Regardless of the location of the leadership, management initiatives need to be reflected in and supported by the budget and, in fact, no single organizational arrangement by itself guarantees this will happen. Many management policies require budgetary resources for their effective implementation, whether it be financial management reform or information systems investment. Furthermore, initiatives such as the Results Act seek to improve decision-making by explicitly calling for performance plans to be integrated with budget requests. We have found that previous management reforms, such as the Planning-Programming-Budgeting System and Management by Objectives, suffered when they were not integrated with routine budget presentations and account structures. Third, effective collaboration with the agencies—through such approaches as task forces and interagency councils—has emerged as an important central leadership strategy in both developing policies that are sensitive to implementation concerns and gaining consensus and consistent follow-through within the executive branch. In effect, agency collaboration serves to institutionalize many management policies initiated by either Congress or OMB. In our 1989 report on OMB, we found that OMB’s work with interagency councils was successful in fostering communication across the executive branch, building commitment to reform efforts, tapping talents that exist within agencies, keeping management issues in the forefront, and initiating important improvement projects. One example of this collaboration is the continuing success of CFOs and the CFO Council in leading agencies in addressing a wide range of financial and related management issues, such as their work, in concert with OMB, on a strategic plan to upgrade and modernize federal financial management systems. Finally, support from the Congress has proven to be critical in sustaining interest in management initiatives over time. Congress has, in effect, served as the institutional champion for many of these initiatives, providing a consistent focus for oversight and reinforcement of important policies. For example, Congress’—and in particular this Subcommittee’s—attention to the Year 2000 problem, information management, and financial management has served to elevate these issues on the administration’s management agenda. Separate from the policy decisions concerning how best to organize and focus attention on governmentwide federal management issues, there are some intermediate steps that OMB could take to clarify its responsibilities and improve federal management. For example, OMB could more clearly describe the management results it is trying to achieve, and how it can be held accountable for these results, in its strategic and annual performance plans. Many of OMB’s strategic and annual goals were not as results-oriented as they could be.
Continued improvement in OMB’s plans would provide congressional decisionmakers with better information to use in determining the extent to which OMB is addressing its statutory management and budgetary responsibilities, as well as in assessing OMB’s contributions toward achieving desired results. In our 1995 review of OMB 2000, we recommended that OMB review the impact of its reorganization as part of its planned broader assessment of its role in formulating and implementing management policies for the government. OMB has not formally assessed the effectiveness, for example, of the different approaches taken by its statutory offices to promote the integration of management and budget issues. We believe it is important that OMB understand how the reorganization has affected its capacity to provide sustained management leadership. Mr. Chairman, this concludes our statement. We would be happy to answer any questions that you or other Members of the Subcommittee have at this time.

GAO discussed its observations on the Office of Management and Budget's (OMB) efforts to carry out its responsibilities to set policy and oversee the management of the executive branch.
GAO noted that: (1) OMB is the lead agency for overseeing a framework of recently enacted financial, information resources, and performance planning and measurement reforms designed to improve the effectiveness and responsiveness of federal agencies; (2) OMB's perennial challenge is to carry out its central management leadership responsibilities in a way that leverages opportunities of the budget process, while at the same time ensuring that management concerns receive appropriate attention in an environment driven by budget and policy decisions; (3) OMB's Deputy Director for Management and the Office of Federal Financial Management, in concert with the Chief Financial Officers Council, have led governmentwide efforts to focus greater attention on financial management issues; (4) OMB has played a pivotal role in fostering ongoing financial management reform, ranging from improved financial systems and reporting to new accounting standards; (5) despite this progress, GAO was not able to form an opinion on the reliability of the federal government's consolidated financial statements because of serious deficiencies; (6) OMB's Office of Federal Procurement Policy (OFPP) has worked to implement the Federal Acquisition Streamlining Act and the Clinger-Cohen Act; (7) OFPP has also been working to streamline the procurement process, promote efficiency, and encourage a more results-oriented approach to planning and monitoring contracts; (8) OMB's efforts to improve capital decisionmaking are a third example of where OMB's leadership efforts are yielding some results; (9) to address widespread weaknesses in federal information security, the Chief Information Officers (CIO) Council, under OMB's leadership, has taken some significant actions; (10) agencies' computer systems' year 2000 compliance remains a concern, and serious vulnerabilities remain, although OMB, the President's Council on Year 2000 Conversion, and the CIO Council all have focused attention on increasing compliance; (11) GAO also found that improvements are needed in the process used to review and clear regulations; (12) OMB's Circular A-76 sets forth federal policy for determining whether commercial activities associated with conducting the government's business will be performed by federal employees or contractors; (13) OMB's oversight role across the government can provide the basis for analyzing crosscutting program design, implementation, and organizational issues; and (14) the experiences to date suggest that certain factors are associated with the successful implementation of management initiatives.
Since 1955, the executive branch has encouraged federal agencies to obtain commercially available goods and services from the private sector through competitions when the agencies determined that such action was cost-effective. OMB formalized the policy in OMB Circular A-76, issued in 1966. As part of this process, the government identifies the work to be performed—described in the performance work statement—and prepares an in-house cost estimate, based on its most efficient organization (MEO), and compares it with the best offer from the private sector. Between 1978 and 1994, competition winners were split about evenly between the private and public sectors. Appendix II contains a more detailed description of the A-76 process. Because of lengthy time frames previously required to perform competitive sourcing studies, a provision was included in the DOD Appropriations Act for Fiscal Year 1991 (P.L. 101-511) and in subsequent DOD appropriations acts requiring that single-function competitions (under Circular A-76) be completed within 24 months and multifunction competitions within 48 months. Because of administrative and legislative constraints from the late 1980s through 1995, there was a lull—and for some time even a moratorium—on competitions. In 1995, congressional and administration initiatives placed more emphasis on competitive sourcing as a means of achieving greater economies and efficiencies in operations. The Deputy Secretary of Defense in 1995 directed the services to make outsourcing of support activities a priority. Subsequently, DOD placed emphasis on competitions involving both the public and private sectors, known as competitive sourcing. DOD components identify functions eligible for competitive sourcing studies from a list of commercial activities. Under OMB and DOD guidance, the components must maintain and periodically update their lists of commercial functions, but until fiscal year 1997, they were only required to consider commercial positions that were not inherently governmental in nature. In 1997, DOD directed its components to include inherently governmental functions on their lists. Because of concern over inconsistencies within and among the services in identifying positions eligible for competition, the House National Security Committee in report number 105-132 on H.R. 1119, the Defense Authorization Act for Fiscal Year 1998, directed DOD to develop a uniform set of criteria. DOD’s components are currently reviewing which functions performed by DOD personnel are (1) inherently governmental, (2) exempted from competition for national defense reasons, (3) exempted from competition for other reasons, or (4) subject to competitive sourcing competitions. DOD expected to report the results of this reassessment in January 1999. As we and others have reported, A-76 competitions can be cost-effective. Data indicate that savings can occur, regardless of whether the competitions are won by the public or the private sector. Savings may increase if, in accordance with applicable legal standards, multiple functions can be grouped together under a single contract rather than under multiple contracts. Because military positions are, on average, more costly than their civilian equivalents, greater savings may occur if DOD converts military support positions to government civilian or contractor positions. While competitions can produce significant savings, caution is needed when estimating the overall magnitude of potential savings.
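The cost comparison at the heart of the A-76 process described above can be sketched in a few lines. The figures below are hypothetical, and the decision rule shown—requiring the contractor's offer to beat the in-house estimate by a minimum cost differential, assumed here to be 10 percent of in-house personnel costs—is our simplifying assumption for illustration rather than a complete statement of the circular's rules.

```python
# Hypothetical A-76 cost comparison (annual dollars).
in_house_total_cost = 9_500_000      # government's most efficient organization (MEO) estimate
in_house_personnel_cost = 7_000_000  # personnel portion of the MEO estimate
best_private_offer = 8_900_000       # best offer received from the private sector

# Simplifying assumption: convert to contract only if the contractor's offer
# beats the in-house estimate by more than a minimum differential,
# taken here to be 10 percent of in-house personnel costs.
minimum_differential = 0.10 * in_house_personnel_cost
savings_from_contracting = in_house_total_cost - best_private_offer

if savings_from_contracting > minimum_differential:
    print(f"Convert to contract; estimated annual savings ${savings_from_contracting:,.0f}")
else:
    print(f"Retain work in-house; savings of ${savings_from_contracting:,.0f} "
          f"do not exceed the ${minimum_differential:,.0f} differential")
```

In this hypothetical case the private offer is lower than the in-house estimate, but not by enough to exceed the assumed differential, so the work would stay in-house—one reason savings can accrue regardless of which sector wins.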
Estimates of savings in the 20- to 30-percent range or higher have been cited in some assessments of previous competitive sourcing studies but often have been based on initial savings estimates from previous competitions, rather than on actual savings over time. DOD has not systematically tracked or updated the savings estimates from competitions. Further, the savings from current competitions may not necessarily match those achieved in competitions completed before defense downsizing—because personnel cuts carried out during downsizing helped streamline organizations and eliminate unneeded positions. DOD has established far greater and more aggressive goals for competitions than in the past. DOD also estimates that the competitions will bring significant cost savings. OMB has recognized DOD as the pacesetter among government agencies in the use of competitions to gain economies and efficiencies in operations and to reduce support costs. However, achieving the goals of an initiative of this magnitude is a significant management challenge. According to DOD, between 1979 and 1996, it studied over 90,000 positions using the A-76 process. In early 1998, DOD components outlined plans to compete over 225,000 positions between fiscal years 1997 and 2003. The number of positions planned for competition during this period is more than twice the number of positions studied in the previous 17 years. Table 1 summarizes the plans by individual components as of early 1998. According to our analysis, DOD’s data indicate that about 79 percent of the positions identified in table 1 are civilian positions, while 21 percent are military positions. Over half of all the military positions to be competed are in the Air Force. The greatest number of positions would be competed during fiscal years 1999 and 2000. As indicated in figure 1, if DOD components launch the competitions projected for fiscal years 1999 and 2000, and if each competition lasts 24 months, DOD could be competing over 100,000 positions each year during 1999 and 2000. Between March 1997 and early 1998, DOD increased the number of positions it plans to compete over the next several years by about 30 percent—from 171,000 to over 225,000. In October 1998, DOD again increased the number of positions expected to be competed to over 237,000 and stretched out the time frame to 2005. However, that figure was recently adjusted to 229,000 by 2005. We were unable to obtain details of how the new numbers would be allocated among the services. Prior to establishing the competition goal of 229,000 positions, DOD aimed for cumulative savings of about $6 billion between fiscal years 1997 and 2003. That goal still existed at the time we completed our review, and DOD has already begun to reduce the future years’ operating budgets of components in anticipation of these savings and to transfer the expected savings to their research and development and procurement accounts to increase funding for weapon system modernization. Office of the Secretary of Defense (OSD) guidance projects that components will complete competitions within 2 years and begin transferring funds to higher-priority budget objectives. According to OSD officials, if the savings do not occur as quickly as planned, the components will have to absorb the shortfalls in their operations and maintenance accounts or shift money back from planned modernization.
The Army’s competitive sourcing strategic plan, for example, states that if major commands do not achieve programmed savings, they will have to make up the savings through other efficiencies. The savings estimates could change. A DOD official told us that, after receiving more detailed information from its components, DOD reduced its projected annual recurring savings as of fiscal year 2004 from $2.5 billion to $2.3 billion. This savings figure was still under review when we completed our fieldwork, and OSD had not yet decided whether to revise its savings goals or its timetables for achieving them. The projections of competition savings that DOD provided to Congress in fiscal year 1998 appear overstated. The projections did not adequately consider investment costs related to performing A-76 cost studies. In addition, the competitions will likely take longer to complete than estimated. Both of these factors will affect how quickly DOD components will begin to realize net savings from the competitions. DOD components have expressed concerns about their ability to meet the savings goals. DOD is working to improve these estimates for its fiscal year 2000 budget request. Much like base realignment and closure actions, competitions have up-front investment costs that need to be considered when estimating net savings. In competitions, these investments involve study costs, personnel separation costs, and, in the case of the Army, the costs of substituting civilians for military personnel. Once these investment costs have been offset by program savings, net savings can begin to accrue on an annual recurring basis. However, available information indicates that OSD and its components have not fully and consistently accounted for and deducted these investment costs from their savings projections. This means that DOD will not accrue the estimated initial savings as quickly as projected. However, recurring long-term net savings are potentially significant. Both OSD and its components made initial assumptions about competition study costs that are understated. While the components are registering concern about these costs, they have not yet developed comprehensive assessments of them. DOD reported to Congress in April 1998 that the services expected savings of about $5.8 billion from their competitions and investment costs of $277 million to conduct the competitions. Table 2 indicates the savings projected by each service and the identified costs to implement the program. DOD’s savings projections provided an inconsistent and incomplete picture of A-76 competitive sourcing costs and savings for the period ending in fiscal year 2003. Only the Navy deducted identified investment costs from its net savings estimate. Additionally, available information indicates that DOD and its components understated or later changed their initial estimates of investment costs. Although the magnitude of the competition program greatly eclipses previous efforts, DOD components have not yet fully identified the resources needed to carry out the competitions. Many components are now projecting that the competitions will likely take much longer and hence require a greater investment of resources than they originally expected and reported to Congress. Many components have noted that this situation is occurring at a time when they have significantly fewer in-house personnel trained to deal with A-76 programs than they had prior to downsizing.
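The effect of netting investment costs against projected savings can be illustrated with the aggregate figures cited above. In the sketch below, only the $5.8 billion gross savings and $277 million investment-cost totals come from DOD's April 1998 report; the year-by-year phasing is hypothetical and is shown only to make the point that up-front costs slow the accrual of net savings.

```python
# Gross savings and identified investment costs from DOD's April 1998 report
# (dollars in millions); the year-by-year phasing below is hypothetical.
gross_savings = 5_800
identified_investment_costs = 277
print(f"Net savings through FY 2003: ${gross_savings - identified_investment_costs:,} million")

# Hypothetical phasing: investment costs are concentrated up front, so net
# savings accrue more slowly than the gross projection suggests.
yearly_gross_savings = [100, 400, 900, 1_200, 1_500, 1_700]   # FY1998-FY2003, sums to 5,800
yearly_investment    = [120, 100, 40, 10, 5, 2]               # sums to 277
cumulative = 0
for year, (saved, invested) in enumerate(zip(yearly_gross_savings, yearly_investment), start=1998):
    cumulative += saved - invested
    print(f"FY{year}: cumulative net savings ${cumulative:,} million")
```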
Conducting the competitions may require the use of contractors in addition to existing in-house staff from contracting, personnel, legal, manpower, accounting, internal audit, and the function being studied. To the extent that existing in-house resources are limited and need to be shifted to meet new missions, such as performing competitions, other tasks or activities may be delayed or not performed. DOD initially established a benchmark estimate for competition costs of $2,000 per position. The benchmark was based on an Air Force analysis of the costs it incurred in performing A-76 studies with in-house personnel. However, that analysis did not include an estimate of costs for developing in-house most efficient organizations, raising concerns that it may have understated the magnitude of the needed resources. In DOD’s April 1998 report, the military services estimated different investment costs, some of which were higher than the DOD benchmark. The Air Force did not identify costs because it planned to use only existing in-house staff to perform the work, but will now augment some studies with contractor support. The Navy, however, identified only estimated contractor support costs of over $2,400 per position for conducting the competitions. The Army based its investment cost estimate only on contractor support costs of $1,000 for each civilian position, but it did not include funding for competing military positions. The Marine Corps estimated that its competitions would cost $6,700 per position and that at least 80 percent of this would fund contractor support. Various officials told us that the resource requirements for the studies are much greater than both DOD’s $2,000 benchmark and their services’ own initial estimates. For example, officials at one Army major command estimated that they would employ about $28 million in resources for their competitions—$4 million for centrally funded contractor support costs and $24 million for existing in-house staff—to compete 4,000 positions in a multifunction, multilocation study, or at least $7,000 per position. One Navy command estimated that it would incur about $15 million in costs—$2.8 million for contractor support and $12.2 million for existing in-house staff—to compete close to 1,930 positions at various locations, or about $7,800 per position. A second Navy command estimated that it was spending between $7,000 and $9,000 per position—about $2,000 for centrally funded contractor support and between $5,000 and $7,000 for existing in-house staff—to conduct competitions. Command officials stated that the command had not received any additional funding for the competitions and that it would therefore have to provide the additional resources. The large number of competitions planned for the future could necessitate a change in the mix of in-house and contractor personnel required to support the planned competitions. Such changes would affect the extent to which additional funding outlays could be required in addition to those already associated with in-house personnel. While none of the services has yet fully determined the staff resources necessary to implement its competition program, some service officials have expressed concern about their ability to provide sufficient existing in-house staff as the number of ongoing studies increases and about the potential effect on other mission requirements of devoting available resources to meet competition needs.
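The per-position figures cited by these commands follow directly from dividing total study resources by the number of positions competed, as the short calculation below shows. The command labels are generic because the report does not name the commands; the dollar and position figures are those cited above.

```python
# Per-position study costs implied by the command estimates cited above.
benchmark_per_position = 2_000  # DOD's initial benchmark (dollars)

estimates = [
    ("Army major command", 28_000_000, 4_000),
    ("Navy command",       15_000_000, 1_930),
]

for command, total_cost, positions in estimates:
    per_position = total_cost / positions
    ratio = per_position / benchmark_per_position
    print(f"{command}: about ${per_position:,.0f} per position, "
          f"roughly {ratio:.1f} times the $2,000 benchmark")
```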
Some officials have already begun to express concern about the adequacy of their resources to initiate and complete ongoing competitions and to deal with other ongoing mission responsibilities. Officials at one Army command stated that they have finite resources to accomplish their overall missions and tasks. If one mission, such as performing competitions, is given command priority, resources are shifted to meet that priority, and other tasks or activities may be delayed or not performed. The large increase in the number of competitions expected to be ongoing in fiscal years 1999 and 2000 is likely to greatly increase resource requirements. Unless DOD components allocate sufficient resources, they may be unable to initiate or complete previously announced competitions within reasonable time frames. The pressure to complete such a large volume of competitions at one time increases the risk of poorly developed performance work statements, which have historically been cited as a problem area in the competitions. Poor performance work statements require subsequent revisions, reducing savings below the levels initially expected. In July 1998, DOD issued guidance directing the components, when preparing operation and maintenance budget justification material for the fiscal year 2000 defense budget, to (1) report actual and projected competition costs, (2) explain the methodology used to develop the costs, and (3) justify deviations from the average cost of $2,000 per position. This information should become available when DOD releases its fiscal year 2000 budget request. Except for the Navy, the services understated investment costs because they did not include separation costs for civilian and military DOD employees who lose their jobs as a result of competitions won by the private sector. Implementation costs may also be incurred when in-house organizations win the competitions and the most efficient organizations require a smaller workforce. Assuming that the private sector continues to win competitions at the historic rate of 50 percent as determined by the Center for Naval Analyses, DOD could transfer work involving more than 100,000 positions to the private sector over the next several years if it meets its goal of competing over 225,000 positions. Many of the affected civilian government workers could receive some form of separation pay. The Army, for example, estimated an average cost of $21,000 per person separated. This average covers the costs of voluntary early retirement, voluntary separation incentives, and involuntary separations through reduction-in-force procedures. The Navy estimated an average of $25,000 per person and the Air Force an average of $33,000, of which $25,000 would be funded by headquarters and $8,000 would be funded by individual commands. Even if some affected employees fill other positions through DOD’s priority placement program, significant numbers of government personnel could still be separated. On the basis of its average separation cost of $21,000 per employee, the Army’s Program Analysis and Evaluation office conservatively programmed separation costs of about $200 million for only 9,600 employees. The office recognized, however, that the Army would likely separate more personnel. The Air Force programmed $10 million in civilian separation costs for fiscal year 1999 and programs only 1 year in advance.
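Separation costs scale directly with the number of employees separated, which is why programming them for too few separations understates investment costs. The sketch below reproduces the Army’s programmed estimate at its $21,000 average cost and shows how the total grows if more employees are separated; the 15,000 and 20,000 separation counts are hypothetical.

```python
# Reproduces the Army's programmed separation cost estimate cited above and
# shows how the total scales with the number of employees separated.
# The 15,000 and 20,000 separation counts are hypothetical illustrations.

AVERAGE_SEPARATION_COST = 21_000  # Army estimate, dollars per person separated

def programmed_separation_cost(employees_separated):
    """Total separation cost at the Army's average cost per person."""
    return employees_separated * AVERAGE_SEPARATION_COST

for separated in (9_600, 15_000, 20_000):
    total = programmed_separation_cost(separated)
    print(f"{separated:>6,} employees separated: about ${total / 1e6:.0f} million")

# At 9,600 employees the total is roughly $200 million, the amount the Army's
# Program Analysis and Evaluation office conservatively programmed.
```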
The Navy did not program any separation costs and will not have a programming estimate until January 1999 for inclusion in the fiscal year 2000 budget. However, the Navy’s competitive sourcing office projected civilian separation costs of $819 million for 68,250 civilian positions and deducted these costs to reach the $2.5 billion savings estimate through fiscal year 2003 that was reported to Congress in 1998. In July 1998, DOD issued budget guidance directing defense organizations, when preparing their operation and maintenance justification materials for the fiscal year 2000 defense budget, to report transition costs (such as separation pay and voluntary separation incentive pay) they plan to incur and to disclose the methodology and cost categories used to determine those costs. We have previously reported that, on average, the cost of civilian personnel is less than the cost of military personnel. We have also reported that the conversion of positions from military to civilian (either government or contractor) as part of the competitive sourcing process could save money, assuming that the elimination of the military positions results in corresponding reductions in the authorized end strengths. Such reductions are not expected for the Army, where competitive sourcing eliminates requirements for military positions without corresponding reductions in authorized end strength. The Army’s 1998 plan called for competing 8,414 military positions. However, while the Army plans to convert all military positions competed to civilian or contractor positions, it does not expect to take equivalent reductions in military end strength. Rather, it expects to use military personnel released as a result of the competitions to fill other priorities. Thus, the Army’s overall costs will increase by the cost of civilian or contractor personnel selected to replace these military personnel. At the same time, the Army will have to absorb some increases in operations and maintenance costs without additional funding for increased civilian government or contractor costs. As mentioned previously, planned competitions will probably take longer than initially projected. In addition to increasing the study costs, this will also delay net savings. Meanwhile, the services are voicing concerns about their ability to meet the savings targets needed to offset operating budget reductions taken in advance and in anticipation of the savings. In launching the competition program, DOD and its components made assumptions about the amount of time needed to complete these competitions. DOD’s guidance for preparation of the fiscal year 2000 defense budget indicates that competitions should typically take about 24 months to complete. The Army and the Navy initially set more optimistic goals, but many service officials later came to believe that many studies would take longer than 24 months. The Army’s Office of the Assistant Chief of Staff for Installation Management has served as the primary lead for Headquarters, Department of the Army, on issues affecting the implementation of the competitions. This office set competition goals of 13 months for up to 100 positions, 18 months for 101 to 600 positions, and 21 months for over 600 positions. However, Army officials recently expressed some concern about their ability to meet this schedule. For example, the Army’s Training and Doctrine Command is conducting a command-wide study of 4,001 positions at 12 installations.
This was announced to Congress in November 1996, and Army headquarters projected completion within 24 months. However, because the start of the study was delayed by 6 months and because the competitions cover multiple functions at 12 locations, with phased implementation, the command currently projects completing the last installation by February 2000, or 39 months after it was announced. Initially, the Navy projected completing its competitions in 12 months, but it revised its assumptions when preparing its 1998 plan because some competitions were taking longer. The 1998 plan estimates that competitions will take between 12 and 36 months, depending on their complexity, including whether they involve single or multiple functions. An Air Force official responsible for program oversight told us that the Air Force currently projects completing competitions within 24 to 48 months and that it expects to meet these time limits. Since DOD began to emphasize competitions, the goals for the competitions have evolved and grown, even though some DOD components have had difficulties in meeting recent goals for announcing competitions. Some components have expressed concerns about these goals. OSD officials responsible for monitoring the program consider execution the biggest risk factor. DOD did not have under study all of the positions it planned to study in fiscal years 1997 and 1998 because some announced competitions were later canceled and others had not yet begun. DOD’s components planned to announce competitions involving 37,040 positions in fiscal year 1997, but, after cancellations and delayed starts of competitions, they had at most 34,997 positions under study. In fiscal year 1998, DOD’s components expected to announce competitions involving 52,630 positions, but due to shortfalls by the Navy, the Marine Corps, and the Air Force, they announced plans for competing only 35,710 positions and, after cancellations and delayed starts of competitions, had at most 32,229 positions under study. According to a Navy official, the Navy was unable to meet its fiscal year 1998 announcement goals because implementing concurrent initiatives such as competitions, regionalization, and consolidation, and meeting mission and mission support requirements, stretched available personnel and financial resources. While it did not change the total number of positions it planned to compete between 1997 and 1998, the Navy did change the mix of military and civilian positions and some of its other planning assumptions to meet readiness needs and maintain its projected level of savings. In its fiscal year 1997 plan, the Navy projected competing 30,000 military positions. However, in response to growing concerns about the effect of competitions on the military positions needed to meet sea-shore rotation requirements and other concerns, the Navy in 1998 reduced the number of military positions it would compete by 20,000 and increased the number of civilian positions it planned to compete by the same number. Such a change could mean significantly lower savings, since military positions are recognized as relatively more costly to the government. Officials in the Air Force’s competitive sourcing and privatization office said that the Air Force, in developing its fiscal year 2000 budget request, reduced the total number of positions it planned to compete between fiscal year 1998 and 2003 by 23,976 (48 percent).
The Air Force did so after completing an analysis and determining that some positions were not viable candidates because some positions were being double counted with ongoing base closure reductions and other positions were not practical to package for competition. However, the officials also said that OSD only agreed to a reduction of about 10,600 positions. Additionally, the Air Force proposed reengineering various functions to achieve additional savings of about $700 million, about $116 million more than was planned to have been saved with the competitions. As previously noted, Marine Corps officials have indicated that they do not believe they can meet their savings goals with the number of positions currently planned for competition. The officials said that the Marine Corps plans to increase the number of positions to be competed from 5,000 to about 6,200. One difficulty the services are likely to face as they try to identify more competition candidates is the continuing reduction in personnel caused by other ongoing defense reform efforts, cuts mandated by the Quadrennial Defense Review, or other initiatives. Reductions are also planned as a result of legislative requirements. These other ongoing defense reforms could limit the number of positions ultimately available for competition under the competitive sourcing program. Various service officials pointed to extensive reductions in base operating support budgets in recent years and expressed concern about the additional reductions that are expected in addition to cuts associated with competitions. They expressed concern about their ability to absorb further reductions “out of hide” should they miss their competition savings goals. Recently, officials in all of the services have voiced concerns about their ability to meet the savings goals established by OSD and the resulting effects, especially considering that the savings have already been taken out of future years’ operating budget estimates. For example, an Air Force official told us that the Air Force’s major commands will fall short of A-76 savings by about $141 million in fiscal years 1998 and 1999 and that they will have to absorb these shortfalls. Another Air Force official said that most major commands are concerned about the effects of funding A-76 competitions and of personnel separation costs on their installations. Army officials, based on work by the Army Audit Agency, have expressed concern that delayed competition starts could reduce the Army’s proposed fiscal year 2000 budgeted gross savings of $1.6 billion for fiscal years 1999 to 2003 by nearly $219 million—assuming the competitions are completed within the time frames initially projected, something which the officials consider unlikely. Another Army official indicated that even if the Army can complete all of its targeted competitions by 2003, it may take another 1 to 2 years to implement the results, reduce the workforce, and begin achieving the targeted savings. Additionally, the Army Audit Agency recently stated that the Army’s installations and major commands estimate that it will take about 50 percent longer than the time established by the Assistant Chief of Staff for Installation Management to complete the competitions and achieve the expected savings. Further, an official at the Army’s Training and Doctrine Command stated that the command would not meet its $62-million savings goal in fiscal year 1999 and most of fiscal year 2000. 
The official stated that the competitions are taking longer than Army headquarters officials projected and that could result in an operations and maintenance funding shortfall. Officials at the Naval Sea Systems Command stated that they do not believe A-76 competitions alone will be enough for the command to meet its savings goals because there are not enough positions to compete. While the command has a goal to compete 16,415 civilian positions, it had only 7,179 positions categorized as suitable for competition as of October 1998, after its commercial activities inventory review. Since the commercial activities inventory review was still ongoing when we completed our review, we were not able to obtain information on its overall results. In addition, the Navy’s acquisition executive stated in April 1998 that while the Navy would do everything possible to absorb the savings goals, he did not see any way to do this. He established a Process Action Team to review the competitive sourcing program because he believed that the savings were considerably overstated and would result in even more instability in the procurement account. Although Marine Corps officials told us that they expected to increase the number of positions to be competed, they also said, at another point, that they could not meet their savings target through A-76 competitions alone. They said they would attempt to make up the shortfall through alternative reform initiatives such as consolidation, regionalization of existing functions, and greater use of technology. DOD has provided the needed high-level emphasis, momentum, and sponsorship to energize its competition program and has identified what some have referred to as “stretch goals” in characterizing the larger number of positions to be competed. However, comprehensive planning among the services to identify specific functions and locations for competition has been limited. Detailed planning to implement the program has been largely delegated to components and field activities. These activities are responsible for determining the specific functions that are suitable candidates for competition and whether there are sufficient positions to meet overall competition goals. Such planning is needed to better identify long-term resource needs, especially considering the volume of studies likely to be under way in the future. To date, the Air Force appears to have performed the most detailed multiyear implementation analysis of its ability to attain its competition goals. The Army, the Navy, and the Marine Corps have not performed a multiyear implementation analysis by function and location, and the Navy and the Marine Corps were unable to provide us with plans of the numbers of positions for competition and projected savings for each major command through fiscal year 2003. The Navy and the Marine Corps are currently developing multiyear competitive sourcing implementation analysis by function and location. The Navy analysis for fiscal years 1999 to 2001 is scheduled to be completed by June 1999 and the Marine Corps plan for fiscal years 2000 to 2002 is expected to be completed by April 1999. According to service officials, some or all of the major commands were given numbers of positions to compete and savings goals, and it is up to them to determine how best to meet the goals. The Navy started developing a strategic plan for competitions in September 1997, about 2 years after the Chief of Naval Operations revitalized the Navy’s competition program. 
In response to a prior GAO recommendation, the Navy expects to develop a detailed 5-year plan as part of its overall strategy and expects the major commands to develop an execution plan. The Navy expects to have a reasonable and achievable strategic plan for competitions by early fiscal year 1999. The extent to which this strategic plan will be based on a detailed implementation analysis is unknown at this time. The Army published a competition strategy in September 1998 but has not conducted a detailed implementation analysis of the program to assess its executability. The strategy lays out a number of high-level goals and identifies ways to meet them. In implementing its strategy, the Army is placing primary responsibility for selecting and prioritizing functions and conducting competitions on the installation commanders. Army officials told us that each year the major commands develop a plan that identifies the functions the commands will study at their installations that fiscal year. If the major commands do not achieve the programmed savings from competitions, they must achieve the savings through other efficiencies or local personnel management actions. The Army has also established a competitive sourcing and privatization Integrated Process Team and made it responsible for recommending a new management structure to oversee the program and changes to streamline processes. Team officials recommended that the Army develop competition plans for the fiscal year 2001 to 2005 time frame by April 1999. The recommendations were made on November 19, 1998, and are currently awaiting approval from the Army Vice Chief of Staff. The extent to which the competition plans will be based on a detailed implementation analysis is unknown at this time. OSD, on December 9, 1998, directed each component to develop multiyear competition plans consistent with and presented at the same time as their fiscal year 2001 to 2005 Program Objective Memorandum. OSD directed that these plans should include, by fiscal year, the functions and numbers of positions to be competed. DOD has established an ambitious competition program as a means of reducing its infrastructure support costs and increasing funding available for modernization and procurement. Establishing realistic competition and savings goals is key to achieving the program’s desired results. However, DOD’s savings projections have not adequately accounted for the costs of conducting the competitions. These costs could significantly reduce DOD’s expected level of savings in the short term. In addition, the planned competitions are likely to take longer than initially projected, further reducing the annual savings that will be realized. Consequently, the estimated savings between fiscal years 1997 and 2003 are overstated. The effects of failing to realize these annual savings could be significant, since DOD has already reduced future operating budget estimates to take into account the estimated savings. Also, the number of competitions DOD expects to complete over the next several years continues to increase, even as difficulties in meeting previous goals grow. Service officials are increasingly expressing concern about their ability to meet these targets, especially considering the unprecedented number of competitions that are planned to be ongoing simultaneously in the near future.
Finally, we believe there is merit to this concern because most components lack detailed plans and analyses to help determine whether the numbers of positions to be competed are practical. We recommend that the Secretary of Defense require the DOD components to assess the extent to which available resources are sufficient to execute the numbers of planned competitions within the time frames envisioned and to make such adjustments as needed to ensure adequate program execution. In doing so, we also recommend that the Secretary require the components to reexamine and adjust as necessary the competitive sourcing study targets, milestones, expected net short-term savings, and the planned operating budget reductions. In commenting on a draft of this report, DOD concurred with our conclusions and recommendations (see appendix III). However, the response also indicated that DOD does not believe that its components have completed enough studies since fiscal year 1997 to establish a baseline that would necessitate the reevaluation of competitive sourcing milestones and objectives at this time, as our report recommends. DOD noted that the Deputy Secretary of Defense had proposed a number of initiatives in a December 9, 1998, memorandum to the Defense components that will make better use of existing resources devoted to competitive sourcing studies. However, DOD did not indicate at what point it would establish a new baseline. We continue to believe that DOD has sufficient reason to reassess the competitive sourcing study targets, milestones, expected short-term savings, and planned operating budget reductions now. The issues at hand involve more than the number of competitions completed; they also involve the extent to which planned competition announcements have occurred and whether there are sufficient resources to complete them. This is of particular concern given the large number of studies planned for announcement in fiscal years 1998 and 1999 and the delays encountered in getting the fiscal year 1998 studies underway. If similar delays are encountered in fiscal year 1999, they could seriously affect future program execution and DOD’s ability to achieve results in a timely manner. Accordingly, an important part of any reassessment should also include examining the components’ progress in developing detailed implementation plans; such plans will have a direct bearing on resource requirements. DOD also provided technical comments to the draft, which we have incorporated as appropriate. We are sending copies of this report to the Chairmen of the Senate Committees on Armed Services and on Appropriations and of the House Committees on Armed Services and on Appropriations; the Secretaries of Defense, the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are in appendix IV. For this report, we (1) identified the Department of Defense’s (DOD) competitive sourcing study and savings goals, (2) assessed the accuracy of the savings estimates provided to Congress, and (3) evaluated the adequacy of program planning to support the overall program.
To determine the Office of the Secretary of Defense’s (OSD) process for managing the A-76 savings targets, we met with representatives of the Defense Management Council, including OSD’s Office of Program Analysis and Evaluation and the A-76 Task Force in the Office of the Deputy Under Secretary of Defense for Industrial Affairs and Installations. To identify DOD’s competitive sourcing study and savings goals and assess how well investment costs are reflected in the savings estimates, we obtained and analyzed the planning assumptions each military service and OSD used. We did not use the budget justification material on competitive sourcing contained in the 1999 to 2003 Future Years Defense Plan (FYDP) because of certain limitations in the data. Instead, we obtained comprehensive information from each service on the numbers of positions planned for study that served as the basis for projected savings between 1997 and 2003, which included studies in one service that began as early as 1993. Therefore, while the 1999 FYDP material lists studies of approximately 214,000 positions, our discussions with DOD components identified over 225,000 positions either already competed, under study, or planned for competition. We also obtained information on unrecognized costs, such as separation costs, from the Air Force’s and the Navy’s comptroller’s offices, as well as the Army’s Office of Program Analysis and Evaluation. We did not determine the reliability of the cost information provided by these offices. We met with officials from the Center for Naval Analyses to discuss their work on competitive sourcing within DOD, and obtained copies of their reports. We also spoke with responsible OSD, service, and installation officials, including manpower, contracting, and financial management officials, to obtain information on the personnel resources required to conduct the studies and their ongoing efforts to reform their commercial activities databases. To determine the extent to which uncertainties exist about meeting study goals and savings targets in the projected time frames, we met with responsible officials from OSD, the Army, the Air Force, the Navy, and the Marine Corps and contacted officials from defense agencies and installations. We obtained documentation on past, ongoing, and planned A-76 studies. To evaluate the adequacy of the advance planning to support the effort underway, we met with representatives of the Defense Management Council and the A-76 Task Force, as well as cognizant service officials, to discuss their oversight role and the program implementation risks. We also researched relevant laws cited by officials. We performed much of our work in Washington, D.C. However, we also conducted work at the Air Force Air Education and Training Command, San Antonio, Texas; the U.S. Army Training and Doctrine Command, Fort Monroe, Virginia; and the Naval Air Systems Command, Patuxent River, Maryland. We conducted our review from September 1997 to December 1998 in accordance with generally accepted government auditing standards.
In general, the A-76 process consists of six key activities: (1) developing a performance work statement and quality assurance surveillance plan; (2) conducting a management study to determine the government’s most efficient organization (MEO); (3) developing an in-house government cost estimate for the MEO; (4) issuing a Request for Proposals or Invitation for Bids; (5) evaluating the proposals or bids, comparing the in-house estimate with a private sector offer or interservice support agreement, and selecting the winner of the cost comparison; and (6) addressing any appeals submitted under the administrative appeals process, which is designed to ensure that all costs are fair, accurate, and calculated in the manner prescribed by the A-76 handbook. Figure II.1 shows an overview of the process. The solid lines indicate the process used when the government issues an Invitation for Bids, requesting firm bids on the cost of performing a commercial activity. This type of process is normally used for more routine commercial activities, such as grass-cutting or cafeteria operations, where the work process and requirements are well defined. The dotted lines indicate the additional steps that take place when the government wants to pursue a negotiated, “best value” procurement. While it may not be appropriate for use in all cases, this type of process is often used when the commercial activity involves high levels of complexity, expertise, and risk. (The figure also distinguishes the most efficient organization (MEO) activities from the additional steps required for a Request for Proposals (RFP).) The circular requires the government to develop a performance work statement. This statement, which is incorporated into either the Invitation for Bids or Request for Proposals, serves as the basis for both government estimates and private sector offers. If the Invitation for Bids process is used, each private sector company develops and submits a bid, giving its firm price for performing the commercial activity. While this process is taking place, the government activity performs a management study to determine the most efficient and effective way of performing the activity with in-house staff. Based on this “most efficient organization,” the government develops a cost estimate and submits it to the selecting authority. The selecting authority concurrently opens the government’s estimate along with the bids of all private sector firms. According to the Office of Management and Budget’s (OMB) A-76 guidance, the government’s in-house estimate wins the competition unless the private sector’s offer meets a threshold of savings that is at least 10 percent of direct personnel costs or $10 million over the performance period. This minimum cost differential was established by OMB to ensure that the government would not contract out for marginal estimated savings. If the Request for Proposals (best value) process is used, the Federal Procurement Regulation and the A-76 supplemental handbook require several additional steps. The private sector offerors submit proposals that often include a technical performance proposal and a price. The government prepares an in-house management plan and cost estimate based strictly on the performance work statement. On the other hand, private sector proposals can offer a higher level of performance or service.
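Under either approach, whether work converts to the private sector ultimately turns on the minimum cost differential described above. The sketch below is our own illustration of that decision rule; the function name and the dollar figures are hypothetical and are not drawn from the A-76 handbook.

```python
# Illustrative sketch of the A-76 minimum cost differential described above:
# work converts to the private sector only if the private offer saves at least
# 10 percent of direct personnel costs or at least $10 million over the
# performance period (meeting either threshold suffices). Figures are hypothetical.

def private_sector_wins(in_house_cost, private_offer, direct_personnel_costs):
    """Apply the minimum cost differential to a completed cost comparison."""
    savings = in_house_cost - private_offer
    threshold = min(0.10 * direct_personnel_costs, 10_000_000)
    return savings >= threshold

if __name__ == "__main__":
    # Hypothetical comparison: a $48 million in-house estimate, of which
    # $30 million is direct personnel costs, against a $46 million private offer.
    if private_sector_wins(48_000_000, 46_000_000, 30_000_000):
        print("Work converts to the private sector")
    else:
        # Savings of $2 million fall short of the $3 million threshold.
        print("Work remains in-house")
```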
The government’s selection authority reviews the private sector proposals to determine which one represents the best overall value to the government based on such considerations as (1) higher performance levels, (2) lower proposal risk, (3) better past performance, and (4) cost to do the work. After the completion of this analysis, the selection authority prepares a written justification supporting its decision. This includes the basis for selecting a contractor other than the one that offered the lowest price to the government. Next, the authority evaluates the government’s offer and determines whether it can achieve the same level of performance and quality as the selected private sector proposal. If not, the government must then make changes to meet the performance standards accepted by the authority. This ensures that the in-house cost estimate is based upon the same scope of work and performance levels as the best value private sector offer. After determining that the offers are based on the same level of performance, the cost estimates are compared. As with the Invitation for Bids process, the work will remain in-house unless the private offer is (1) 10 percent less in direct personnel costs or (2) $10 million less over the performance period. Participants in either the Invitation for Bids or the Request for Proposals process may appeal the selection authority’s decision if they believe the costs submitted by one or more of the participants were not fair, accurate, or calculated in the manner prescribed by the A-76 handbook.
OMB Circular A-76: Oversight and Implementation Issues (GAO/T-GGD-98-146, June 4, 1998).
Quadrennial Defense Review: Some Personnel Cuts and Associated Savings May Not Be Achieved (GAO/NSIAD-98-100, Apr. 30, 1998).
Competitive Contracting: Information Related to the Redrafts of the Freedom From Government Competition Act (GAO/GGD/NSIAD-98-167R, Apr. 27, 1998).
Defense Outsourcing: Impact on Navy Sea-Shore Rotations (GAO/NSIAD-98-107, Apr. 21, 1998).
Defense Infrastructure: Challenges Facing DOD in Implementing Defense Reform Initiatives (GAO/T-NSIAD-98-115, Mar. 18, 1998).
Defense Management: Challenges Facing DOD in Implementing Defense Reform Initiatives (GAO/T-NSIAD/AIMD-98-122, Mar. 13, 1998).
Base Operations: DOD’s Use of Single Contracts for Multiple Support Services (GAO/NSIAD-98-82, Feb. 27, 1998).
Defense Outsourcing: Better Data Needed to Support Overhead Rates for A-76 Studies (GAO/NSIAD-98-62, Feb. 27, 1998).
Outsourcing DOD Logistics: Savings Achievable But Defense Science Board’s Projections Are Overstated (GAO/NSIAD-98-48, Dec. 8, 1997).
Financial Management: Outsourcing of Finance and Accounting Functions (GAO/AIMD/NSIAD-98-43, Oct. 17, 1997).
Base Operations: Contracting for Firefighters and Security Guards (GAO/NSIAD-97-200BR, Sept. 12, 1997).
Terms Related to Privatization Activities and Processes (GAO/GGD-97-121, July 1997).
Defense Outsourcing: Challenges Facing DOD as It Attempts to Save Billions in Infrastructure Costs (GAO/T-NSIAD-97-110, Mar. 12, 1997).
Base Operations: Challenges Confronting DOD as It Renews Emphasis on Outsourcing (GAO/NSIAD-97-86, Mar. 11, 1997).
Public-Private Mix: Effectiveness and Performance of GSA’s In-House and Contracted Services (GAO/GGD-95-204, Sept. 29, 1995).
Government Contractors: An Overview of the Federal Contracting-Out Program (GAO/T-GGD-95-131, Mar. 29, 1995).
Government Contractors: Are Service Contractors Performing Inherently Governmental Functions? (GAO/GGD-92-11, Nov. 18, 1991).
OMB Circular A-76: Legislation Has Curbed Many Cost Studies in Military Services (GAO/GGD-91-100, July 30, 1991).
OMB Circular A-76: DOD’s Reported Savings Figures Are Incomplete and Inaccurate (GAO/GGD-90-58, Mar. 15, 1990).
Satellites provide many significant services, including communication, navigation, remote sensing, imaging, and weather and meteorological support. Satellites support direct radio communication and provide television broadcast and cable relay services, as well as home reception. Satellite services also support applications such as mobile and cellular communication, telemedicine, cargo tracking, point-of-sale transactions, and Internet access. Satellites also provide redundancy and backup capabilities to ground-based communications, as was demonstrated after the events of September 11, 2001, when satellites provided critical communications while ground-based lines were unavailable. The commercial satellite industry includes manufacturers, the launch industry, service providers, and ground equipment manufacturers. Manufacturers design and build satellites, supporting systems, and ground stations. The launch industry uses launch vehicles, powered by rocket engines, to place satellites in orbit. Once commercial satellites are in orbit, they are operated by service providers, who lease available services. Commercial satellite service clients include telecommunication companies, television networks, financial institutions, major retailers, Internet service providers, and governments. Some companies resell leased satellite services to their clients. For example, major telecommunication companies sometimes include satellite services in their product line. Ground equipment manufacturers build and sell the items needed to use satellite services, such as ground station hardware (antennas), data terminals, mobile terminals (truck-mounted units), and consumer electronics (satellite phones). For the year 2000, the commercial satellite industry generated revenues of $85.1 billion: $17.2 billion for satellite manufacturing, $8.5 billion for the launch industry, $41.7 billion for satellite services, and $17.7 billion for ground equipment manufacturing, according to an industry association. Federal agencies also own and operate satellites. For example, the U.S. military and intelligence communities have satellites to provide capabilities for reconnaissance, surveillance, early warning of missile launches, weather forecasts, navigation, and communications. In addition, some federal civilian agencies own satellites that are used for communications, scientific studies, and weather forecasting. Further, federal agencies use commercial satellites for services such as communications, data transmission, and remote sensing. For example, DOD typically relies on commercial satellites to fulfill its communications and information transmission requirements for non–mission-critical data and to augment its military satellite capabilities. The National Defense Industrial Association (NDIA) reported in December 1998 that the government’s overall use of commercial satellites for communications and remote sensing is expected to grow significantly because of increased communications requirements. According to a DOD official, the department’s reliance on commercial satellites is expected to grow through 2020. After 2020, DOD officials anticipate that commercial satellites will provide only surge capacity, as additional military satellites are expected to be operational. In addition to the U.S. military, several civilian government agencies also rely on commercial satellite systems. Table 1 provides brief descriptions of the use of commercial satellites by four civilian agencies included in our review. 
Collectively, the federal government does not dominate the commercial satellite market. According to commercial satellite industry officials, the revenue provided to the satellite industry by the federal government represents about 10 percent of the commercial satellite market. However, the importance of commercial satellites for government operations is evident during times of conflict. For example, according to a DOD study, commercial communications satellites were used in 45 percent of all communications between the United States and the Persian Gulf region during Desert Shield/Desert Storm. Further, during operations in Somalia from December 1992 through March 1994, U.S. military and commercial satellite coverage was not available, so Russian commercial satellites were used. DOD currently reports approximately 50 percent reliance on commercial satellites for wideband services, which are leased through the Defense Information Systems Agency’s Commercial Satellite Communications Branch. The commercial satellite industry is a global industry that includes many foreign-owned corporations as well as partnerships between U.S. and foreign corporations. As a result, the U.S. government depends on foreign and international companies. For example, some commercial space systems of foreign origin are used by the U.S. military for imagery and communications support. NDIA reported that foreign ownership of satellites is expected to grow and predicted that by 2010, 80 percent of commercial communication satellite services could be provided by foreign- owned companies. This globalization of the satellite industry could affect the availability of commercial satellite systems to U.S. government or commercial entities through frequency allocations, tariffs, politics, and international law. A satellite system consists of ground stations, tracking and control links (commonly referred to as the tracking, telemetry, and control (TT&C) links) and data links, and satellites. Figure 1 illustrates the basic satellite system components. As the figure shows, two kinds of ground stations are associated with satellites: control stations and communications stations. Control stations perform tracking and control functions to ensure that satellites remain in the proper orbits (commonly referred to by the industry as “station keeping”) and to monitor their performance. Communications ground stations process imagery, voice, or other data and provide, in many cases, a link to ground-based terrestrial network interconnections. The links between the two types of ground stations and the satellites are referred to by their function: TT&C and data links. TT&C links exchange commands and status information between control ground stations and satellites. Data links exchange communications, navigation, and imaging data between communications ground stations and satellites. As shown in figure 1, links are also distinguished by the direction of transmission: uplinks go from Earth to space, and downlinks from space to Earth. Satellites can also communicate with each other; these links are referred to as cross-links. The final component of the system is the satellite. Every satellite has a “payload” and a “bus.” The payload contains all the equipment a satellite needs to perform its function, and it differs for every type of satellite. 
For example, the payload for a weather satellite includes cameras to take pictures of cloud formations, while the payload for a communications satellite includes transponders to relay data (for example, television or telephone signals). The bus carries the payload and additional equipment into space and provides electrical power, computers, and propulsion to the entire spacecraft. A satellite can serve simply as a relay between a source and a destination (for example, a communications satellite), or it can perform processing of data and communicate the data to a communications ground station (for example, an imaging satellite). Satellite systems face unintentional threats to all parts of the system; such threats can be ground-based, space-based, and interference-oriented. The probability of these threats occurring and the difficulty of exploiting these vulnerabilities vary. Table 2 displays some of these threats and the vulnerable components. Ground stations are vulnerable to damage or destruction by natural terrestrial threats such as earthquakes, floods, thunderstorms, lightning, dust storms, heavy snows, tropical storms, tornadoes, corrosive sea spray, and salt air. In addition, they could also be affected by natural conditions and environmental hazards, such as air pollution and adverse temperature environments, as well as power outages. Satellites are physically vulnerable to space-based environmental anomalies resulting from natural conditions and man-made artifacts. Space-based threats include solar and cosmic radiation and related phenomena, solar disturbances, temperature variations, and natural objects (meteoroids and asteroids). In addition, the growing number of satellites is contributing to the problem of space “junk” (spacecraft and debris). As of May 2002, DOD identified over 9,000 man-made objects in space, including active satellites. As additional satellites are developed and deployed, DOD officials stated that the threat of collisions caused by the proliferation of satellites and accompanying debris could increase. Links are vulnerable both to natural conditions (in space and in the atmosphere) and to congestion. Links can be severely degraded by the effects of solar activity and atmospheric and solar disturbances. Both orbital and spectral congestion are a threat to links (as well as to satellites). Such congestion may restrict the future use of potential orbits and frequencies and cause unintentional interference to satellite services. According to one commercial service provider, satellite service providers worldwide work together to resolve interference problems, which are common. In addition, commercial satellite interference is regulated both internationally and nationally. The International Telecommunication Union specifies interference resolution policies and procedures, including those for harmful interference. Further, within the United States, the Federal Communications Commission (FCC) has the capability to track the location of interference, at a service provider’s request. Also, service providers told us that they could locate and identify unintentional or unauthorized users through a technique called triangulation. Once an unauthorized user is located, a commercial service provider can jam that user’s signal if the user cannot be persuaded to stop using the satellite. 
However, according to industry officials, typically an unauthorized user would be identified, located, and contacted through a combination of industry and government resources before such jamming would be needed. In addition, satellite systems are vulnerable to many forms of intentional human attacks that are intended to destroy ground stations and satellites or interfere with the TT&C links, data links, and cross-links. According to DOD and the private sector, the probability of these threats occurring and the difficulty of exploiting these vulnerabilities vary. Table 3 shows some of these intentional threats. All types of ground stations are potentially vulnerable to threats of physical attack and sabotage. These threats could target all satellite ground components, including launch facilities, command and control facilities, and supporting infrastructures. Space-based threats to satellites are proliferating as a result of the growing availability of technology around the world. According to DOD, potential space-based weapons include interceptors, such as space mines and orbiting space-to-space missiles, and directed-energy weapons. Directed-energy weapons include ground-based, airborne, and space-based weapons that use laser energy to damage or destroy satellite services, and nuclear weapons that generate nuclear radiation and electronic pulses, resulting in direct damage to the orbital electronics by the primary and secondary effects of a detonation. Ground stations, links, and supporting communications networks are all vulnerable to cyber attacks. Potential cyber attacks include denial of service, malicious software, unauthorized monitoring and disclosure of sensitive information (data interception), injection of fake signals or traffic (“spoofing”), and unauthorized modification or deliberate corruption of network information, services, and databases. For example, malicious software (such as computer viruses) can be (1) implanted into computer systems during development or inserted during operations; (2) used to manipulate network protocols, deny data or service, destroy data or software, and corrupt, modify, or compromise data; and (3) used to attack processor-controlled transmission equipment, control systems, or the information being passed. Links are particularly susceptible to electronic interference threats capable of disrupting or denying satellite communications. These threats include spoofing and jamming. A spoofer emits false, but plausible, signals for deception purposes. If false commands could be inserted into a satellite’s command receiver (spoofing the receiver), they could cause the spacecraft to tumble or otherwise destroy itself. It is also feasible to insert false information or computer viruses into the terrestrial computer networks associated with a space system, either remotely or through an on-site connection. Such an attack could lead to space system degradation or even complete loss of spacecraft utility. A jammer emits noise-like signals in an effort to mask or prevent the reception of desired signals and can be used to disrupt uplinks, downlinks, and cross-links. An uplink jammer attempts to inject noise or some other signal into the targeted satellites’ uplink receivers. In general, an uplink jammer must be roughly as powerful as the emitter associated with the link being jammed. Downlink jamming attempts to inject noise or some other signal directly into earth terminal receivers.
The targets of downlink jammers are ground-based satellite data receivers, ranging from large fixed ground sites to handheld Global Positioning System (GPS) user terminals. Since downlink jammers have a range advantage over the space-based emitters, they can often be much less powerful. Downlink jamming is generally easier to accomplish than uplink jamming, since very low-power jammers are often suitable. Since a downlink may be received by multiple earth terminals, it is often more difficult to jam more than a few earth terminals through downlink jamming than through uplink jamming, especially if the receiver terminals are dispersed across a significant geographical area. A cross-link jammer attempts to inject noise or some other signal between two satellites communicating directly with each other. Because it is the most complex and difficult approach to satellite jamming, according to a DOD document, cross-link jamming is considered a lower probability threat than uplink and downlink jamming. Satellite services have been disrupted or denied as a result of system vulnerabilities. Below is a list of satellite-related incidents that have been publicly reported in which services were interrupted unintentionally or intentionally because of satellites’ vulnerabilities to jamming and equipment failure:
In April 1986, an insider, working alone under the name “Captain Midnight” at a commercial satellite transmission center in central Florida, succeeded in disrupting a cable network’s eastern uplink feed to the Galaxy I satellite. Although this event was a minor annoyance, it had the potential for disrupting services to satellite users.
Starting in 1995, MED-TV, a Kurdish satellite channel, was intentionally jammed (and eventually had its license revoked) because its broadcasts promoted terrorism and violence.
In 1997, while a GPS transmitter was being tested on the ground, it unintentionally interfered with the GPS receivers of a commercial aircraft in the area. The plane temporarily lost all of its GPS information.
In 1997, Indonesia intentionally interfered with and denied the services of a commercial satellite belonging to the South Pacific island kingdom of Tonga because of a satellite orbital slot dispute.
In 1998, the failure of PANAMSAT’s Galaxy IV satellite, attributable to an on-board processor anomaly, disabled 80 to 90 percent of 45 million pagers across the United States for 2 to 4 days, leaving approximately 70 percent of a major oil company’s customers without the ability to pay for services at the pump.
Recognizing that our nation’s critical infrastructures, including telecommunications, energy, banking and finance, transportation, and satellites, are the foundation of our economy, national security, and quality of life, in October 1997 the President’s Commission on Critical Infrastructure Protection issued a report recommending several measures to achieve a higher level of protection of critical infrastructures. These measures included industry cooperation and information sharing, the creation of a national organization structure, a revised program of research and development, a broad program of awareness and education, and reconsideration of laws related to infrastructure protection. The report also described the potentially devastating implications of poor information security from a national perspective.
The report stated that a comprehensive effort would need to “include a system of surveillance, assessment, early warning, and response mechanisms to mitigate the potential for cyber threats.” Presidential Decision Directive (PDD) 63, issued in 1998 to improve the federal government’s approach to critical infrastructure protection (CIP), describes a strategy for cooperative efforts by government and the private sector to protect critical computer-dependent operations. The directive called on the federal government to serve as a model of how infrastructure assurance is best achieved, and it designated lead agencies to work with private-sector and government entities. To accomplish its goals, PDD 63 designated and established organizations to provide central coordination and support, including the Critical Infrastructure Assurance Office (CIAO), an interagency office that is housed in the Department of Commerce, which was established to develop a national plan for CIP on the basis of infrastructure plans developed by the private sector and federal agencies; and the National Infrastructure Protection Center, an organization within the FBI, which was expanded to address national-level threat assessment, warning, vulnerability, and law enforcement investigation and response. To ensure coverage of critical sectors, PDD 63 also identified eight private-sector infrastructures and five special functions; information and communication is one of the eight infrastructures identified. Further, the directive designated lead federal agencies to work with the private-sector entities. For example, Commerce is the lead agency for the information and communication sector (the responsible organization within Commerce is the National Telecommunications and Information Administration), and the Department of Energy is the lead agency for the electrical power industry. Similarly, for special function areas, DOD is responsible for national defense, and the Department of State is responsible for foreign affairs. To facilitate private-sector participation, PDD 63 also encouraged creation of information sharing and analysis centers (ISACs) that could serve as a mechanism for gathering, analyzing, and appropriately sanitizing and disseminating information to and from infrastructure sectors and the federal government through the FBI’s National Infrastructure Protection Center. Although most of the ISACs are operated by private-sector organizations, the telecommunications ISAC is operated by a government entity, the National Coordinating Center for Telecommunications (NCC), which is part of the National Communications System. In September 2001, we reported that six ISACs within five infrastructures had been established to gather and share information about vulnerabilities, attempted intrusions, and attacks within their respective infrastructure sectors and to meet specific sector objectives. In addition, at that time, we reported that the formation of at least three more ISACs for various infrastructure sectors was being discussed. Figure 2 displays a high-level overview of several organizations with CIP responsibilities, as outlined by PDD 63. The most recent federal cyber CIP guidance was issued in October 2001, when President Bush signed Executive Order 13231, Critical Infrastructure Protection in the Information Age, which continues many PDD 63 activities by focusing on cyber threats to critical infrastructures and creating the President’s Board on CIP to coordinate cyber-related federal efforts.
The Special Advisor to the President for Cyberspace Security chairs the board. In July 2002, the President issued a national strategy for homeland security that identifies 14 industry sectors, including the 8 identified in PDD 63. The additional 6 are agriculture, food, defense industrial base, chemical industry and hazardous materials, postal and shipping, and national monuments and icons. The U.S. national space policy provides goals and guidelines for the U.S. space program, including the use of commercial satellites. In February 1991, the President issued National Space Policy Directive 3, which requires U.S. government agencies to use commercially available space products and services to the fullest extent feasible. Presidential Decision Directive 49, dated September 19, 1996, provides goals for the U.S. space program and establishes space guidelines. For example, a guideline regarding the commercial space industry stated that U.S. government agencies shall purchase commercially available space goods and services to the fullest extent feasible, and that, except for reasons of national security or public safety, they shall not conduct activities with commercial applications that preclude or deter commercial space activities. Neither National Space Policy Directive 3 nor PDD 49 specifically addresses the security of satellite systems used by federal agencies. However, PDD 49 states that critical capabilities necessary for executing space missions must be ensured. Security of satellite systems has been addressed in policy documents issued by the National Security Telecommunications and Information Systems Security Committee (recently renamed the Committee on National Security Systems). The initial policy was set forth in National Policy on Application of Communications Security to U.S. Civil and Commercial Space Systems, National Telecommunications and Information Systems Security Policy (NTISSP) No. 1 (June 17, 1985), which governed the protection of command and control uplinks for government-used satellites other than military. This policy, which applies to space systems launched 5 years or more after the policy date (June 17, 1985), limits government and government contractor use of U.S. civil and commercial satellites to those systems using accepted techniques to protect the command and control uplinks. In January 2001, a new policy governing satellite system security was issued, superseding NTISSP No. 1: National Information Assurance (IA) Policy for U.S. Space Systems, National Security Telecommunications and Information Systems Security Policy (NSTISSP) No. 12. NSTISSP No. 12, which focuses on systems used for U.S. national security information, aims to ensure that information assurance is factored into "the planning, design, launch, sustained operation, and deactivation of federal and commercial space systems used to collect, generate, process, store, display, or transmit and receive such information." The policy also includes a provision addressing commercial imagery satellites that may be used to satisfy national security requirements during periods of conflict or war. The policy states that approved U.S. cryptographies shall be used to provide confidentiality for (1) command and control uplinks, (2) data links that transmit national security information between the ground and the space platforms, (3) cross-links between space platforms, and (4) downlinks from space platforms to mission ground or processing centers.
A range of security techniques is available for protecting satellite systems: for example, using encryption on TT&C and data links, using robust parts on the satellites, and applying physical and cyber security controls at the ground stations. The application of these techniques varies across federal agencies and the private sector. Commercial satellite service providers typically use some of these security techniques to meet most of their customers' security requirements, and they base their decisions on business objectives. Generally, the military applies more stringent security techniques to its satellites than do civilian agencies or the private sector. Table 4 provides an overview of security techniques by satellite system component. Techniques to protect satellite links include the use of encryption, high-power radio frequency (RF) uplinks, spread spectrum communications, and a digital interface unique to each satellite. Commercial satellite service providers, federal satellite owners and operators, and customers stated that they typically use at least one of these techniques. Usually, only the military uses spread spectrum techniques. Both TT&C and data links can be protected by encryption: generally, for TT&C links, the tracking and control uplink is encrypted, while the telemetry downlink is not. Encryption is the transformation of ordinary data (commonly referred to as plaintext) into a coded form (ciphertext) and back into plaintext, using a mathematical process called an algorithm. Encryption can be used on data to (1) hide information content, (2) prevent undetected modification, and (3) prevent unauthorized use. Different levels of encryption provide different levels of protection, including encryption approved by the National Security Agency (NSA) that is used for national security information. NSTISSP No. 12 requires approved U.S. cryptographies on TT&C and data links for U.S. space systems transmitting national security information. For satellite systems transmitting non–national-security information, no policy requires that the links be secured, but the satellite service providers and federal satellite owners and operators included in our review stated that they protect tracking and control uplinks with encryption. However, NSA officials stated that not all commercial providers' tracking and control uplinks are encrypted. Concerning the data links, customers are responsible for determining whether or not they are encrypted. Most commercial satellite systems are designed for "open access," meaning that a transmitted signal is broadcast universally and unprotected. A second security technique for links is the use of high-power RF uplinks: that is, a large antenna used to send a high-power signal from the ground station to the satellite. To intentionally interfere with a satellite's links, an attacker would need a large antenna with a powerful radio transmitter (as well as considerable technical knowledge). Two of the commercial providers we talked to stated that they use high-power RF uplinks as part of their satellite security approach. According to one commercial provider, most satellite operators use high-power RF uplinks for TT&C connections to block potential unauthorized users' attempts to interfere with or jam the TT&C uplink. A third technique for protecting links is the use of spread spectrum communication, a technique used by the military and not normally implemented by commercial providers.
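Before turning to spread spectrum, the link-encryption technique described above can be made concrete with a brief sketch. The example below is illustrative only: it uses the Python cryptography package's Fernet recipe as a stand-in for the approved cryptographies discussed in this report, and the command string, key handling, and variable names are hypothetical. Operational TT&C encryption relies on dedicated, approved cryptographic equipment rather than application-level code.

    # Illustrative sketch: symmetric encryption of a hypothetical tracking and
    # control command, using the Python "cryptography" package's Fernet recipe.
    # The command text and key handling are invented for illustration only.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()             # shared secret held by the ground station and the satellite
    cipher = Fernet(key)

    command = b"ADJUST_ATTITUDE +0.5 DEG"   # hypothetical plaintext command
    ciphertext = cipher.encrypt(command)    # what an eavesdropper on the uplink would observe

    # Only a holder of the key can recover the command, and Fernet also
    # authenticates each message, so a modified ciphertext fails to decrypt
    # rather than being accepted as a valid command.
    assert cipher.decrypt(ciphertext) == command

The same idea applies to the data links: whoever holds the keys controls who can read or inject traffic. This is also why key management, which commercial providers later in this report describe as cumbersome, becomes a significant operational burden once links are encrypted.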
Spread spectrum communication is a form of wireless communication in which the frequency of the transmitted signal is deliberately varied and spread over a wide frequency band. This deliberate variation provides security for the links because it increases the power an adversary would need to jam the signals, even if the signals are detected. Spread spectrum communication is used primarily to optimize the efficiency of bandwidth within a frequency range, but it also provides these security benefits. Finally, TT&C links can be protected by the use of a unique digital interface between the ground station and the satellite. According to one commercial satellite service provider, most commercial providers use a unique digital interface with each satellite. Tracking and control instructions sent from the ground station to the satellite are encoded and formatted in a way that is not publicly known. Officials from this provider stated that even if an attacker were successful in hacking one satellite, the unique interface could prevent the attacker from taking control of an entire fleet of satellites. In addition, communication with the digital interface to the tracking and control links requires high transmission power, so that an attacker would need a large, powerful antenna. Satellites can be protected by (1) "hardening," through designs and components that are built to be robust enough to withstand harsh space environments and deliberate attacks, and (2) the use of redundancy—backup systems and components. Commercial satellite service providers and federal civilian owners and operators told us that they do not harden their satellites to the extent that the military does. Commercial providers, federal civilian owners and operators, and the military use varying degrees of redundancy to protect their satellites. As satellites rely increasingly on on-board information processing, hardening is becoming more important as a security technique. Hardening in this context includes physical hardening and electronic-component hardening. Satellites can be hardened against natural environmental conditions and deliberate attack to help ensure their survivability. Most hardening efforts are focused on providing sufficient protection to electronic components in satellites so that they can withstand natural environmental conditions over the expected lifespan of the satellite, which could be nearly 15 years. For hardening against deliberate attacks, some proposed techniques include the use of reflective surfaces, shutters, and nonabsorbing materials. According to commercial satellite providers, commercial satellites are not normally hardened against non-natural nuclear radiation because doing so is too costly. The drawback of hardening is the cost and the manufacturing and operational burdens that it imposes on satellite manufacturers and providers. The use of high-quality space parts is another approach to hardening. Although all parts used in satellites are designed to withstand natural environmental conditions, some very high-quality parts that have undergone rigorous testing and have appreciably higher hardness than standard space parts are also available, including those referred to as class "S" parts. These higher quality space parts cost significantly more than regular space parts, partly because of the rigorous testing procedures and the more limited number of commercial providers manufacturing hardened parts.
According to an industry official, high-quality space parts are used by the military and are generally not used on commercial satellites. Commercial satellite providers stated that they also use redundancy to ensure availability, through backup satellites and redundant features on individual satellites. Backup satellites enable an organization to continue operations if a primary satellite fails. One provider stated that it would rather spend resources on backup satellites than on hardening future satellites or encrypting the TT&C and data links. The provider also expressed the view that a greater number of smaller, less costly satellites provides greater reliability than is provided by few large satellites, because there is more redundancy. According to an industry consulting group, backup satellites, which include in-orbit and on-ground satellites, are part of commercial satellite providers’ security approaches. When backup satellites are used, they are commonly kept in orbit; keeping backup satellites on the ground is possible, but it has the disadvantage that the system cannot immediately continue operations if the primary satellite fails. According to one provider, it could take 4 to 6 months to launch a backup satellite stored on the ground. In addition, individual satellites can be designed to have redundant parts. For example, a commercial satellite provider told us that redundant processors, antennas, control systems, transponders, and other equipment are frequently used to ensure satellite survivability. Another example is that satellites could have two completely separate sets of hardware and two paths for software and information; this is referred to as having an A-side and a B-side. In general, this technique is not used on commercial satellites, according to an industry official. Techniques to protect ground stations include physical controls as well as logical security controls, hardening, and backup ground stations. Ground stations are important because they control the satellite and receive and process data. One provider stated that providing physical security measures to ground stations is important because the greatest security threat to satellite systems exists at that location. Locations of ground stations are usually known and accessible; thus, they require physical security controls such as fencing, guards, and internal security. One provider emphasized the importance of performing background checks on employees. Civilian agencies also stated that they protected ground stations through various physical security controls: ground stations are fenced, guarded, and secured inside with access control devices, such as key cards. The commercial satellite service providers included in our review stated that they did not protect their ground stations through hardening; this technique is primarily used by the military. Similarly, most civilian agencies we talked to do not harden their ground stations. A ground station would be considered hardened if it had protective measures to enable it to withstand destructive forces such as explosions, natural disasters, or ionizing radiation. Commercial satellite providers and federal agency satellite owners and operators also may maintain off-line or fully redundant ground stations to ensure availability, which can be used if the primary ground station is disrupted or destroyed. Off-line backup ground stations may not be staffed or managed by the same company, or on a full-time basis. 
In addition, off- line backup ground stations are not necessarily designed for long-term control of satellites. On the other hand, one commercial service provider stated that it maintained fully redundant, co-primary, geographically separated ground stations that are fully staffed with trained operators, gated with restricted access, and capable of long-term uninterruptible power. In addition, these ground stations periodically alternated which satellites they were responsible for as a training exercise. They also operated 24 hours a day, 7 days a week, and monitored each other. To mitigate the risk associated with using commercial satellites, federal agencies focus on areas within their responsibility and control: data links and communication ground stations. According to federal agency officials, agencies reduce risks associated with using commercial satellites by (1) protecting the data’s authentication and confidentiality with encryption, (2) securing the data ground stations with physical security controls and backup sites, and (3) ensuring service availability through redundancy and dedicated services. Federal agencies rely on commercial satellite service providers to provide the security techniques for the TT&C links, satellites, and satellite control stations. However, federal agency officials stated that they were unable to impose specific security requirements on commercial satellite service providers. Further, federal policy governing the security of satellite systems used by agencies is limited because it addresses only those satellites used for national security information, pertains only to techniques associated with the links between ground stations and satellites and between satellites (cross-links), and does not have an enforcement mechanism. Without appropriate governmentwide policy to address the security of all satellite components and of non–national-security information, federal agencies may not, for information with similar sensitivity and criticality, consistently (1) secure data links and communication ground stations or (2) use satellites that have certain security controls that enhance availability. Recent initiatives by the Executive Branch have acknowledged these policy limitations, but we are not aware of specific actions to address them. For critical data, agencies primarily use different types of encryption to reduce the risk of unauthorized use or changes. For example, the military services use encryption to protect most data communicated over satellites—either commercially owned or military. DOD officials stated that the military services use the strongest encryption algorithms available from the NSA for the most sensitive information—national security information. For non–national-security information, the military services use less strong encryption algorithms, according to DOD officials. The National Aeronautics and Space Administration (NASA) also uses NSA-provided encryption for critical operations, such as human mission communications (that is, for space shuttle missions). Using NSA encryption requires encryption and decryption hardware at the data’s source and destination, respectively. The use of this hardware requires agencies and satellite service providers to apply special physical protection procedures—such as restricting access to the equipment and allowing no access by foreign nationals. For the next generation of government-owned weather satellites, the National Oceanic and Atmospheric Administration (NOAA) and the U.S. 
military plan to use an NSA-approved commercial encryption package that will avoid the need for special equipment and allow them to restrict the data to authorized users with user IDs and passwords. In addition, NOAA will be able to encrypt broadcast weather data over particular regions of the world. According to NASA and NOAA officials, some agency data do not require protection because the risk of unauthorized use or changes is not significant or because the information is intended to be available to a broad audience. For example, NASA uses satellites to provide large bandwidth to transmit scientific data from remote locations. According to NASA officials, the agency does not protect the transmission of these data because they are considered academic in nature and low risk. In addition, the Federal Aviation Administration (FAA) does not encrypt links between control centers or between control centers and aircraft, because the data on these links go from specific air traffic control centers to specific aircraft. According to FAA officials, if the transmissions were required to be encrypted, every aircraft would have to acquire costly decryption equipment. Further, according to National Weather Service officials, the service does not protect the weather data transmitted over commercial satellites because the service considers it important to make this information widely available not only to its sites but also to government agencies, commercial partners, universities, and others with the appropriate equipment. Federal agencies also control the security of the data ground stations that send and receive data over satellites. To protect these ground stations, federal officials stated that they use physical security techniques, such as those discussed earlier. They protect their facilities and equipment from unintentional and intentional threats (such as wind, snow, and vandalism). For example, according to FAA officials, in certain locations, FAA has hardened remote satellite ground stations against high wind and cold weather conditions. In addition, NOAA officials stated that many of their antennas are hurricane protected. Further, federal officials stated that they perform background checks on personnel. NOAA officials stated that they perform background checks on satellite technicians to the secret clearance level. Federal officials also stated that their ground stations are further protected because they are located on large, protected federal facilities. For example, military ground stations can be located on protected U.S. or allied military bases. Also, National Weather Service officials stated that the service’s primary communications uplink is located on a highly secured federal site. Further, according to DOD officials, personnel are expected to protect the satellite equipment provided to them in the field. Agencies also had backup communications sites that were geographically separated, including being on different power grids. For example, according to an official, the National Weather Service’s planned backup communications uplink site will be geographically separated from the primary site and will be on a secured federal site. Federal agencies also reduce the risk associated with using commercial satellites by having redundant telecommunications capabilities. For example, for the program that provides Alaska’s air traffic control, FAA relies on two satellites to provide backup capacity for each other. 
In addition to this redundancy, FAA has requested its commercial satellite service provider to preferentially provide services to FAA’s Alaska air traffic control system over other customers carried on the same satellites. Another FAA program provides primary communications capabilities in remote locations and has redundant satellite capacity that can be used if the primary satellite fails. The National Weather Service is another example. The service uses redundancy to ensure the availability of satellite services that broadcast weather data to its 160 locations by contracting for priority services that include guarantees of additional transponders or, if the satellite fails, of services on other satellites. In addition, the service plans to own and operate a backup communications center that is geographically separated from the primary site. The service performs monthly tests of the backup site’s ability to provide the communications uplink to the commercial satellites. Federal agencies rely on the commercial satellite service provider’s security techniques for the TT&C links, satellites, and satellite control ground stations. Figure 3 graphically depicts the areas not controlled by federal agencies. To mitigate the risk associated with not controlling aspects of commercial satellite security other than protecting the data links and communications ground stations, federal agencies attempt to specify availability and reliability requirements, but they acknowledge having had limited influence over security techniques employed by commercial satellite service providers. Federal officials stated that they are usually constrained by the availability and reliability levels that can be provided by their telecommunications service providers. For example, for one program, an FAA contract requires 99.7 percent availability in recognition of the satellite service provider’s limitations, though the agency typically receives 99.8 percent. However, FAA would prefer 99.999 percent availability on this program’s satellite communications, which is similar to the reliability level being received from terrestrial networks that FAA uses where available. According to one FAA official, greater satellite reliability could be gained by having multiple satellite service providers furnish communications over the same regions, but this approach is too costly. Although maintaining established or contracted reliability levels generally requires that service providers maintain some level of security, federal officials stated that their agencies cannot usually require commercial satellite service providers to use specific security techniques. Commercial satellite service providers have established operational procedures, including security techniques, some of which, according to officials, cannot be easily changed. For example, once a satellite is launched, additional hardening or encryption of the TT&C link is difficult, if not impossible. Some service providers offer the capability to encrypt the command uplinks. According to FAA officials, FAA is in the process of performing risk assessments, in compliance with its own information systems security policies, on the commercial services (including satellite services) that it acquires. Based on these risk assessments, FAA officials plan to accredit and certify the security of the agency’s program that relies on commercial satellites. 
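To put the availability figures cited above in perspective, the short calculation below converts contracted availability percentages into expected downtime per year. The 99.7, 99.8, and 99.999 percent figures come from the FAA example; the conversion itself is simple arithmetic and is shown here only as an illustration.

    # Convert an availability percentage into expected downtime per year.
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours

    for availability in (99.7, 99.8, 99.999):
        downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
        print(f"{availability}% availability allows about {downtime_hours:.1f} hours "
              f"({downtime_hours * 60:.0f} minutes) of downtime per year")

    # 99.7 percent   -> roughly 26 hours of downtime per year
    # 99.8 percent   -> roughly 18 hours of downtime per year
    # 99.999 percent -> roughly 5 minutes of downtime per year

The difference between roughly a day of outage per year and a few minutes per year helps explain why FAA would prefer terrestrial-grade reliability for safety-critical communications, and why redundant satellites, transponders, or providers are the practical fallback when a single provider cannot contract to that level.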
Federal policy governing agencies’ actions regarding the security of commercial satellite systems is limited, in that it (1) pertains only to satellites used for national security purposes, (2) addresses security techniques associated with links only, and (3) does not have an enforcement mechanism for ensuring compliance. Although the Executive Branch has recently acknowledged these policy limitations, we are not aware of specific actions to address them. NSTISSP No. 12, the current policy governing satellite system security, applies only to U.S. space systems (U.S. government-owned or commercially owned and operated space systems) that are used for national security information and to imagery satellites that are or could be used for national security purposes during periods of conflict or war. It does not apply to systems that process sensitive, non–national-security information. Issued by the National Security Telecommunications and Information Systems Security Committee (now the Committee on National Security Systems (CNSS)), NSTISSP No. 12 has as its primary objective “to ensure that information assurance is factored into the planning, design, launch, sustained operation, and deactivation of U.S. space systems used to collect, generate, process, store, display, or transmit/receive national security information, as well as any supporting or related national security systems.” NSTISSP No. 12 also suggests that federal agencies may want to consider applying the policy’s information assurance requirements to those space systems that are essential to the conduct of agencies’ unclassified missions, or to the operation and maintenance of critical infrastructures. In addition to having a focus only on national security, the policy is further limited in that it addresses security techniques only for the links. It does not include physical security requirements for the satellites or ground stations. Specifically, for satellite systems to which it applies, NSTISSP No. 12 states that approved U.S. cryptographies shall be used to provide confidentiality for the (1) command and control uplinks, (2) data links that transmit national security information between the ground and the space platforms, (3) cross-links between space platforms, and (4) downlinks from space platforms to mission ground or processing centers. Also, there is no enforcement mechanism to ensure agency compliance with the policy. According to one NSA official on the CNSS support staff, enforcement of such policies has always been a problem, because no one has the authority to force agencies’ compliance with them. According to some agency officials, agencies typically do not test their service providers’ implementation of security procedures. According to the federal and commercial officials involved in our study, no commercial satellite is currently fully compliant with NSTISSP No. 12, and gaining support to build compliant systems would be difficult. According to commercial satellite industry officials, there is no business case for voluntarily following the NSTISSP No. 12 requirements and implementing them in the satellites and ground stations, including networks that are currently being developed. Commercial satellite service providers also raised concerns about the impact of NSTISSP No. 12 on their future commercial satellite systems. 
Several officials stated that if compliance were required, it would significantly increase the complexity of managing the satellites, because encryption key management is cumbersome, and appropriately controlling access to the hardware is difficult in global companies that have many foreign nationals. Also, commercial satellite service providers stated that encrypting the TT&C links could increase the difficulty of troubleshooting, for example, because the time it takes to encrypt and then decrypt a command could become significant when a TT&C problem arises. Other issues raised that make NSTISSP No. 12 difficult to implement include the following: Some satellite service providers view compliance with it as not necessary for selling services to the government, since in the past agencies have used satellites that did not comply with prior security policy. For example, DOD has contracted for services on satellites that were not compliant with the previous and existing policy for various reasons. However, at times, noncompliant satellites have been DOD’s only option. Commercial clients will likely be unwilling to pay the additional cost associated with higher levels of encryption. Significant costs would include licensing agreements and redesigning hardware for new encryption technologies. Satellite industry officials stated that their experience shows that encryption does not really provide much greater security than other techniques that protect TT&C and data links. Notwithstanding the above issues, in response to the policy’s limitations, DOD officials from the Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence stated that the department had started drafting a policy that would require all commercial satellite systems used by DOD to meet NSTISSP No. 12 requirements. This draft policy includes a waiver process requiring prior approval before any satellite system could be used that did not meet the security requirements. If approved, this policy would apply only to DOD. DOD officials are anticipating that this policy will be approved by the end of 2002. In addition to DOD’s efforts, a CNSS official stated that a draft policy was developed to address the lack of national policy or guidance for the assurance of non–national-security information. Although this policy was broad in scope, covering many aspects of information assurance, this official stated that satellite security could be included in its scope. However, this official also stated that the CNSS’s efforts ended in April 2002 when it sent the draft policy to the Director of the Office of Management and Budget (OMB) for consideration, because the CNSS lacks authority in the area of non–national-security information. In transmitting the draft policy to the Director, OMB, the CNSS Chair encouraged the development of this policy as a first step in establishing a national policy addressing the protection of information technology systems that process sensitive homeland security information, as well as information associated with the operation of critical infrastructures. According to an OMB official, the draft policy is valuable input for future policy decisions related to protecting government information. Recognizing that space activities are indispensable to our national security and economic vitality, on May 8, 2002, the President’s National Security Advisor sent a memorandum to top cabinet officials stating that she plans to recommend that the White House initiate a review of U.S. 
space policies that have been in place since 1996. To date, we are not aware of specific actions taken in response to the draft policy sent to OMB and the National Security Advisor’s memorandum. Without appropriate governmentwide policy to address the security of all satellite components and of non–national-security information, federal agencies may not, for information with similar sensitivity and criticality, consistently (1) secure data links and communication ground stations or (2) use satellites that have certain security controls that enhance availability. As a result, federal agencies risk losing needed capabilities in the event of the exploitation of satellite system vulnerabilities. PDD 63 was issued to improve the federal approach to protecting our nation’s critical infrastructures by establishing partnerships between private-sector entities and the federal government. Although this directive addressed the satellite vulnerabilities of GPS and led to a detailed vulnerability assessment, the satellite industry has not received focused attention as part of this national effort. Given the importance of commercial satellites to our nation’s economy, the federal government’s growing reliance on them, and the dependency of many other infrastructures on satellites, not including them in our national CIP approach creates the risk that these critical components of our information and communication infrastructure may not receive needed attention. Both PDD 63 and the report of the President’s Commission on Critical Infrastructure Protection (October 1997) addressed satellite vulnerabilities of the GPS and made several recommendations to the Secretary of Transportation, including to fully evaluate these vulnerabilities and actual and potential sources of interference to the system. In August 2001, the John A. Volpe Transportation Systems Center issued a report that includes an assessment of the vulnerabilities of the GPS; analysis of civilian aviation, maritime, and surface uses; assessment of the ways that users may be affected by short- or long-term GPS outages; and recommendations to minimize the safety and operational impacts of such outages. One overarching finding was that because of the increasing reliance of transportation on GPS, the consequences of loss of the signal could be severe in terms of safety and of environmental and economic damage to the nation. Despite the focused attention on GPS, other aspects of the satellite industry have not received national attention. In PDD 63, commercial satellites were not identified as a critical infrastructure (or as part of one), and thus are not specifically included as part of our nation’s approach to protecting critical infrastructures. Further, PDD 63 does not explicitly include the commercial satellite industry as part of the information and communications infrastructure sector, nor does the newly issued national strategy for homeland security. Although there have been discussions about expanding the coverage of individual sectors (particularly since the events of September 11, 2001), National Telecommunications and Information Administration (NTIA) officials stated that there are no specific plans to build better partnerships with satellite builders and operators as part of their efforts. CIAO officials also told us that there are no specific plans to include commercial satellite companies in current national efforts. 
However, CIAO added that some of the current infrastructure sectors may address satellites in their plans for industry vulnerability assessments and remediation, since some of these infrastructures rely on satellites for communications or other functions, such as tracking shipments or trucks, or monitoring the condition of equipment. The telecommunications ISAC reiterated NTIA’s and CIAO’s comments that there are no specific plans to include satellites in national CIP efforts. The ISAC for the telecommunications sector, recognized by the President’s National Security Council in January 2000, is the National Coordinating Center for Telecommunications (NCC), which is operated by the National Communications System. As such, NCC is responsible for facilitating the exchange of information among government and industry participants regarding computer-based vulnerability, threat, and intrusion information affecting the telecommunications infrastructure. Also, the center analyzes data received from telecommunications industry members, government, and other sources to avoid or lessen the impact of a crisis affecting the telecommunications infrastructure. Since its recognition as an ISAC, NCC membership has expanded beyond traditional telecommunications entities to include some aerospace companies such as Boeing and Raytheon, but the ISAC does not specifically focus on commercial satellites. Officials from one of the satellite service providers told us that they would endorse an ISAC-like forum to discuss vulnerabilities to commercial and military satellites. In July 2002, we recommended that when developing the strategy to guide federal CIP efforts, the Assistant to the President for National Security Affairs, the Assistant to the President for Homeland Security, and the Special Advisor to the President for Cyberspace Security ensure, among other things, that the strategy includes all relevant sectors and defines the key federal agencies’ roles and responsibilities associated with each of these sectors. Given the importance of satellites to the national economy, the federal government’s growing reliance on them, and the many threats that face them, failure to explicitly include satellites in the national approach to CIP leaves a critical aspect of the national infrastructure without focused attention. Commercial satellite service providers use a combination of techniques to protect their systems from unauthorized use and disruption, including hardware on satellites, physical and logical controls at ground stations, and encryption of the links. Although this level of protection may be adequate for many government requirements, commercial satellite systems lack the security features used in national security satellites for protection against deliberate disruption and exploitation. Federal agencies reduce the risk associated with their use of commercial satellites by controlling the satellite components within their responsibility—primarily the data links and communication ground stations. But the satellite service provider is typically responsible for most components—the satellite, TT&C links, and the satellite control ground stations. Because federal agencies rely on commercial satellite service providers for most security features, they also reduce their risk by having redundant capabilities in place. 
However, national satellite protection policy is limited because it pertains only to satellite systems that are used for national security information, addresses only techniques associated with the links, and does not have an enforcement mechanism. Recent initiatives by the Executive Branch have acknowledged these policy limitations, but we are not aware of specific actions taken to address them. Satellites are not specifically identified as part of our nation’s critical infrastructure protection approach, which relies heavily on public-private partnerships to secure our critical infrastructures. As a result, a national forum to gather and share information about industrywide vulnerabilities of the satellite industry does not exist, leaving a national critical infrastructure without focused attention. We recommend that in pursuing the draft policy submitted to OMB for completion and the recommended review of U.S. space policies, the Director of OMB and the Assistant to the President for National Security Affairs review the scope and enforcement of existing security-related space policy and promote the appropriate revisions of existing policies and the development of new policies to ensure that federal agencies appropriately address the concerns involved with the use of commercial satellites, including the sensitivity of information, security techniques, and enforcement mechanisms. Considering the importance of satellites to our national economy, the government’s growing reliance on them, and the threats that face them, we recommend that the Assistant to the President for National Security Affairs, the Assistant to the President for Homeland Security, and the Special Advisor to the President for Cyberspace Security consider recognizing the satellite industry as either a new infrastructure or part of an existing infrastructure. We received written comments on a draft of this report from the Deputy Assistant Secretary of Defense, Command, Control, Communications, Intelligence, Surveillance, and Reconnaissance (Space and Information Technology Programs), Department of Defense; the Chief of the Satellite Communications and Support Division, United States Space Command, Department of Defense; the Chief Financial Officer/Chief Administrative Officer, National Oceanic and Atmospheric Administration, Department of Commerce; and the Associate Deputy Administrator for Institutions, National Aeronautics and Space Administration. The Departments of Defense and Commerce and the National Aeronautics and Space Administration concurred with our findings and recommendations (see apps. II, III, and IV, respectively) and provided technical comments that have been incorporated in the report, as appropriate (some of these technical comments are reproduced in the appendixes). We also received technical oral comments from officials from the Critical Infrastructure Assurance Office, Department of Commerce; Federal Aviation Administration, Department of Transportation; Office of Management and Budget; and United States Secret Service, Department of Treasury; in addition, we received written and oral technical comments from five participating private-sector entities. Comments from all these organizations have been incorporated into the report, as appropriate. We did not receive comments from the Special Advisor to the President for Cyberspace Security. 
As we agreed with your staff, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to other interested congressional committees and the heads of the agencies discussed in this report, as well as the private-sector participants. The report will also be available on GAO’s website at www.gao.gov. If you have any questions about matters discussed in this report, please contact me at (202) 512-3317 or contact Dave Powner, Assistant Director, at (303) 572-7316. We can also be reached by E-mail at daceyr@gao.gov and pownerd@gao.gov, respectively. Contributors to this report include Barbara Collier, Michael Gilmore, Rahul Gupta, Kevin Secrest, Karl Seifert, Hai Tran, and Jim Weidner. Our objectives were to determine (1) what security techniques are available to protect satellite systems from unauthorized use, disruption, or damage; (2) how federal agencies reduce the risks associated with their use of commercial satellite systems; and (3) what federal critical infrastructure protection efforts are being undertaken to address satellite system security through improved government/private-sector cooperation. To accomplish these objectives, we reviewed technical documents, policy documents, and directives, and we interviewed pertinent officials from federal agencies and the private sector involved in manufacturing and operating satellites and providing satellite services. To determine what security techniques are available to protect satellite systems from unauthorized use, disruption, or damage, we reviewed technical documents and policy, such as NSTISSP No. 12 and various other sources, and we interviewed pertinent federal officials from the Department of Defense (DOD); the Federal Aviation Administration (FAA); the National Aeronautics and Space Administration (NASA), including the Goddard and Marshall Space Flight Centers; the National Oceanic and Atmospheric Administration (NOAA); the National Security Agency (NSA); and the Department of Treasury’s United States Secret Service. The DOD organizations whose documentation we reviewed and whose officials we interviewed included the Air Force; the Army; the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence; the Cheyenne Mountain Air Force Station; the Defense Information Systems Agency; the National Security Space Architect; the Navy; and the U.S. Space Command. In addition, we reviewed documentation and interviewed officials from private-sector organizations that manufacture and operate satellite systems, including Intelsat, Lockheed Martin, Loral Space & Communications, Ltd. (Loral Skynet and Loral Space Systems groups), Northrop Grumman TASC, the Satellite Industry Association, and W.L. Pritchard & Co., L.C. We identified these organizations through relevant literature searches, discussions with organizations, and discussions with GAO personnel familiar with the satellite industry. We did not develop an all-inclusive list of security techniques, but we attempted to establish the most commonly used of the security techniques available. 
To determine how federal agencies reduce the risks associated with their use of commercial satellite systems, we identified and reviewed relevant federal policy, including National Security Telecommunications and Information Systems Security Committee policies and applicable federal agency policies, such as the FAA's Information Systems Security Program Handbook. We also reviewed documentation and interviewed federal officials from DOD, FAA, NASA, NSA, and NOAA. In addition, in meetings with commercial service providers holding government contracts, we discussed any special requirements placed on commercial service providers by federal agencies. To determine what federal critical infrastructure protection (CIP) efforts were being undertaken to address satellite system security, we reviewed various orders, directives, and policies, such as Executive Order 13231 and PDD 63. In addition, we interviewed pertinent federal officials from the Critical Infrastructure Assurance Office, National Communications System/National Coordinating Center for Telecommunications, and National Telecommunications and Information Administration. Further, in interviews with commercial service providers, we discussed their involvement in national CIP-related activities. We performed our work in Washington, D.C.; Bedminster, New Jersey; Colorado Springs, Colorado; and Palo Alto, California, from December 2001 through June 2002, in accordance with generally accepted government auditing standards. We did not evaluate the effectiveness of security techniques being used by federal agencies and the private sector, or of the techniques used by federal agencies to reduce the risks associated with their use of commercial satellite systems.

Government and private-sector entities rely on satellites for services such as communication, navigation, remote sensing, imaging, and weather and meteorological support. Disruption of satellite services, whether intentional or not, can have a major adverse economic impact.
Techniques to protect satellite systems from unauthorized use and disruption include the use of robust hardware on satellites, physical security and logical access controls at ground stations, and encryption of the signals for tracking and controlling the satellite and of the data being sent to and from satellites. When using commercial satellites, federal agencies reduce risks by securing the data links and ground stations that send and receive data. However, federal agencies do not control the security of the tracking and control links, satellites, or tracking and control ground stations, which are typically the responsibility of the satellite service provider. It is important to the nation's economy and security to protect against attacks on its computer-dependent critical infrastructures (such as telecommunications, energy, and transportation), many of which are privately owned. In light of the nation's growing reliance on commercial satellites to meet military, civil, and private sector requirements, omitting satellites from the nation's approach to protecting critical infrastructure leaves an important aspect of our nation's infrastructures without focused attention.
Emergency plans for commercial nuclear power plants are intended to protect public health and safety whenever plant accidents cause radiation to be released to the environment. Since the 1979 accident at the Three Mile Island nuclear power plant, significantly more attention has been focused on emergency preparedness. For example, the NRC Authorization Act for fiscal year 1980 established a requirement for off-site emergency planning around nuclear power plants and allowed NRC to issue a nuclear plant operating license only if it determines that there is either (1) a related state or local emergency preparedness plan that provides for responding to accidents at the specific plant and complies with NRC's emergency planning guidelines or (2) a state, local, or facility plan that provides reasonable assurance that public health and safety are not endangered by the plant's operation in the absence of a related state or local emergency preparedness plan. In November 1980, NRC and FEMA published regulations that provided the criteria for radiological emergency plans. The regulations include emergency standards for on- and off-site safety and require that emergency plans be prepared to cover the population within a 10-mile radius of a commercial nuclear power plant. In addition, state plans must address measures necessary to deal with the potential for the ingestion of radioactively contaminated foods and water within a 50-mile radius. NRC and FEMA have supplemented the criteria several times since 1980. For example, in July 1996, the agencies endorsed the prompt evacuation of the public within a 2-mile radius and about 5 miles downwind of the plant, rather than sheltering the public, in the event of a severe accident. FEMA and the affected state and local governments within the 10-mile emergency planning zone conduct exercises at least every 2 years at each nuclear power plant site. In addition, each state with a nuclear power plant must conduct an exercise within the 50-mile zone at least every 6 years. The exercises are to test the integrated capabilities of appropriate state and local government agencies, facility emergency personnel, and others to verify their capability to mobilize and respond if an accident occurs. Generally, before the exercises, FEMA and state officials not involved in them agree on the accident scenarios and the aspects of emergency preparedness that will be tested. In addition, NRC requires plants to conduct exercises of their on-site plans. According to NRC staff, the plants usually conduct their exercises as part of FEMA's biennial exercises. Indian Point 2 is one of the 104 commercial nuclear power plants nationwide licensed to operate. The Indian Point site, which is called the Indian Point Energy Center, has one closed and two operating plants. The other operating plant is referred to as Indian Point 3. Over the years, Consolidated Edison's efforts to improve emergency preparedness at Indian Point 2 were not completely successful, and the company experienced recurring weaknesses in its program, as we reported in July 2001. The four New York counties surrounding the plant made improvements in their emergency response programs but suggested better communication among NRC, FEMA, and nonstate entities in nonemergency situations. Beginning in 1996, NRC identified numerous weaknesses with the emergency preparedness program at Indian Point 2.
NRC found, for example, that Consolidated Edison was not training its emergency response staff in accordance with required procedures, and some individuals had not taken the annual examination and/or participated in a drill or exercise within a 2-year period, as required. In response, Consolidated Edison disciplined the individuals responsible, developed an improved computer-based roster containing the current status of the training requirements for emergency response personnel, and began a process to distribute training modules to those employees before their qualifications expired. NRC relied on Consolidated Edison to take corrective actions for other emergency preparedness problems and weaknesses. However, the company did not correct the weaknesses identified. For example, in 1998 and again in 1999, NRC identified problems with activating the pagers used to alert the plant’s staff about an emergency, as well as other communication weaknesses. In 1999, NRC concluded that Consolidated Edison lacked the ability to detect and correct problems and determine their causes, resulting in weak oversight of the emergency preparedness program. In response, NRC staff said that they met with the company’s managers to specifically discuss and express NRC’s concerns. Similarly, NRC identified emergency preparedness weaknesses when evaluating Indian Point 2’s response to the February 2000 event. For example, NRC found that Consolidated Edison did not activate its emergency operations facilities within the required 60 minutes, primarily because of the complex process used to page the emergency response staff. This problem delayed the on-site response. NRC’s Office of the Inspector General also identified emergency preparedness issues, including the state’s difficulties getting information about the emergency from Consolidated Edison and the fact that English is a second language for many who lived within 10 miles of the plant. The Office of the Inspector General concluded, and NRC agreed, that recurring uncorrected weaknesses at Indian Point 2 had played a role in the company’s response during the February 2000 event. However, NRC concluded that Consolidated Edison had taken the necessary steps to protect public health and safety. Consolidated Edison subsequently evaluated its entire emergency preparedness program to determine the causes of the deficiencies and to develop corrective actions. Consolidated Edison concluded that senior management did not pay sufficient attention to the emergency preparedness program or problems at Indian Point 2 because these problems were not viewed as a high priority warranting close attention and improvement. As a result, emergency preparedness had relatively low visibility, minimal direction, and inadequate resources. The company also found that (1) the emergency response organization had been stagnant, understaffed, poorly equipped, and consistently ineffective; (2) the emergency manager performed collateral and competing duties; and (3) for a time, a contractor held the manager’s position. Furthermore, the professional development and continuing training of the emergency planning staff had been minimal. The company undertook initiatives to address the deficiencies noted. Despite these initiatives, in April 2001, NRC reported that it had found problems similar to those previously identified at Indian Point 2. NRC again found weaknesses in communication and information dissemination. 
It also found that the utility’s training program had not prevented the recurrence of these issues during on-site drills and that its actions to resolve other weaknesses had not been fully effective. NRC said that Consolidated Edison had identified the major issues in its business plan, which, if properly implemented, should improve emergency preparedness at the plant. In commenting on a draft of our July 2001 report, NRC noted that its April 2001 inspection report concluded that Consolidated Edison’s emergency preparedness program would provide reasonable assurance of protecting the public. The need to improve communication between Consolidated Edison and the counties about the extent of the emergency and the potential impact on the public was highlighted during the February 2000 event. At that time, Consolidated Edison reported that a radioactive release had occurred but that it posed no danger to the public. County officials, on the other hand, reported that no release had occurred. This contradictory information led to credibility problems with the media and the public. Before the emergency, the counties did not have a defined process to determine what information they needed and how they would present the information to the public. At the time of the February 2000 event, the Radiological Emergency Data Form that Consolidated Edison used to inform local jurisdictions provided for one of three choices about a release of radioactive materials: (1) no release (above technical specification limits), (2) a release to the atmosphere above technical specification limits, and (3) a release to a body of water (above technical specification limits). In April 2000, Consolidated Edison, in partnership with the state and counties, revised the form to ensure that all affected parties were “speaking with one voice” when providing the media and the public with information. The change to the form provided for one of four choices: (1) no release, (2) a release below federally approved operating limits (technical specifications) and whether it was to the atmosphere or to water, (3) a release above federally approved operating limits and whether to the atmosphere or to water, and (4) an unmonitored release requiring evaluation. The counties had also taken some other actions to improve their radiological emergency programs. For example, all four counties agreed to activate their emergency operation centers at the “alert” level (the second lowest of four NRC classifications). Before the February 2000 event, the counties differed on when they would activate their centers, with one county activating its center at the alert level and the other three counties at the site-area emergency level (the next level above an alert). As a result, once the first county activated its center during the event, the media questioned why the other three counties had not done so. The counties also connected the “Executive Hot Line,” which linked the state, four counties, and governor, to the emergency operations facility at Indian Point 2 to establish and maintain real-time communications during an emergency. In addition to these actions, county officials suggested to us in 2001 that other changes to improve communications among NRC, FEMA, and nonstate entities could be taken. In particular, county officials said that since they are responsible for radiological emergency preparedness for Indian Point 2, NRC and FEMA should communicate directly with them during nonemergency situations. 
Absent these direct communications, the counties were not privy to issues or initiatives that could affect their emergency preparedness programs. NRC staff tried to meet every 5 years with officials from all states that have operating nuclear power plants. NRC staff told us that they met with some states more frequently and that the requests to meet exceeded the agency’s capability. Although NRC’s policy was to meet at the state level, its staff believed that local officials had various options for meeting with NRC. For example, local officials could participate in the meetings held at least every 5 years with the states and could interact with NRC staff during public meetings, including those held annually for all plants. Emergency preparedness officials from the four counties around Indian Point 2 said that they did not believe that public meetings were the appropriate forums for government-to-government interactions. Therefore, the counties suggested that NRC should meet with them at least annually. According to NRC staff, routinely communicating with local officials has resource implications and involves tradeoffs with its other efforts, such as maintaining safety and enhancing the effectiveness and efficiency of operations. However, NRC, at the time of our review, had not assessed the costs and benefits of meeting with local officials nor the impact that such meetings might have. FEMA generally implements its programs through the states and relies on the states to communicate relevant information to local jurisdictions. County officials responsible for emergency preparedness at Indian Point 2 identified instances in which this method of communicating with local jurisdictions had not been effective. For example, both New York State and county officials told us that the February 2000 event identified the need for flexibility in FEMA’s off-site exercises. County officials said they responded to the 2000 event as they would have responded during FEMA’s exercises, which are conducted to the general emergency level (the highest of NRC’s action level classifications). Yet, they noted, the response for an alert like the one that occurred in 2000 is significantly different from the response needed during a general emergency, when a significant amount of radiation would be released from the plant site. State and county officials suggested that it would be more realistic to periodically conduct biennial exercises at the lower alert level, which, they noted (and NRC data confirmed), occur more frequently than a general emergency. In commenting on a draft of our report, FEMA said that the emergency plans for the four New York counties require them to conduct off-site monitoring and dose calculations at the alert level. FEMA officials also noted that the agency’s regulations allow state and local jurisdictions the flexibility to structure the exercise scenarios to spend more time at the alert level and less at the general emergency level. Nevertheless, county officials who participated in the exercises were not aware of the flexibility allowed by FEMA’s regulations, in part because they did not participate in developing the exercise scenarios. In reviewing NRC’s reports on its on-site inspections and evaluations of the plant’s emergency preparedness exercises or drills completed since we issued our 2001 report, we found that the facility’s emergency preparedness program has continued to experience problems or weaknesses. 
For example, NRC reported that, in an emergency exercise conducted last fall, the facility gave out unclear information about the release of radioactive materials, which also happened during the February 2000 event. In addition, NRC reported that several actions to correct previously identified weaknesses had not been completed. For example, NRC noted that the timely and accurate dissemination of information was identified as a weakness in the fall 2002 exercise and had been documented previously in drill critique and condition reports. In addition, in our 2001 report, we noted that NRC’s Office of the Inspector General found that, during the February 2000 event, the Indian Point plant’s technical representatives did not arrive on time at the local counties’ emergency operations centers. To help address this problem, Consolidated Edison said that it would install a videoconferencing system in the centers to enhance communications between the plant and the off-site officials. According to county officials, the videoconferencing system had not been installed as of February 2003. With respect to our 2001 recommendation that NRC and FEMA reassess their practices of primarily communicating with state officials during nonemergency situations, federal and local officials indicated that little has changed since our report. NRC officials told us that they did reassess their policy since our report was issued and determined that no changes were needed. According to FEMA officials, the agency will continue to work with state and local officials to carry out its emergency preparedness program but has not made any changes regarding nonemergency communication with state and local officials. Given this history of inadequate efforts to address weaknesses in Indian Point 2’s emergency preparedness program, we continue to believe that both NRC and the plant owner could benefit from being more vigilant in correcting problems as they are identified. In addition to improving the plant’s program, a better track record in addressing these problems could go a long way in helping alleviate the heightened concerns in the surrounding communities about the plant’s safety and preparedness for an emergency. Similarly, more frequent, direct communication by NRC and FEMA with officials of the surrounding counties could improve local emergency preparedness programs and, in turn, help local officials better communicate with their constituents about the plant’s safety and preparedness for an emergency. On August 1, 2002, the Governor of New York announced that James Lee Witt Associates would conduct a comprehensive and independent review of emergency preparedness around the Indian Point facility and for that portion of New York State in proximity to the Millstone nuclear power plant in Waterford, Connecticut. According to Witt Associates, the review encompassed many related activities that were designed, when taken together, to shed light on whether the jurisdictions’ existing plans and capabilities are sufficient to ensure the safety of the people of the state in the event of an accident at one of the plants, and how the existing plans and capabilities might be improved. According to Witt Associates, it has considered and incorporated public comments on a January 2003 draft of its report and plans to issue the final report this month. We have not evaluated the Witt report or verified the accuracy of its findings and conclusions. 
We did note that the draft report identifies various issues—such as planning inadequacies; expected parental behavior that would compromise school evacuation; difficulties in communications; the use of outdated technologies; problems caused by spontaneous evacuation in a post September 11, 2001, environment; and a limited public education effort—that may warrant consideration at Indian Point and nationwide. The draft Witt report concludes that NRC and FEMA regulations need to be revised and updated. We understand that FEMA agreed, to an extent, in its review of the draft report. According to the agency, the draft report raises a number of issues that should be considered for enhancing the level of preparedness in the communities surrounding the Indian Point facility, such as better public education, more training of off-site responders, and improved emergency communications. In addition, FEMA stated that some of these issues should be evaluated for their applicability nationwide. However, FEMA also said that a number of the issues raised in the draft report were not supported by its own exercise evaluations, plan reviews, and knowledge of the emergency preparedness program. According to NRC, the draft report gives “undue weight” to the impact of a terrorist attack. The agency said that it saw no difference between emergency plans for releases caused by terrorist acts and those caused by equipment malfunctions. | After the September 11, 2001, terrorist attacks, emergency preparedness at nuclear power plants has become of heightened concern. Currently, 104 commercial nuclear power plants operate at 64 sites in 32 states and provide about 20 percent of the nation's electricity. In July 2001, GAO reported on emergency preparedness at the Indian Point 2 nuclear power plant in New York State. This testimony discusses GAO's findings and recommendations in that report and the progress the plant, the Nuclear Regulatory Commission (NRC), and the Federal Emergency Management Agency (FEMA) have made in addressing these problems. GAO also provides its thoughts on the findings of a soon-to-be-issued report (the Witt report) on emergency preparedness at Indian Point and the Millstone nuclear power plant in Connecticut, and the implications of that report for plants nationwide. Since 2001, the Entergy Corporation has assumed ownership of the Indian Point 2 plant from the Consolidated Edison Company of New York (ConEd). In 2001, GAO reported that, over the years, NRC had identified a number of emergency preparedness weaknesses at Indian Point 2 that had gone largely uncorrected. ConEd had some corrective actions underway before a 2000 event raised the possibility of a leak of radioactively contaminated water into the environment. ConEd took other actions to address problems during this event. According to NRC, more than a year later, the plant still had problems similar to those previously identified--particularly in the pager system for activating emergency personnel. However, NRC, in commenting on a draft of GAO's report, stated that ConEd's emergency preparedness program could protect the public. Four counties responsible for responding to a radiological emergency at Indian Point 2 had, with the state and ConEd, developed a new form to better document the nature and seriousness of any radioactive release and thus avoid the confusion that occurred during the February 2000 event. 
Because they are the first responders in any radiological emergency, county officials wanted NRC and FEMA to communicate more with them in nonemergency situations, in addition to communicating through the states. However, NRC and FEMA primarily rely on the states to communicate with local jurisdictions. Since GAO's 2001 report, NRC has found that emergency preparedness weaknesses have continued. For example, NRC reported that, during an emergency exercise in the fall of 2002, the facility gave out unclear information about the release of radioactive materials, which had also happened during the February 2000 event. Similarly, in terms of communicating with the surrounding jurisdictions, little has changed, according to county officials. County officials told GAO that a videoconference system--promised to ensure prompt meetings and better communication between the plant's technical representatives and the counties--had not been installed. In addition, NRC and FEMA continue to work primarily with the states in nonemergency situations. Although they note that there are avenues for public participation, none of these is exclusively for the county governments. GAO did not evaluate the draft Witt report or verify the accuracy of its findings. The draft Witt report is a much larger, more technical assessment than the 2001 GAO report. While both reports point out difficulties in communications and planning inadequacies, the draft Witt report concludes that the current radiological response system and capabilities are not adequate to protect the public from an unacceptable dose of radiation in the event of a release from Indian Point, especially if the release is faster or larger than the release for which the programs are typically designed. GAO is aware that, in commenting on a draft of the Witt report, FEMA disagreed with some of the issues raised but said the report highlights several issues worth considering to improve emergency preparedness in the communities around Indian Point and nationwide. NRC concluded that the draft report gives "undue weight" to the impact of a terrorist attack. |
When the air quality problem occurred at NIEHS in 1981, far less was known about indoor air pollution than is known today, and there was a strong emphasis on energy conservation. As a result of the emphasis on energy conservation at that time, building engineers at facilities across the country had reduced the air exchange rate of air handling systems and initiated other conservation measures. NIEHS began moving employees into its new facility at Research Triangle Park, North Carolina, on April 11, 1981. The facility was constructed in five modules, each with its own air handling system. Modules A and B were administrative spaces; modules C, D, and E were laboratories. According to NIEHS officials, the laboratory modules required their air handling systems to make a 100-percent exchange of the air, whereas the administrative modules had varying amounts of fresh air added, depending on the outside temperature. Shortly after moving into the new facility, some employees in module A began complaining of respiratory problems and eye and throat irritation. Most of the complaints came from the second floor of module A, where most of the administrative employees had office space. One employee went to the hospital on April 20, 1981, complaining of respiratory problems, and a subsequent worker’s compensation claim attributed the illness to her work environment. According to the U.S. Consumer Product Safety Commission, formaldehyde is normally present at low levels, usually less than 0.03 ppm, in both outdoor and indoor air. Moreover, homes or offices furnished with products that release formaldehyde into the air can have levels of more than 0.03 ppm. The Occupational Safety and Health Administration’s (OSHA) occupational safety standard for formaldehyde in 1981 was 3.0 ppm; the current standard is 0.75 ppm. Indoor formaldehyde levels can vary greatly, depending on the type of building materials and furnishings used, the length of time that these materials have had to off-gas, the temperature and humidity, and the amount of fresh air brought into the building. Indoor formaldehyde levels can be reduced by using materials that contain less formaldehyde, airing the materials out before allowing employees into the space, and increasing the amount of fresh air brought into the building. Although formaldehyde levels can also vary with the temperature and humidity, these factors are controlled in an occupied building and, thus, may not have much effect. In response to requests from your office, two letter reports were issued, one by NIEHS and the other by the Deputy IG of the Department of Health and Human Services, addressing issues involving complaints by employees that they may have become ill after being exposed to the air in the new facility. NIEHS’ report, dated March 31, 1997, addresses the events and health effects that may have been caused by exposure to formaldehyde when the new facility was first occupied in April 1981. The IG’s letter report, dated August 15, 1997, addresses NIEHS’ (1) grievance procedures and treatment of employees, (2) compliance with appropriate policies and procedures regarding employee complaints, and (3) venting and other practices to ensure proper ventilation before the facility was occupied. NIEHS does not have data showing what the air quality was inside its new facility when employees began moving into it on April 11, 1981, or during the first 5.5 months that the building was occupied. 
NIEHS officials said that such monitoring for airborne contaminants was not common practice at the time. They also said that during these 5.5 months, the air handling system was adjusted to improve the air distribution in module A to help alleviate respiratory problems that some employees were experiencing because it was not immediately recognized that indoor air contaminants could be originating from within the space. In response to the concerns of some employees, NIEHS contracted with the School of Public Health at the University of North Carolina to monitor the air throughout modules A and B and to analyze the results to determine the quality of the air. The initial testing began in September 1981. At our request, an indoor air expert at the Environmental Protection Agency (EPA) extrapolated the range of possible formaldehyde levels in module A when the employees first moved into the space in April 1981. He concluded that those levels were probably higher than the levels measured in September of that year. The initial monitoring, which took place on September 28 and 29, 1981, found that the formaldehyde levels on the second floor of module A ranged from 0.1 to 0.34 ppm—well below OSHA’s standard in effect in 1981. Subsequent monitoring between January 20 and March 1, 1982, by the School of Public Health and others showed formaldehyde levels that were no higher than 0.044 ppm. The monitors were placed on top of desks, in closed wooden bookcases, and in other locations and attached to the clothing of some employees. Formaldehyde levels, however, may have been higher when the employees first moved into the space than when the measurements were taken because research shows that formaldehyde levels in enclosed spaces decrease rapidly during the first few days to several weeks. The contractor also sampled the air in modules A and B for 22 other organic substances and detected minute amounts—less than 0.1 ppm—for 10 of these substances, such as benzene, toluene, and trichloroethane. According to the contractor, the level for each of these substances was well below the standard in effect at the time. An air quality survey done by the National Institute for Occupational Safety and Health for NIEHS in March 1982 reported that the primary source of the formaldehyde was the particle board in the office furnishings. NIEHS officials said that adjustments were made to balance the air flow and introduce more outside air in module A during the first 5.5 months to alleviate respiratory problems that some employees reported. According to the officials, because the air was not being monitored during this period, the levels of formaldehyde that the employees were exposed to are unknown. The officials also said that in the early 1980s, air quality measurements were not usually made when employees first moved into buildings because indoor air quality was generally not recognized as the serious health concern that it is today. We asked the indoor air expert from EPA to use NIEHS’ air monitoring data to extrapolate a range of possible formaldehyde levels in module A when the employees first moved into the space in April 1981. The expert said that the limited amount of data available made it difficult to estimate the possible formaldehyde levels for the period before NIEHS began monitoring the air. 
However, with the available data as input for a formaldehyde decay curve, the expert’s mathematical extrapolation showed that the initial formaldehyde levels probably ranged between 1.2 and 7.5 ppm when the employees first moved into module A—higher than the levels measured in September 1981. In light of his knowledge about formaldehyde off-gassing from building materials and office furnishings, and the variables that can affect the rate of off-gassing, he said he believed that the actual levels were near the lower end of the range and were probably less than 2.0 ppm, which would be below the occupational safety standard that existed in 1981. Moreover, he stated that the formaldehyde levels probably declined quickly during the first few days to several weeks and continued to decline over time. NIEHS officials, however, do not believe that initial formaldehyde levels can be accurately modeled because of the multiple variables that could have affected the concentrations. They said that the lack of reliable information on such variables as the amount of formaldehyde in the materials when manufactured, the temperature and humidity conditions during the period, and the air exchange rates makes any extrapolation results highly suspect and speculative. Given such uncertainties, they believe that the initial formaldehyde levels were probably at the lower end of the extrapolated range because the furnishings had been installed some time before module A was occupied in April 1981. The General Services Administration’s (GSA) guidelines recommend that air handling systems in buildings be tested to determine if they are operating in accordance with specifications. Although the GSA guidelines in effect at the time called for test and balance certifications to be prepared before buildings were occupied, the Health and Human Services Regional Office Facilities Engineering Corps that was responsible for overseeing the building’s construction did not have the certification for module A signed until September 29, 1981, 5.5 months after employees moved into the module. NIEHS officials said that they adjusted the air handling system during the first 5.5 months in an effort to alleviate the employees’ discomfort. However, an April 27, 1982, memorandum from NIEHS’ Health and Safety Manager said that the air handling system could not have been in proper balance on September 29, 1981, as certified, because the agency continued to adjust the system to improve the air flow and the air exchange rate after the certification was signed. A time line summarizing the key events during the first several months of occupancy is in appendix I. The short-term effect of formaldehyde is irritation of the eyes and respiratory tract—in particular the nose and throat and, possibly, the lungs with concentrations as low as 0.41 ppm. Because formaldehyde changes quickly into other compounds when it contacts tissue, other body parts, by and large, are not adversely affected by inhaling formaldehyde. Surveys of the known research show there is no evidence that short-term exposure to formaldehyde affects the musculoskeletal, cardiovascular, immunological, neurological, reproductive, developmental, endocrine, renal, or hepatic systems of the human body, while only “a few . . . vague” gastrointestinal effects have been found. Moreover, the effects it has on the eyes and respiratory tract usually pass quickly once the exposure ends. 
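The expert’s decay-curve calculation is not reproduced here, but the general approach discussed above (fitting later measurements to a decay curve and projecting back to the April 1981 move-in date) can be illustrated with a simplified sketch. The single-exponential form, the day offsets, and the particular pair of readings used below are simplifying assumptions for illustration only; they are not the expert’s actual model or inputs, and real off-gassing typically decays faster in the first weeks than a single exponential implies.

```python
import numpy as np

# Hypothetical sketch of back-extrapolation along a first-order decay curve.
# The two readings, their day offsets from the 4/11/1981 move-in date, and the
# single-exponential form are illustrative assumptions, not the EPA expert's model.

days = np.array([170.0, 322.0])   # approx. days after move-in (9/28/81 and late Feb. 1982)
ppm = np.array([0.34, 0.044])     # higher readings reported for those monitoring periods

slope, intercept = np.polyfit(days, np.log(ppm), 1)  # fit ln(C) = intercept + slope * t
k = -slope                        # decay constant, per day
c0 = np.exp(intercept)            # implied concentration at move-in (day 0)

print(f"decay constant k ~ {k:.4f} per day; implied move-in level ~ {c0:.1f} ppm")
```

With these particular inputs, the two-point fit implies a move-in level of roughly 3 ppm; choosing other readings from the measured ranges shifts the implied level by several parts per million, which helps explain why the expert reported a range (1.2 to 7.5 ppm) rather than a single value.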
Furthermore, predominant research results have found that people with asthma react no differently to formaldehyde exposure than do those without asthma. According to the National Institute for Occupational Safety and Health, short-term exposure to concentrations of 20 ppm of formaldehyde is immediately dangerous to the life and health of humans. Long-term exposure of laboratory animals to formaldehyde at a concentration of 2.0 ppm has not been shown to produce nasal cancer. But at concentrations of 14.1 to 14.3 ppm, studies have shown sharp increases in cancer of the animals’ nasal linings. Studies of long-term exposure have also shown that the occurrence of cancer increases as the concentration of formaldehyde increases. Even though it has not been unequivocally proven that long-term exposure to formaldehyde has the same effect on humans, the results of the tests on animals have raised concerns that it may affect humans. A number of epidemiological studies that examined the incidence of cancer in certain population groups have been done, primarily with groups that have had long-term occupational exposure to formaldehyde, such as morticians and pathologists. These studies have not produced clear evidence that long-term low-level exposure can cause cancer in humans. While many studies have found no or uncertain correlation between formaldehyde and cancer, others have found that the incidence of some cancers increases from exposure to formaldehyde. However, all of the studies that have shown an association had methodological shortcomings. According to The Toxicological Profile for Formaldehyde, “The overall conclusion to be drawn from these and other studies is that there is not a firm relationship between formaldehyde and the induction of cancers in humans.” The three agencies that are responsible for determining whether substances should be categorized as carcinogens—that is, as cancer-causing substances—have placed formaldehyde in an intermediate classification because of the clear evidence that formaldehyde causes cancer in the nasal linings of laboratory animals and the limited evidence from the epidemiological studies of humans. The agencies and their classifications of the effects of formaldehyde on humans are as follows: International Agency for Research on Cancer: Probably carcinogenic to humans; National Toxicology Program: Reasonably anticipated to be a carcinogen; EPA: Probable human carcinogen. EPA did a risk assessment of formaldehyde in 1987 and updated the assessment in 1991. The overall result of the update was that EPA reduced the estimated risk of cancer for humans by a factor of 50 (i.e., EPA decided that the risk of cancer from formaldehyde was not as great as it had originally thought). Much of this reduction occurred because of a change in the way that EPA estimated the effects of exposure to formaldehyde. The earlier method measured the concentration of formaldehyde in the air being breathed, whereas the current method uses a more direct measure of the way that formaldehyde affects tissues. This method estimates the levels of formaldehyde at the site where it most often comes in contact with tissue, such as the nasal lining, by measuring the compounds in the tissue that were produced by the exposure to formaldehyde. In discussing this change, EPA explained that it was desirable to have a complete biological understanding of how cancers were caused by a substance and that this change in method recognized a significant step in that direction. 
However, because it was not yet completely understood whether or how cancers in humans might be caused by formaldehyde, it was still necessary to extrapolate the risks to humans based on data from animal studies. EPA’s decision was significantly influenced by the fact that formaldehyde has been clearly shown to be genotoxic—that is, it causes various kinds of chemical damage and mutations to genetic material—in laboratory microorganisms, tissue culture tests, and some animal tests, which makes it particularly suspected of being a carcinogen. NIEHS’ current management is more aware of the need to have adequate air handling systems in buildings and to better monitor indoor air levels to reduce employees’ exposure to indoor air pollutants. For example, before moving into a recently completed laboratory module at Research Triangle Park, NIEHS initiated a number of health and safety measures to ensure the quality of the module’s indoor air, including improved air handling and monitoring measures and the use of less polluting building materials and furnishings. In addition, according to EPA officials, the manufacturing standards for building materials and office furnishings are more stringent today to ensure that the off-gassing levels of chemicals, such as formaldehyde, are much lower than in past years. NIEHS completed the new module at its Research Triangle Park facility in August 1996. According to NIEHS officials, the project engineer was responsible for keeping track of the building materials used in the construction and furnishing of the module and for ensuring that the materials did not contain excessive levels of pollutants, such as formaldehyde, that would cause indoor air quality problems. NIEHS officials also said that they ensured that the air handling system installed would meet the air exchange rate for the new laboratory space (i.e., 100-percent exchange) recommended by the American Society of Heating, Refrigeration, and Air-Conditioning Engineers, Inc. Before the new module was occupied by employees in 1996, NIEHS conducted several air monitoring tests of all areas of the building to ensure that the air handling system was functioning properly and that any off-gassing of pollutants from the building materials and furnishings was below OSHA’s standards. Even after employees moved into the new module, NIEHS’ Health and Safety Branch continued to perform some air monitoring to ensure that air quality problems did not occur. According to NIEHS officials, these improvements have reduced the number of complaints from employees about the air quality in their work space. NIEHS’ air monitoring procedures for existing space have also changed since the indoor air quality problems occurred in 1981. According to NIEHS officials, current procedures require the Health and Safety Branch to perform an indoor air quality assessment whenever an employee complains about the air flow or air quality, whenever renovations to an area result in the use of new building materials or furnishings, or whenever the building management staff suspects that the air flow or air quality may not be correct. Furthermore, according to NIEHS officials, the air exchange rate recommended by the American Society of Heating, Refrigeration, and Air-Conditioning Engineers, Inc., for administrative space (i.e., 20 cubic feet per minute) is currently being used for the older modules A and B at the facility. 
Also, according to NIEHS officials, some adjustments, in addition to those done as part of routine maintenance, are still being made today as the agency responds to complaints about the indoor air. The officials said they believe that the continued complaints are the result of employees’ heightened awareness of indoor air pollution and not of formaldehyde off-gassing. According to EPA officials, the manufacturing standards for building materials and office furnishings are more stringent today than they were in 1981 to reduce the off-gassing of chemicals such as formaldehyde. As federal agencies became more aware of indoor air pollution problems in the early 1980s, EPA and other agencies worked with the industries that make many of the materials used in office spaces—such as furniture, particleboard and wallboard, and carpet—to reduce the amount of chemicals used in the production of their products. Manufacturers have met these new standards by using less formaldehyde in their products and by using other materials to encase products that contain high levels of pollutants to prevent the off-gassing of these chemicals. In some instances, manufacturers suggest that their products be aired out before they are installed in an office building or that the building be aired out before it is occupied. We provided copies of a draft of this report to the National Institute of Environmental Health Sciences (NIEHS) for review and comment. The agency generally agreed with the information presented but took exception to the section dealing with the mathematical extrapolation showing the probable range of formaldehyde levels when employees first moved into the new building. The agency does not believe that it is possible to accurately model what the formaldehyde levels were in April 1981 because of the multiple variables that could have affected the levels and the lack of reliable information from 1981. While we agree that there are many uncertainties that make modeling formaldehyde levels in April 1981 difficult, enough is known about the various factors to do a simple mathematical extrapolation along a decay curve to show that the possible readings would have been higher than those measured in September 1981. For example, factors such as the type of materials in the building did not change significantly, and the air exchange rate in September should have been higher than in April. These, as well as other physical factors, point to the concentrations of formaldehyde being higher in April than in September 1981, but since monitoring was not done in April, there is no way of knowing exactly how much higher. All of the agency officials we spoke with from NIEHS and EPA agreed that the levels of formaldehyde at the facility in April were higher than in September. Opinions differed, however, as to how much higher the levels were, but there was general agreement that they were likely to have been no higher than 2.0 ppm. NIEHS stated that the initial levels were probably below 2.0 ppm because higher exposures would have caused significant eye irritation in most people and most employees first occupying the space were able to tolerate their indoor environment. We added NIEHS’ views as appropriate. Appendix II contains the full text of the agency’s written comments. Our review included interviews with NIEHS officials, current and former NIEHS employees, and scientists and experts knowledgeable about modeling, air handling, air monitoring, and the exposure to and the effects of formaldehyde. 
We also reviewed available documentation and air monitoring data compiled by NIEHS from September 1981 through March 1982. Because no air quality measurements were taken in the new NIEHS facility during the first 5.5 months that it was occupied, we relied on extrapolations and interviews to determine the most likely quality of the air inside module A when it was first occupied. We asked an EPA scientist, who was identified by the agency as an indoor air expert, to use NIEHS’ air monitoring data from September 28, 1981, through March 1, 1982, to extrapolate the formaldehyde levels when employees first moved into module A. To identify the available research on the health effects of formaldehyde, we reviewed The Toxicological Profile for Formaldehyde (the September 1997 peer-reviewed draft) prepared by the Department of Health and Human Services’ Agency for Toxic Substances and Disease Registry. We also reviewed the April 1987 Assessment of Health Risks to Garment Workers and Certain Home Residents From Exposure to Formaldehyde, prepared by EPA’s Office of Pesticides and Toxic Substances, and the June 1991 update, Formaldehyde Risk Assessment, prepared by EPA’s Office of Toxic Substances. We also reviewed other technical literature on the health effects of formaldehyde. We performed our work from October 1997 through January 1998 in accordance with generally accepted government auditing standards. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies to the appropriate congressional committees; the Secretary, Department of Health and Human Services; and the Director, Office of Management and Budget. We will also make copies available to others on request. Please call me at (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix III.
Appendix I: Time Line of Key Events. 4/11/81, employees moved into the new building; 4/20/81, employee went to the hospital; 5/3/81, employee again went to the hospital; 9/28-29/81, first air samples (0.1 ppm and 0.34 ppm); 9/29/81, test and balance certificate signed (not in balance); 1/20-21/82, air samples (average of 0.04 ppm); 2/11/82, worker’s compensation claim filed by employee; 2/25/82-3/1/82, air samples (up to 0.044 ppm); 3/25-26/82, source determined to be furniture; 4/27/82, memorandum from health and safety manager regarding air handling not being in proper balance in September 1981.
Appendix III: Major Contributors to This Report. William F. McGee, Assistant Director; Joseph L. Turlington, Evaluator-in-Charge; Richard A. Frankel, Technical Adviser; Philip L. Bartholomew, Evaluator; James B. Hayward, Evaluator. 
| Pursuant to a congressional request, GAO provided information on the: (1) quality of air inside the National Institute of Environmental Health Sciences (NIEHS) building when it was occupied in 1981; (2) health effects associated with exposure to formaldehyde; and (3) current management practices at NIEHS for air handling and air monitoring. GAO noted that: (1) NIEHS does not have data showing what the air quality was inside its new facility during the first 5.5 months that the building was occupied; (2) however, in response to some employees' concerns, the agency began monitoring the air in September 1981; (3) the agency found that formaldehyde levels ranged from 0.1 to 0.34 parts per million (ppm), well below the Occupational Safety and Health Administration's safety standard in effect in 1981; (4) officials of the NIEHS said that during the first 5.5 months, they made adjustments to the air handling system to balance the air flow and introduce more outside air to help alleviate the respiratory problems that some employees were experiencing; (5) formaldehyde is a known irritant; (6) short-term exposure to formaldehyde at concentrations as low as 0.41 ppm can irritate the eyes and the respiratory tract; (7) such effects usually pass quickly, however once exposure ends; (8) according to the National Institute for Occupational Safety and Health, short-term exposure to very high concentrations of formaldehyde at levels of 14.1 to 14.3 ppm has produced cancer in the nasal passages of laboratory animals; (9) because it is carcinogenic in animals and is known to damage genetic material in cell cultures, formaldehyde has been classified as a probable human carcinogen; (10) however, examination of epidemiological evidence has not demonstrated a firm relationship between formaldehyde and cancer in humans; (11) the NIEHS' current managers are more aware of the need for adequate air handling systems in buildings and for routinely monitoring indoor air levels to protect employees from exposure to indoor air pollutants than managers were in 1981; (12) for example, prior to a recent move into a new laboratory module at Research Triangle Park, the agency took a number of steps to ensure the quality of the building's indoor air, including improved air handling and monitoring measures; and (13) the manufacturing standards for building materials and office furnishings are more stringent today to ensure that the off-gassing levels of chemicals such as formaldehyde are much lower than in the past years. |
Part of the Mariana Islands Archipelago, the CNMI is a chain of 14 islands in the western Pacific Ocean—just north of Guam and about 3,200 miles west of Hawaii (see fig. 1). The CNMI had a total population of 53,890, according to preliminary results of the CNMI’s 2016 Household, Income, and Expenditures survey. Almost 90 percent of the population (48,200) resided on the island of Saipan, with an additional 6 percent (3,056) on the island of Tinian and 5 percent (2,635) on the island of Rota. The United States took control of the Northern Mariana Islands from Japan during the latter part of World War II. After the war, the U.S. Congress approved a trusteeship agreement making the United States responsible to the United Nations for the administration of the islands. In 1976, the District of the Mariana Islands entered into the Covenant with the United States establishing the island territory’s status as a self-governing commonwealth in political union with the United States. This Covenant grants the CNMI the right of self-governance over internal affairs and grants the United States complete responsibility and authority for matters relating to foreign affairs and defense affecting the CNMI. The Covenant initially made many federal laws applicable to the CNMI, including laws that provide federal services and financial assistance programs. However, the Covenant preserved the CNMI’s exemption from certain federal laws that had previously been inapplicable to the Trust Territory of the Pacific Islands, including certain federal minimum wage provisions and immigration laws, with certain limited exceptions. Under the terms of the Covenant, the federal government has the right to apply federal law in these exempted areas without the consent of the CNMI government. Section 902 of the Covenant provides that the U.S. and CNMI governments will designate special representatives to meet and consider in good faith issues that affect their relationship and to make a report and recommendations. These intermittent discussions between the United States and the CNMI are commonly referred to as 902 Consultations. Several U.S. government programs operate in the CNMI, including programs administered by DHS, DOI, and DOL. DHS has three primary components—U.S. Customs and Border Protection (CBP), U.S. Immigration and Customs Enforcement (ICE), and U.S. Citizenship and Immigration Services (USCIS)—that enforce federal immigration laws and maintain border control in the CNMI. CBP inspects travelers at the Saipan and Rota airports and seaports to determine whether to admit them into the CNMI. ICE enforces federal immigration laws in the CNMI, for example, by identifying, apprehending, detaining, and removing criminal foreign nationals and other foreign nationals that threaten the security of the CNMI and the United States. USCIS processes foreign nationals’ applications for immigration benefits, that is, the ability to live, and in some cases work, in the CNMI permanently or temporarily. DOI’s Office of Insular Affairs coordinates federal policies and provides technical and financial assistance to the CNMI. The Covenant requires DOI to consult regularly with the CNMI on all matters affecting the relationship between the U.S. government and the islands. 
In May 2016, President Obama designated the Assistant Secretary for Insular Affairs as the Special Representative for the United States for the 902 Consultations, a process initiated at the request of the Governor of the CNMI to discuss and make recommendations to Congress on immigration and labor matters affecting the growth potential of the CNMI economy, among other topics. The 902 Consultations resulted in a report to the President in January 2017, which we refer to as the 902 Report. DOL requires employers to fully test the labor market for U.S. workers to ensure that U.S. workers are not adversely affected by the hiring of nonimmigrant and immigrant workers, except where not required by law. DOL also provides grants to the CNMI government supporting Adult, Dislocated Worker, and Youth programs, which include job search assistance, career counseling, and job training. From 1999 through 2015, DOL provided such grants under the Workforce Investment Act of 1998 (WIA) and the Workforce Innovation and Opportunity Act of 2014 (WIOA). In 2007, the minimum wage provisions of the Fair Labor Standards Act of 1938 were applied to the CNMI, requiring the minimum wage in the CNMI to rise incrementally to the federal level in a series of scheduled increases. In July 2007, the CNMI minimum wage increased from $3.05 to $3.55 per hour and then increased by $0.50 annually thereafter. A 2010 law delayed the scheduled minimum wage increase for 1 year, providing for no increase in 2011. On September 30, 2012, the scheduled annual increase raised the CNMI minimum wage to $5.55 per hour. In September 2013, additional legislation canceled the scheduled 2013 and 2015 annual increases. Under current law, the next minimum wage increase will occur on September 30, 2017, and the CNMI will reach the current U.S. minimum wage on September 30, 2018 (see table 1). If the original 2007 law increasing the minimum wage had not been subsequently amended, the minimum wage in the CNMI would have reached the U.S. minimum wage in May 2015. In 2008, the Consolidated Natural Resources Act of 2008 amended the U.S.–CNMI Covenant to apply federal immigration law to the CNMI, following a transition period. Among other things, the act includes several provisions affecting foreign workers during the transition period. To provide for an orderly transition from the CNMI immigration system to the U.S. federal immigration system under the immigration laws of the United States, on September 7, 2011, DHS established, and currently administers, the CW permit program. Under this program, foreign workers are able to obtain, through their employers, nonimmigrant CW-1 status that allows them to work in the CNMI. Dependents of CW-1 nonimmigrants (spouses and minor children) are eligible for dependent of a CNMI-Only transitional worker (CW-2) status, which derives from and depends on the CW-1 worker’s status. In accordance with the Consolidated Natural Resources Act of 2008, DHS, through USCIS, has annually reduced the number of CW-1 permits, and is required to do so until the number reaches zero by the end of a transition period. Since 2011, DHS has annually determined the numerical limitation, terms, and conditions of the CW-1 permits (see table 2). The act was amended in December 2014 to extend the transition period until December 31, 2019, and eliminate the Secretary of Labor’s authority to provide for future extensions of the CW program. 
In April 2010, DOI recommended that Congress consider new legislation permitting guest workers who have lawfully resided in the CNMI for a minimum of 5 years—which DOI estimated at 15,816 individuals—to apply for long-term resident status under the Immigration and Nationality Act. DOI recommended that Congress consider new legislation allowing these workers to apply for one of the following: (1) U.S. citizenship; (2) permanent resident status leading to U.S. citizenship (per the normal provisions of the Immigration and Nationality Act relating to naturalization), with the 5-year minimum residence spent anywhere in the United States or its territories; or (3) permanent resident status leading to U.S. citizenship, with the 5-year minimum residence spent in the CNMI. Additionally, DOI noted that under U.S. immigration law, special status is provided to individuals who are citizens of the freely associated states (Federated States of Micronesia, Republic of the Marshall Islands, and Republic of Palau). Following this model, DOI suggested that new legislation could grant foreign workers a nonimmigrant status, like that negotiated for citizens of the freely associated states, and could allow them to live and work either in the United States and its territories or in the CNMI only. In 2013, the U.S. Senate passed legislation that would have, among other things, established a CNMI-only permanent resident status for aliens who resided in the CNMI as guest workers under CNMI immigration law for at least 5 years before May 8, 2008, and are presently residents under CW-1 status. However, this bill never became law. During the expansion of the CNMI garment and tourism industries prior to 1995, the CNMI economy became dependent on foreign labor, as the CNMI government used its authority over its own immigration policy to bring in large numbers of foreign workers and investors. Consequently, from 1980 to 2000, the CNMI population grew rapidly, but the U.S. citizen share of the population fell to less than half of the CNMI population. Since 2000, the percentage of the CNMI’s population not made up of U.S. citizens or nationals has decreased, from about 56 percent to about 43 percent as of the 2010 decennial census (see fig. 2). Although the garment industry was able to flourish in the CNMI by exporting products to other parts of the United States largely unconstrained by import quotas and duties, several developments in international trade caused the industry to decline dramatically. In January 2005, in accordance with a World Trade Organization 10-year phase-out agreement, the United States eliminated quotas on textile and apparel imports from other textile-producing countries, exposing the CNMI apparel industry’s shipments to the United States to greater competition. Subsequently, the value of CNMI textile exports to the United States dropped from a peak of $1.1 billion in 1998 to $677 million in 2005 and to close to zero in 2010. After a decade of decline, by 2009, almost all of the garment factories had closed, and the CNMI was in its sixth year of a contracting economy and shrinking GDP. As figure 3 shows, a large part of the GDP decline from 2002 to 2009 reflected the declining garment industry, which dominated the CNMI’s manufacturing sector. Since 1990, the CNMI’s tourism market has experienced considerable fluctuation, as shown by the total annual number of visitor arrivals (see fig. 4). 
Total visitor arrivals to the CNMI dropped from a peak of 726,690 in fiscal year 1997 to a low of 338,106 in 2011, a 53 percent decline. Since 2011, however, visitor arrivals have increased by 48 percent, reaching 501,489 in fiscal year 2016. Data from the Marianas Visitors Authority show that the downward trend in Japanese arrivals from 2013 to 2016 was offset by the growth in arrivals from China and South Korea. While eligible Japanese and South Korean visitors enter the CNMI under the U.S. visa waiver program, Chinese visitors are not eligible and are permitted to be temporarily present in the CNMI under DHS’s discretionary parole authority, according to DHS officials. DHS exercises parole authority to allow, on a case-by-case basis, eligible nationals of China to enter the CNMI temporarily as tourists when there is significant public benefit, according to DHS data. From fiscal years 2011 to 2016 the percentage of travelers that arrived at the Saipan airport and were granted discretionary parole increased from about 20 percent to about 50 percent of the total travelers allowed to enter, according to our analysis of CBP data. According to CNMI tax data, overall employment in the CNMI increased by about 8 percent from 2013 through 2015. Foreign workers remain the majority of employed workers in the CNMI. From 2007 to 2015, inflation adjusted average earnings for those who maintained employment in the CNMI also rose by 18 percent. We estimate that approximately 62 percent (15,818 of 25,657) of the CNMI’s wage workers in 2014, assuming they maintained employment, would have been directly affected by the federally mandated 2016 wage increase, which raised CNMI’s minimum wage from $6.05 to $6.55 per hour. Following consecutive annual decreases in the number of employed workers from 2005 to 2013, as garment factory employment numbers fell to zero, CNMI employment started recovering after 2013, according to CNMI tax data. Figure 5 shows the number of employed workers and the number of foreign and domestic workers in the CNMI from 2001 to 2015 based on CNMI tax data. As the figure shows, the number of employed workers increased from the lowest point in 2013 by approximately 8 percent by 2015 (from 23,344 to 25,307). However, the number employed in 2015 (25,307) was still approximately 31 percent less than the number employed in 2007 (36,524). Although the number and percentage of foreign workers have fallen since 2001, foreign workers are still the majority of the CNMI workforce. Of the 25,307 workers in the CNMI in 2015, slightly over half (12,784) were foreign workers, according to CNMI tax data. The number of foreign workers fell from a peak of over 38,500 in 2002 (roughly 76 percent of the employed workers) and was under 13,000 in 2015. In contrast, since 2002, the number of domestic workers has fluctuated year to year, ranging from about 10,500 to about 13,500, but increased by 17 percent from 2013 to 2015. Foreign workers make up a large percentage of certain CNMI industries and occupations. Among industries and occupations with the largest number of CNMI workers, construction and accommodation or food services—or hospitality—had the highest percentage of foreign workers, with 80 percent or more non-U.S. workers, according to data from the CNMI’s 2014 Prevailing Wage Study. In contrast, the public administration industry has the lowest percentage of foreign workers, with about 22 percent. See appendix III for more details. 
Inflation adjusted average earnings for those in the CNMI who maintained employment rose by 18 percent from 2007 to 2015. The increase is attributable to a large increase in earnings from 2012 to 2013, when inflation adjusted average earnings increased from $14,476 to $16,221. Using 2015 prices, the inflation adjusted minimum wage rose by 54 percent from its lowest point in 2006 ($3.93) to its highest point in 2015 ($6.05). (See fig. 6.) According to our analysis of the CNMI’s Department of Commerce data, a majority of CNMI workers made the minimum wage in 2016. On September 30, 2016, the CNMI’s minimum wage increased from $6.05 to $6.55 per hour. We estimate that approximately 62 percent (15,818 of 25,657) of the CNMI’s wage workers in 2014, assuming they maintained employment, would have been directly affected by the federally mandated 2016 wage increase. Since 72 percent of the total foreign workers made less than or equal to $6.55 per hour in 2014, they were more likely to have been directly affected by the 2016 wage increase than domestic workers, with only 41 percent making less than or equal to $6.55. As the minimum wage continues to increase in the CNMI, a growing percentage of wage workers will be directly affected. By the time the minimum wage reaches $7.25 in 2018, approximately 68 percent of CNMI’s wage workers will be directly affected (see table 3). Our analysis also shows that roughly 80 percent of all jobs in the hospitality and construction industries were directly affected by the 2016 increase in the minimum wage. By 2018, approximately 85 percent of all jobs in these two industries will be directly affected by the scheduled minimum wage increase to $7.25. See appendix IV for more details. Some employers we contacted reported that the scheduled increase in the CNMI minimum wage from $6.55 per hour to $7.25 per hour in October 2018 would have little or no impact because they already pay equal or higher wages or offer other benefits. One employer reported that increasing the minimum wage would be good for the economy. Another employer stated that a higher minimum wage would attract more domestic workers to work in the CNMI who would otherwise take jobs in Guam or the U.S. mainland, where wages are higher. Other employers we interviewed expressed concerns about minimum wage hikes because of possible decreases in profits that could require them to downsize. One employer estimated that its costs would increase by almost $1 million annually the next time the minimum wage increases. Another employer told us that when the minimum wage increases, all wage workers—including those making higher wages—receive pay increases, causing profits to further decline. The Hotel Association of the Northern Mariana Islands also expressed concern about the CNMI bill to raise the minimum wage so soon after the September 2016 federally mandated increase, stating that the increase would be difficult to absorb, particularly for small businesses. In 2016, the CNMI legislature considered but did not adopt a bill to raise the local minimum wage to the federal minimum wage of $7.25 per hour 2 years earlier than would occur under the federal schedule of increases. To help attract a U.S. workforce and provide more income for families in the CNMI, the CNMI Governor and leading civic business organizations supported the bill, according to the 902 Report. 
The results of an October 2016 membership survey conducted by the Saipan Chamber of Commerce revealed that only 13 out of 36 Saipan businesses responding to the survey did not support this bill. We analyzed the economic impact of removing all foreign workers with CW-1 permits (or CW-1 workers) from the CNMI’s economy using the most recent GDP information available from calendar year 2015. We determined that the CNMI’s 2015 GDP would decline by 26 to 62 percent with no CW-1 workers, depending on the assumptions made. Demand for CW-1 workers in the CNMI exceeded the available number of CW-1 permits in 2016, while planned hotels, casinos, and other infrastructure projects estimate needing thousands of new employees. The existing CW-1 permits for foreign workers and the local supply of U.S. workers are insufficient to meet this estimated future demand. CNMI employers face multiple challenges in recruiting and retaining U.S. workers, according to several CNMI employers that participated in our discussion groups and semistructured interviews. If all CW-1 workers, or 45 percent of the total workers in 2015, were removed from the CNMI’s labor market, we project a 26 to 62 percent reduction in the CNMI’s 2015 GDP, depending on the assumptions made. To estimate the possible effect of a reduction in the number of workers with CW-1 permits in the CNMI to zero—through the scheduled end of the CW program in 2019—we employed an economic method that enabled us to simulate the effect of a reduction under a number of different assumptions. Available data suggest that predictions based on our model are consistent with the experience of the CNMI during and after the departure of the garment industry, from 2002 through 2015, which also saw a large drop in the number of foreign workers. Predicting the economic effect of a reduction of CW-1 workers is more challenging than predicting the effect of a reduction of domestic workers in general because of several sources of uncertainty, including that (1) the two groups’ economic effects may vary (i.e., they may serve in different types of jobs) and (2) the numbers of domestic workers who enter the CNMI labor market may vary. From 2013 to 2015, the number of domestic workers in the CNMI increased by almost 20 percent. Given these and other uncertainties, we simulated the effects of a reduction in the number of CW-1 workers that allowed us to vary assumptions based on economic literature as well as available data on the CNMI economy. In some simulations, for example, we assumed that domestic and CW-1 workers were perfect substitutes—meaning that domestic workers could easily replace foreign workers in production. In other simulations, we assumed that domestic and CW-1 workers were complements, implying that domestic workers would become less productive as the number of CW-1 workers fell. Our analysis assumed that 45 percent of the workforce was made up of CW-1 workers, based on a combination of CNMI tax data and the CNMI’s 2014 Prevailing Wage Study data. The CNMI’s actual 2015 GDP—the most recent year for which GDP data were available—was $922 million. To understand the economic impact of ending the CW program, we analyzed how removing all CW-1 workers would have changed the CNMI’s actual 2015 GDP. 
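The full simulation model is not reproduced here, but the general structure of such a simulation (drawing an assumed responsiveness of output to employment and applying it to the removal of CW-1 workers) can be illustrated with a simplified sketch. The single elasticity used below, its assumed range, and the uniform draws are placeholders chosen for illustration only, not the model actually used for the estimates reported below; elasticity values below 1 loosely stand in for the case in which domestic and CW-1 workers are substitutes, and values above 1 for the case in which they are complements.

```python
import numpy as np

# Minimal sketch of a constant-elasticity Monte Carlo simulation, assuming the
# output response to a change in total employment is summarized by a single
# elasticity (beta). The beta range and the uniform draws are illustrative
# assumptions, not GAO's actual model.

rng = np.random.default_rng(42)
gdp_2015 = 922.0               # actual CNMI 2015 GDP, millions of dollars
cw_share = 0.45                # CW-1 workers as a share of the 2015 workforce

betas = rng.uniform(0.5, 1.6, size=10_000)         # assumed elasticity range
gdp_no_cw = gdp_2015 * (1.0 - cw_share) ** betas   # simulated GDP with no CW-1 workers
loss = 1.0 - gdp_no_cw / gdp_2015

print(f"simulated GDP decline: min {loss.min():.0%}, "
      f"median {np.median(loss):.0%}, max {loss.max():.0%}")
```

The assumed elasticity range was chosen so that the simulated declines span roughly 26 to 62 percent, the interval reported below; the sketch is intended only to show how uncertainty about substitutability translates into a distribution of GDP outcomes.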
Our economic model and the results of 10,000 simulations show that had there been no CW-1 workers in 2015, there is a 25 percent likelihood that the CNMI’s 2015 GDP would have ranged from $583 million to $680 million, which is 26 to 37 percent lower than the actual value; 50 percent likelihood that it would have ranged from $462 million to $583 million, which is 37 to 50 percent lower than the actual value; and 25 percent likelihood that it would have ranged from $353 million to $462 million, which is 50 to 62 percent lower than the actual value (see fig. 7). Across the full range of probable outcomes, the elimination of the CW program would result in a 26 to 62 percent decline in the CNMI’s 2015 GDP, a relatively large negative effect on the economy. In a separate retrospective economic analysis, using past CNMI GDP and employment data, from 2002 to 2015, we estimated that a 10 percent decline in the number of workers during this period was associated with an 8.3 percent decline in the size of the economy, on average. Applying this factor to an analysis of the CNMI’s current economic situation suggests that a reduction in the number of foreign workers with CW-1 permits to zero—which would be equivalent to reducing the number of total workers by 45 percent, all else unchanged—would lead to a 37 percent contraction in the size of the CNMI economy as measured by GDP. This finding is within the range presented in the simulation model above. The CNMI government stated in the 902 Report that ending the CW program in 2019 would cripple the CNMI economy and dramatically derail economic development. Employers we contacted generally expect the planned termination of the CW program in 2019 to have negative effects. In all four facilitated discussion groups and five of the seven semistructured interviews, employers reported that the planned termination of the CW permit program would have negative effects. A participant in one discussion group reported that he expects his gross business sales to decline by 30 percent. The owner of the only airline that transports passengers to and from Saipan, Tinian, and Rota told us that 12 of his employees will be affected by the 2017 CW-1 cap next summer. Without them, he did not think he would be able to remain in business. Some employers told us that they already have been negatively affected when they were unable to renew CW-1 worker permits. In all four facilitated discussion groups and six of seven semistructured interviews, at least one employer reported that reaching the CW-1 cap in fiscal years 2016 and 2017 had negative effects. For example, in one discussion group, a small business employer reported having lost its only videographer because he was unable to obtain a CW-1 permit, resulting in lost sales. A large employer reported that it would not be able to open at full capacity after learning that 40 workers in one of its units would be affected by the CW-1 cap for fiscal year 2017. The employer also reported having spent more than $30,000 to purchase flight tickets home for 18 CW-1 workers when their permits expired, as well as $20,000 to apply for H-1B visas for some employees. A large employer with which we met individually reported that the biggest hardship it faced was at its restaurants. The employer had to close one of its restaurants for 2 months because of the departure of CW-1 workers unable to renew their permits. 
A participant in our discussion group with Tinian employers reported that several farms had closed on the island because of the lack of CW-1 workers, requiring costly food imports. In addition, the employer reported that Tinian lost its only boat captain in July 2016 because of the cap on the number of CW-1 permits. As a result, all cargo to the island must arrive by plane. The owner of a construction company told us that six of his CW-1 workers were affected by the fiscal year 2017 cap, which will likely cause his business to pay penalties for not completing scheduled projects. He has already downsized his staff from 40 to 6 CW-1 workers. The CNMI economy currently is experiencing growing demand for workers, particularly among occupations in construction and hospitality. Since fiscal year 2013, demand for CW-1 permits has doubled, and in fiscal year 2016, demand exceeded the numerical limit (or cap) on approved CW-1 permits set by DHS. The number of approved CW-1 permits grew from 6,325 in fiscal year 2013 to 13,299 in fiscal year 2016. In 2016, when the cap was set at 12,999, DHS received enough petitions by May 6, 2016, to approve 13,299 CW-1 permits, reaching the cap 5 months prior to the end of the fiscal year. On October 14, 2016, 2 weeks into fiscal year 2017, DHS announced that it had received enough petitions to reach the CW-1 cap and would not accept requests for new fiscal year 2017 permits during the remaining 11 months. In interviews, some employers reported being surprised to learn that the cap had been reached when they sought renewals for existing CW-1 workers. See table 4 for the numerical limit of CW-1 permits and number of permits approved by fiscal year. Based on DHS data on approved CW-1 permits, by country of birth, occupation, and business, from fiscal years 2014 through 2016, the number of permits approved for Chinese nationals increased, the number of permits approved for construction workers increased, and a large number of CW-1 permits were approved for three new businesses. Chinese nationals. In 2016, DHS approved 4,844 CW-1 permits for Chinese workers, increasing from 1,230 in 2015 and 854 in 2014. This represents a change in the source countries of CW-1 workers, with the percentage of workers from the Philippines declining from 65 to 53 percent during this period, while the share from China rose from 9 to 36 percent (see table 5). Construction workers. In 2016, DHS approved 3,443 CW-1 permits for construction workers, increasing from 1,105 in 2015 and 194 in 2014 (see table 6). New businesses. In 2016, DHS approved 3,426 CW-1 permits for three construction businesses, representing 26 percent of all approved permits. Two of these businesses had not previously applied for CW-1 permits. The third business was new in 2015 and was granted only 62 CW-1 permits that year. A key factor in the additional demand for labor in 2016 was the construction of a new casino in Saipan. In August 2014, the CNMI government entered into a casino license agreement with a business to build a phased development project within 8 years with a minimum of 2,004 guest rooms and areas for gaming, food, beverage, retail, and entertainment, among other things. The total investment cost of the project was estimated at $3.14 billion (2014 dollars). The agreement required that construction of the initial gaming facility be completed no later than 36 months from the date of the license, or by August 2017. 
However, in April 2017, the business requested amendments to the license agreement, including an extension of the construction completion and operation start date for the initial gaming facility to August 31, 2018, which was agreed to by the CNMI government. The amendment to the license agreement notes that the business justified these amendments in part based on constraints contained in federal immigration laws in relation to the employment of qualified workers needed to build the facility. See figure 8 for photos showing the initial gaming facility’s development site in Saipan both before and during construction. The firms contracted to build the new casino under construction in Saipan have primarily employed Chinese workers. According to the CNMI government, while CNMI law and regulations require businesses operating in the CNMI to attempt to employ at least 30 percent U.S. workers, the casino operator and construction firms received an exemption from this requirement from the CNMI Department of Labor. The Consolidated Natural Resources Act of 2008 allows CNMI employers to petition for H-2 visas and bring temporary workers, such as construction workers, to the CNMI without counting against the numerical restriction for such visas. However, China is not listed as an eligible country for H-2 visas. Amid the uncertainty of the future availability of foreign labor, the CNMI government has granted zoning permits to planned projects that will require thousands of additional workers. Twenty-two new development projects, including six new hotels or casinos in Saipan and two new hotels or casinos in Tinian, are planned for construction or renovation by 2019. Beyond the construction demand created by these projects, the CNMI’s Bureau of Environmental and Coastal Quality estimates that at least 8,124 employees will be needed to operate the new hotels and casinos. According to data provided by the Environmental Bureau, most of this planned labor demand is for development on the island of Tinian, where two businesses plan to build casino resorts, with an estimated labor demand of 6,359 workers for operations—more than twice the island’s population in 2016. According to the U.S. Department of the Treasury, the existing casino and hotel on Tinian closed in 2015 after having been fined $75 million by the department for violations of the Bank Secrecy Act of 1970. One of the two Tinian developments offers overseas immigration services, including assistance with obtaining employment or investment-based immigration to the United States. We observed a billboard advertisement in Tinian with Chinese writing indicating that by investing in a new development in Tinian, an investor’s family members would all get American green cards. This resort development, whose plans estimate a labor force of 859, has undertaken site preparation, while the other larger resort project, whose plans estimate a labor force of 5,500, had not initiated construction as of December 2016. Currently, the CNMI government does not have a planning agency or process to ensure that planned projects are aligned with the CNMI’s available labor force, according to CNMI officials. In January 2017, a bill was introduced in the CNMI Senate to establish an Office of Planning and Development within the Office of the Governor. 
The current number of unemployed domestic workers is insufficient to replace the existing CW-1 workers or to fill all the nonconstruction jobs that planned development projects are expected to create once their business operations commence. In 2016, 9,856 of the 13,299 CW-1 permits approved by DHS were allocated to workers engaged in nonconstruction-related occupations. When the CW program ends in 2019, available data show that the unemployed domestic workforce, estimated at 2,386 in 2016, will be well below the number of workers needed to replace currently employed CW-1 workers in nonconstruction-related occupations. In addition, the unemployed workforce would fall far short of the demand for additional workers in nonconstruction related occupations needed to support the ongoing operations of planned development projects—currently estimated at 8,124 workers by 2019. Narrowing this gap would require CNMI employers to recruit domestic residents present in the CNMI but not currently in the labor force. Key sources of additional labor force entrants to replace current CW-1 workers or fill new positions are as follows: High school or college graduates. In 2016, CNMI high schools graduated 678 students and the Northern Marianas College graduated 204 students. In addition, a smaller number of students leave high school or the college without a diploma and join the labor force. Domestic residents not in the CNMI labor force. According to the CNMI’s 2016 Health Survey, there are 9,272 U.S. citizens and permanent residents over the age of 16 who are not currently in the labor force. In addition to students, this group consists largely of homemakers, retired workers, seasonal workers in an off-season, the institutionalized, and those doing unpaid family work, according to the census. Overall, the survey found that labor force participation was lower for the population born in the CNMI (57 percent) compared with the overall population (69 percent). Other U.S.-eligible workers. Workers could be recruited from U.S. states, U.S. territories, and the freely associated states (Federated States of Micronesia, Republic of the Marshall Islands, and Republic of Palau). For example, in 2003, 1,909 freely associated state workers were employed in the CNMI as compared with 677 of these workers in 2015, according to CNMI tax data. Moreover, many citizens from the freely associated states migrate to the United States each year, including to nearby Guam. Guam and Hawaii, the closest U.S. areas to the CNMI, both have higher local minimum wages than the CNMI, currently at $8.25 and $9.25 per hour, respectively, according to DOL. Employers in the CNMI are required to attempt to recruit and hire U.S. workers. The CNMI government has a goal that all employers hire at least 30 percent U.S. workers, and employers are generally required to post all job openings to the CNMI Department of Labor’s website. However, the CNMI government can and has granted exemptions to this requirement. From May 8, 2015, to May 27, 2016, seven businesses were granted exemptions, according to data provided by the CNMI Department of Labor. In addition, all employers that apply for CW-1 permits must attest that no qualified U.S. worker is available for the job opening. At least one employer in all four facilitated employer discussion groups and six of seven employers in semistructured interviews reported on efforts to recruit U.S. workers. 
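As a rough illustration of the labor gap described above, the following back-of-the-envelope calculation combines figures cited in this report; treating 2016 graduates as immediately available labor force entrants is a simplifying assumption made only for this sketch.

```python
# Labor gap sketch using figures cited in this report (all values approximate)
cw1_nonconstruction_2016 = 9_856    # CW-1 permits approved for nonconstruction occupations
new_operations_demand_2019 = 8_124  # estimated workers needed to operate planned projects
unemployed_domestic_2016 = 2_386    # estimated unemployed domestic workforce
graduates_2016 = 678 + 204          # 2016 high school and Northern Marianas College graduates

needed = cw1_nonconstruction_2016 + new_operations_demand_2019
available = unemployed_domestic_2016 + graduates_2016
print(f"Workers needed: {needed:,}; readily available entrants: {available:,}; "
      f"shortfall: {needed - available:,}")
```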
One employer told us that by collaborating closely with the local trade institute it had significantly reduced its reliance on foreign workers. Another employer attributed its successful recruitment efforts to an apprenticeship program offered by the CNMI’s Public School System. One employer explained how it had successfully developed a flexible work scheduling approach that allowed it to use part-time high school and college students to staff its facilities so that it had very few CW-1 workers. Several business owners told us that they made efforts to recruit workers from the U.S. mainland, Puerto Rico, or the freely associated states (Federated States of Micronesia, Republic of the Marshall Islands, and Republic of Palau). To identify potential staff, one employer told us that she had entered into a contract with a labor recruiter in Micronesia. However, other CNMI employers reported that they face the following challenges in recruiting and retaining U.S. citizens, among others: Unsatisfactory results of job postings. One employer told us that advertisements posted on the CNMI’s Department of Labor’s website yielded hundreds of unqualified applicants to which the employer must respond individually. An employer looking for workers with a specific license was sent dozens of résumés of applicants lacking such a license. Some of the forwarded résumés had not been updated in 10 years. Another employer continued to receive the résumés of employees who had previously been fired. High costs of recruitment. One employer that recruited nine workers from the U.S. mainland told us that relocation costs were very expensive. Representatives from one company spent more than $1 million to recruit and relocate 120 U.S. workers to Saipan, but half of them left after a typhoon in 2015, according to the 902 Report. Another large employer told us that costs of relocation to Saipan are extremely expensive and cannot be circumvented, particularly for middle management positions. High turnover. The geographical distance and remoteness of the CNMI from Hawaii and the U.S. mainland make it difficult to retain U.S. workers. One employer told us that the U.S. workers successfully recruited from California did not stay with the employer for 3 months because of the long distance from home, among other factors. The four local workers that another employer hired to replace one CW-1 worker did not maintain employment for more than 2 weeks each. Meanwhile, all of the local hires recruited through an apprenticeship program left within 2 years to take higher paying jobs with the CNMI government, according to another employer. A hotel employer told us the turnover rate among workers recruited locally is high because employers in Guam, Hawaii, and other off-island locations offer higher wages. The federal and CNMI governments support programs seeking to address the CNMI’s labor force challenges. These programs include the following: job training funded by employers’ CW-1 vocational education fees that DHS transfers to the CNMI government, CNMI scholarship programs funded by the CNMI government and local license fees for gaming machines, employment and training assistance funded by DOL, and technical assistance funded by DOI. In recent years, on average, DHS transferred about $1.8 million per year in CW-1 vocational education fees and DOL provided about $1.3 million per year to the CNMI for employment and training programs. 
Although scholarship entities in the CNMI provide an average of $1.5 million in financial assistance per year to recipients to attend institutions of higher education or vocational training, according to these entities, from 60 to 90 percent of these recipients default on the terms of their scholarship agreements. In addition, the recently completed 902 Consultations between the U.S. and CNMI governments resulted in several recommendations for congressional and executive actions that seek to alleviate immigration and labor force challenges faced by the CNMI. DHS collects the $150 vocational education fee assessed for each foreign worker on a CW-1 petition and typically transfers the fees to the CNMI government each month. To support vocational education curricula and program development, in fiscal years 2012 through 2016, DHS transferred to the CNMI Treasury about $9.1 million in CW-1 fees (see fig. 9). In fiscal years 2012 through 2016, the CNMI government allocated about $5.8 million of the $9.1 million in CW-1 vocational education fees to three educational institutions (see fig. 10). At present, the CW-1 fees support job training programs at Northern Marianas College and Northern Marianas Trades Institute and in recent years also funded job training provided by CNMI’s Public School System. All three institutions reported using a majority of the CW-1 fees to pay the salaries and benefits of faculty and staff members involved in job training programs. The CNMI legislature generally appropriates the CW-1 funding before it is allocated to recipient entities, according to the CNMI. Of the $9.1 million that DHS transferred to the CNMI during fiscal years 2012 to 2016, about $3.3 million remained available for programming in fiscal year 2017. Northern Marianas College. In fiscal years 2013 through 2016, the college, the CNMI’s only U.S.-accredited institution of higher learning, received $2.1 million in CW-1 funding and prepared annual reports describing how the funds were used to train the CNMI workforce for occupations in which foreign workers currently outnumber U.S. workers. According to the annual report for fiscal year 2016, the college used its CW-1 funding to provide vocational courses and services in business, nursing, community development, and information technology. The college reported using its CW-1 funding to support 457 students in the fall of 2015, 434 such students in the spring of 2016, and 228 students in the summer of 2016. In fiscal year 2016, 66 students graduated from the business and nursing programs and 33 of them found employment upon graduation, according to the college’s annual report. The college reported that CW-1 funding also supported training and services for 891 participants of community-based learning programs offered by the college’s Community Development Institute (programs such as continuing education, language training, and customized workforce training for public and private sector organizations). In addition, the college reported using CW-1 funding to conduct an information technology boot camp to prepare local workers for information technology and data management positions. Two boot camp sessions were conducted, training about 45 participants. Northern Marianas Trades Institute. In fiscal years 2014 through 2016, the institute—a private, nonprofit facility for vocational education established in 2008—received $1.7 million in CW-1 funding. The institute specializes in training youths and adults in construction, hospitality, and culinary trades.
Training programs range from 4 months to 2 years, and students can earn certificates upon completion of a full course of study. The institute also helps students obtain internships and employment by establishing partnerships with private sector companies. For example, in 2014 the institute established an apprenticeship program with a large hotel in Saipan to enable its students to gain experience working in a restaurant kitchen and to improve their cooking skills. In return, the hotel provides instructors to the institute for its new culinary training facility, according to the human resources manager of the hotel. The institute’s senior officers told us that in fiscal year 2016, 300 students were enrolled in the institute’s fall, spring, and summer sessions, and as of November 2016, 132 of these students had found employment after completing their training. CNMI’s Public School System. In fiscal years 2012 through 2015, the Public School System—which consists of 20 public schools, including 5 high schools that graduated 662 students in the 2014–2015 school year—received $2 million in CW-1 funds for its cooperative education program designed to prepare high school students for the CNMI’s job market. The program consisted of training taking place both on and off campus. Students prepared résumés and applications, completed other professional development activities on campus, and gained work experience as trainees in private sector organizations off campus. By the end of the 2014–2015 school year, 452 students were enrolled in the cooperative education program, according to the federal programs officer for the Public School System. According to the Public School System’s federal programs officer, in 2016 the Public School System’s cooperative education program was supported not by CW-1 vocational education fees but by a technical assistance grant provided by DOI. We facilitated group discussions with current and former students of the CW-1-funded programs at each of the three institutions. Several participants told us that the training had helped them find jobs. Participants also identified specific benefits of the training they received, such as increased familiarity with occupations they intended to enter, learning communication skills tailored for specific work environments, and maintaining and improving skills in a chosen career path. Job training supported by the CW-1 vocational education fees is generally available on Saipan and to a limited extent on Tinian and Rota. Northern Marianas College reported providing vocational activities on Tinian and Rota in fiscal years 2013, 2014, and 2015, but not in 2016. The college reported using CW-1 funding to help support 88 students on Tinian and 46 students on Rota in fiscal year 2015. Training offered by Northern Marianas Trades Institute is available only on Saipan. However, the employers we interviewed in the CNMI told us that the benefits of the job training programs supported by the CW-1 vocational education fees were limited to Saipan and that programs run by Northern Marianas College and Northern Marianas Trades Institute were unavailable on Tinian and Rota. For example, in our facilitated discussion with employers in Rota, none of the employers had experience with job training programs supported by CW-1 funding and some did not know about the Public School System’s cooperative education program.
One employer in Rota told us that to obtain job training at Northern Marianas College, Rota residents must fly to Saipan and pay for their own travel and housing expenses in addition to tuition. Similarly, a Tinian employer told us that the vocational education fees he pays for CW-1 permits do not benefit Tinian, since training programs supported by the fees are only offered on Saipan. Performance or financial reporting of CW-1 fees has not always been available. We reported in September 2012 that according to DHS officials, the CW-1 fees transferred by DHS to the CNMI Treasury were not subject to DHS grant terms or conditions, such as performance or financial reporting requirements, and that the Consolidated Natural Resources Act of 2008, which authorized the CW-1 vocational education fee, did not direct DHS to impose any such requirements on the funds. In March 2016, the CNMI Department of Labor signed a memorandum of agreement with the two current recipients of the CW-1 funding, Northern Marianas College and Northern Marianas Trades Institute. As part of the memorandum, the CNMI’s Department of Labor, through its Secretary, was made responsible for the approval, use, and distribution of funds for job training programs provided by the college and the institute. The memorandum required each institution to submit an annual audit report to the CNMI Department of Labor for each fiscal year it receives CW-1 funding. The college has submitted an annual report about its use of CW-1 funding each year since fiscal year 2013, when it first obtained such funding. Northern Marianas Trades Institute submitted its first report about the use of CW-1 funding in fiscal year 2016 in March 2017. The CNMI government and municipalities offer eligible residents the opportunity to apply for scholarship funds to help pay for higher education or vocational training offered either in the CNMI or elsewhere. Scholarship recipients can obtain financial assistance from either the CNMI Scholarship Office or the Saipan Higher Education Financial Assistance Office. Data provided to us by those offices show that approximately $3.1 million in financial assistance was provided to recipients in the CNMI to attend institutions of higher education and vocational training in 2016 (see fig. 11). CNMI Scholarship Office. In fiscal years 2014 through 2016, this office provided about $3.5 million to about 655 scholarship recipients each year to enroll in higher education or vocational training programs in priority fields such as science, technology, engineering, math, construction, hospitality, and nursing. Scholarship recipients must sign memorandums of agreement that require them to return to the CNMI within 3 months of graduating from or dropping out of the institution or program for which they are receiving financial assistance. After returning to the CNMI, the students must also provide services by working in the CNMI for a period equal to the period for which they received financial assistance. Data from the CNMI Scholarship Office show that in fiscal years 2014 through 2016 at least 58 percent of the recipients of scholarships from the CNMI Scholarship Office obtained education or training outside the CNMI. Approximately 4,440 of the office’s 7,400 current and previous CNMI scholarship recipients, or 60 percent of all recipients since the inception of the program, have defaulted on the terms of their scholarship agreements, according to the scholarship administrator.
Saipan Higher Education Financial Assistance Scholarship Office. In fiscal years 2014 through 2016, this office provided about $5.2 million in financial assistance to about 1,000 scholarship recipients each year, supported entirely by Saipan’s municipal local license fees for casino poker and other gaming machines. Scholarship recipients must obtain education or training on Saipan or off-island in priority fields of study, such as accounting, nursing, teaching, and hospitality, among others. Recipients must also return to the island within 3 months of graduation or nonenrollment and take jobs in the CNMI’s private or public sector. In fiscal years 2014 through 2016, data provided by the office show that around 40 percent of all scholarship recipients each year obtained education or training off-island. The office’s Administrator estimated that approximately 90 percent of the 2,759 students who have received financial aid scholarships since the program began have defaulted on the terms of their scholarship agreements, requiring debt repayment. In 2015, the office increased collections of outstanding debt by 55 percent over what it collected in 2014, according to the office’s 2015 annual report. From July 2012 through June 2016, DOL provided about $5.3 million in grants under the Workforce Investment Act of 1998 (WIA) and the Workforce Innovation and Opportunity Act of 2014 (WIOA) to the CNMI Department of Labor’s Workforce Investment Agency (see table 7). That agency carried out WIA programs in the CNMI and now administers programs under WIOA. DOL’s Employment and Training Administration conducts federal oversight of these programs. The CNMI developed a state plan outlining a 4-year workforce development strategy under WIOA and submitted its first plan by April 1, 2016. The plan and the WIOA performance measures took effect in July 2016. According to its state plan, the CNMI Department of Labor has formed a task force to assess approaches for using workforce programs to prepare CNMI residents for jobs that will be available because of ongoing reductions in the number of foreign workers and the eventual expiration of the CW program. Providers of DOL-funded worker training include Northern Marianas College, Northern Marianas Trades Institute, CNMI government agencies, and private businesses. Examples of training provided by these entities include courses toward certification as a phlebotomy technician, a nursing assistant, and a medical billing and coding specialist. Under the terms and conditions of DOL grants, the CNMI’s Workforce Investment Agency submitted quarterly and annual performance reports to DOL. The quarterly performance reports contained information on the number of program participants; the characteristics and demographics of these participants; and the services provided under the Adult, Dislocated Worker, and Youth programs, including job search assistance, career counseling, and occupational skills training. These programs were implemented on a program year basis, which for program year 2015 began on July 1, 2015, and ended on June 30, 2016. See app. IX for program year 2015 performance measures and negotiated and actual levels of performance reported by the CNMI Department of Labor. Table 7 presents annual data, as reported by the CNMI’s Workforce Investment Agency to DOL, on the number of individuals who received services under the Adult, Dislocated Worker, and Youth programs in each of the last 4 program years. It also presents funding data as reported by DOL. 
In September 2016, DOI approved a $200,000 grant to create a team of labor certification technicians and a statistician at the CNMI Department of Labor to help collect, compile, and analyze data on the CW program, according to DOI. Through the grant, DOI’s Office of Insular Affairs Technical Assistance Program seeks to enable the CNMI to obtain information on job categories currently held by CW-1 workers and monitor losses and gains in particular job fields. Because the CNMI is in transition and working to build a stronger U.S.-citizen workforce, the grant is also meant to help the CNMI develop a strategic plan to provide real-time data on the most in-demand job fields and other information needed by decision makers for allocating training and workforce development resources. Under the grant’s terms, the CNMI government is expected to deliver all required elements by the end of fiscal year 2018. On October 2, 2015, and again on January 4, 2016, the prior and current CNMI Governors sent letters to President Obama requesting that he initiate consultations under section 902 of the Covenant to consider two issues affecting the relationship of the CNMI with the federal government. The first issue involved immigration and labor matters affecting the growth potential of the CNMI economy, and the second issue concerned proposed and ongoing military activities in the CNMI. In May 2016, President Obama designated the DOI Assistant Secretary for Insular Areas as the Special Representative for the United States for 902 Consultations. The CNMI Governor was designated the Special Representative for the CNMI. In December 2016, after 8 months of official consultations, informal discussions, and site visits to locations in the CNMI, the Special Representatives transmitted a report to the President that included six recommendations on immigration and labor matters. These recommendations included proposals for legislative amendments, regulatory changes, or DHS actions. On January 17, 2017, the report was submitted to Congress, marking the first known time a 902 Report has been submitted to Congress since the U.S.–CNMI Covenant was fully implemented in 1986, according to DOI. The report’s recommendations were as follows: 1. Extending the CW program beyond 2019 and other amendments, such as raising the CW-1 cap and restoring the executive branch’s authority to extend the CW program. According to the report, Public Law 113-235 repealed the U.S. Secretary of Labor’s authority to extend the transition period beyond 2019. The report states that the CNMI seeks to extend the transition period by 10 years from December 31, 2019, to December 31, 2029; to allow the Secretary of Labor to grant a 5-year extension past this date; and to increase the numerical limit of CW-1 visas from 12,998 to 18,000 per fiscal year. The Special Representatives support an extension of the transition period, restoring extension authority, and raising the CW-1 cap. 2. Providing permanent status for long-term guest workers. The Special Representatives support congressional action to make long-term guest workers and their families with significant equities in the CNMI eligible for lawful permanent resident status with a path to citizenship. According to the report, it is the CNMI’s position that long-term guest workers, through their continued presence and contributions to the CNMI, are intertwined with the economic development and growth of the commonwealth.
However, these individuals have no path to lawful permanent residence, according to the report. The report states that the CNMI would like to recognize their important contributions to a place many consider home, in some cases for more than 20 years, by offering them a path to lawful permanent residence. 3. Soliciting input on suggested regulatory changes to the CW program. According to the report, the CNMI’s position is that DHS’s “first-come, first-served” application system for CW-1 permits has resulted in the displacement of current and long-time CW-1 workers by new workers. In addition, long-time guest workers who have built families, homes, and lives in the CNMI are unprotected and are not given priority within the overall numerical allocation of CW-1 permits. The CNMI’s position is that its Department of Labor should have a role in determining what employers should be deemed eligible to sponsor foreign workers under the CW program. For these reasons, the CNMI suggested several regulatory changes that DHS could implement, such as prioritizing renewals of CW-1 permits over new CW-1 applications, establishing a separate numerical allocation for long-term CW-1 workers, and partnering with the CNMI on the distribution and allocation of available permits. The Special Representatives recommend that DHS publish a Request for Information to solicit input from a variety of parties on various regulatory changes, including those proposed by the CNMI. 4. Considering immigration policies to address regional labor shortages. According to the report, the CNMI believes that many of the newer CW-1 applications are for Chinese construction workers, and a CW system with a disproportionate allocation of permits for construction workers could hamper the development of its service sector. For this reason, according to the report, the CNMI calls for making Chinese nationals eligible for H-2B visas for work performed in the CNMI. In addition, because of the special needs of the region, the CNMI calls for amending U.S. immigration laws to create additional Guam or CNMI-Only nonimmigrant visa categories for which current law does not provide. Finally, because of its geographic distance from Hawaii and the U.S. continent, and its location in the Asia-Pacific region, the CNMI calls for new legislation to expand the current Guam and CNMI-Only Visa Waiver Program, which allows eligible visitors from designated countries to travel to the CNMI for business or pleasure for up to 45 days without standard federal visa documentation. The Special Representatives support Congress’s consideration of extending and expanding existing immigration policies or developing new policies to address systemic regional workforce challenges currently being experienced in both Guam and the CNMI. 5. Extending eligibility to the CNMI for additional federal workforce development programs. According to the report, unlike the CNMI, several U.S. states, the District of Columbia, the Commonwealth of Puerto Rico, Guam, and the U.S. Virgin Islands are eligible to receive grants to provide for the employment services authorized under the Wagner-Peyser Act. Extending this program to the CNMI (and American Samoa) would ensure that all the territories are treated equally, according to the CNMI. The Special Representatives recommend that DOI’s Office of Insular Affairs work cooperatively with DOL to extend the Wagner-Peyser Act to the CNMI. 6. Establishing a cooperative working relationship between DHS and the CNMI.
According to the report, in recent years, the CNMI Department of Labor has filed Freedom of Information Act requests to obtain information regarding the approved CW-1 permit holders and would like an easier DHS process for obtaining data from USCIS on the CW program. According to the CNMI’s Secretary of Labor, the CNMI Department of Labor has a very good working relationship with DHS but could benefit from more coordination on data inquiries. The Special Representatives recommend that DHS and the CNMI work cooperatively to exchange information and continue existing efforts to educate employers about applying for alternative nonimmigrant visas in place of the CW-1 visa when appropriate. The U.S. Secretary of Homeland Security has the discretion under current law to implement the two recommendations directed at DHS, according to DHS’s Acting Deputy Chief Counsel. However, he noted that implementing the other recommendations could require enacting new legislation. DOI’s Office of Insular Affairs will, as appropriate, consult with DHS and Congress regarding implementation of the 902 Report recommendations, according to the Acting Assistant Secretary. Table 8 lists the Special Representatives’ six recommendations and summarizes proposed next steps that could be taken toward implementing them. On January 30, 2017, the House of Representatives passed the Northern Mariana Islands Economic Expansion Act (H.R. 339), which relates to recommendation 1. The bill, which has been referred to the Senate Committee on Energy and Natural Resources, was introduced by Congressman Sablan of the CNMI on January 5, 2017. The bill would amend Public Law 94-241 to increase the number of CW-1 permits to 15,000 in 2017 and, among other things, would exclude certain construction occupations from eligibility for new CW-1 permits and increase the CW-1 vocational education fee from $150 to $200. On April 27, 2017, the Senate Energy and Natural Resources Committee held a hearing to discuss this bill. We provided a draft of this report for review and comment to DOC, DHS, DOI, and DOL as well as to the CNMI government. We received technical comments from DOC, DHS, and DOL, which we incorporated as appropriate. We also received written comments from the CNMI Governor. In his letter, the Governor stated that the report provides crucial data on the CNMI’s progress toward expanding the domestic workforce in line with the mandates of Public Law 110-229. He further stated that the report contains key implications for federal and Commonwealth policy makers to consider. The Governor’s letter is reprinted in appendix X. We are sending copies of this report to the appropriate congressional committees, the Governor of the CNMI, the Secretary of Commerce, the Secretary of Homeland Security, the Secretary of the Interior, the Secretary of Labor, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Gootnick at (202) 512-3149 or gootnickd@gao.gov or Oliver Richard at (202) 512-8424 or richardo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XI.
Our objectives were to examine (1) changes in the Commonwealth of the Northern Mariana Islands’ (CNMI) labor market since the federally mandated minimum wage increases began, (2) the potential economic impact of reducing the number of foreign workers to zero, and (3) federal and CNMI efforts to address labor force challenges. For all three objectives, we obtained and analyzed agency data and documents and interviewed officials at the U.S. Departments of Commerce (DOC), Homeland Security (DHS), the Interior (DOI), and Labor (DOL) in Washington, D.C.; San Francisco, California; and Honolulu, Hawaii. In November and December 2016, we conducted fieldwork in Saipan, Tinian, and Rota, CNMI, where we interviewed the CNMI Governor, the Mayors of Tinian and Rota, and the CNMI Secretaries of Commerce, Finance, and Labor. We also conducted discussion groups and semistructured interviews with selected employers, CW-1 workers, U.S. workers currently employed by CNMI businesses, and students or graduates of the CNMI’s job training programs. We worked with organizations in the CNMI whose officials we had interviewed prior to our arrival in order to schedule the discussion group sessions, recruit session participants, and obtain space for holding the sessions. For example, we asked the Saipan Chamber of Commerce, Tinian Chamber of Commerce, and the Rota Department of Commerce to help us schedule discussion groups with representatives of various local businesses. Similarly, we asked the Hotel Association of the Northern Mariana Islands to help us schedule semistructured interviews with selected hotel executives. For background, we described the CNMI’s geography, its history, and its political relationship with the United States by reviewing U.S. and CNMI laws, DHS regulations and documents, previous GAO reports, the U.S. Census Bureau’s decennial census for the CNMI, and estimates of the CNMI’s gross domestic product (GDP) published by DOC’s Bureau of Economic Analysis. All GDP levels were adjusted for inflation to 2015 dollars. We also analyzed visitor arrival data gathered by the Marianas Visitors Authority from customs declarations forms for fiscal years 1990 through 2016, which we deemed sufficiently reliable for our purposes. We also obtained and analyzed DHS’s U.S. Customs and Border Protection (CBP) data on airport admissions at the Saipan and Rota airports from 2010 through 2016 to understand the number of travelers granted parole. Despite delays in CBP’s processing of the CNMI’s airport admissions data and CBP officers’ gradual adherence to new operational guidance, we determined that CBP data were reliable for our purposes by interviewing CBP data analysts and obtaining answers to our data reliability questions. Finally, we analyzed 2003–2015 data provided by the Hotel Association of the Northern Mariana Islands on hotel occupancy and room rates and adjusted those rates to 2015 prices based on inflation. To evaluate changes in the CNMI’s labor market since the federally mandated minimum wage increases began, we (1) analyzed overall employment data for all domestic and foreign workers in the CNMI, (2) estimated inflation adjusted average earnings, (3) determined the industries and occupations with the highest numbers of CNMI workers affected by the current and scheduled minimum wage increases, and (4) obtained employers’ and employees’ opinions about the minimum wage increases.
To analyze overall employment in the CNMI, we relied on tax data provided by the CNMI’s Department of Finance for calendar years 2001 to 2015 for citizens and noncitizens. We classified as domestic workers all citizens, that is, anyone who did not require a visa for employment in the CNMI, including citizens of the United States, the CNMI, and the freely associated states (Federated States of Micronesia, Republic of the Marshall Islands, and Republic of Palau). All noncitizens or nondomestic workers were classified as foreign workers. We used the same data source for three prior GAO reviews of minimum wage changes in the CNMI. We also interviewed CNMI officials who prepared the tax data to understand how the data were compiled and any limitations. We reviewed the data to determine consistency, identified fluctuations, and consulted with CNMI officials to determine possible explanations for fluctuations. We determined that the data were sufficiently reliable for our purposes. Undocumented foreign workers were excluded from the scope of our review. To estimate inflation adjusted average earnings, we also relied on CNMI tax data from 2003 through 2015, with counts of the number of individuals in different ranges of earnings. To calculate mean wages in the CNMI, we divided the total sum of earnings by the number of workers with non-zero wages per calendar year from 2003 through 2015. All dollar values were adjusted to 2015 prices based on the CNMI’s Consumer Price Index (CPI). Because the U.S. Bureau of Labor Statistics collects CPI data on the 50 U.S. states but not the CNMI, we relied on other sources of data to compare changes in earnings or wage rates to changes in prices. We obtained historical data on the CPI from the CNMI’s Department of Commerce. To produce an annual CPI series, we analyzed quarterly CPI data from the first quarter of 2003 to the fourth quarter of 2014 and averaged the four quarters in each year. To obtain the CPI for 2015, which was not available from the CNMI’s Department of Commerce, we followed a methodology used by DOC’s Bureau of Economic Analysis that applied Guam’s inflation rate to the CNMI. We also interviewed CNMI officials responsible for producing the quarterly CPI estimates to understand how the data were prepared and any limitations to the data and concluded that the CPI data were sufficiently reliable for the purposes of our review. To determine the industries and occupations with the highest numbers and percentages of CNMI foreign and domestic workers affected by the current and scheduled minimum wage increases, we analyzed the results of the CNMI’s 2014 Prevailing Wage Study; assumed all 2014 workers maintained employment; and projected the numbers and percentages of workers directly affected by the scheduled 2016, 2017, and 2018 wage increases. We restricted our analysis to scheduled minimum wage increases, not those that could happen under possible future legislation. We then determined the industries and occupations with the largest numbers of workers directly affected by current and scheduled wage increases by counting the number of hourly wage workers in 2014 who earned less than or equal to the 2016, 2017, and 2018 minimum wage levels, by industry and occupation. We determined the data contained in the CNMI’s 2014 Prevailing Wage Study to be sufficiently reliable for the purposes of our review by interviewing CNMI officials who gathered and analyzed the original data and obtaining answers to our data reliability questions.
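The following sketch illustrates, with made-up figures rather than the CNMI tax or Prevailing Wage Study microdata, the two calculations described above: averaging quarterly CPI values into an annual series to express earnings in 2015 dollars, and counting hourly workers who earned at or below a scheduled minimum wage level.

```python
import numpy as np

# (1) Annualize quarterly CPI and deflate nominal earnings to 2015 dollars.
# The CPI values and earnings figure below are hypothetical.
quarterly_cpi = {2014: [112.0, 112.5, 113.1, 113.4],
                 2015: [114.0, 114.2, 114.5, 114.8]}
annual_cpi = {year: np.mean(values) for year, values in quarterly_cpi.items()}

nominal_avg_earnings_2014 = 15_000.0
earnings_2014_in_2015_dollars = (nominal_avg_earnings_2014
                                 * annual_cpi[2015] / annual_cpi[2014])
print(f"2014 average earnings in 2015 dollars: ${earnings_2014_in_2015_dollars:,.0f}")

# (2) Count hourly workers earning at or below a scheduled minimum wage level;
# the wage list is hypothetical, and only the 2016 and 2018 levels are shown.
hourly_wages_2014 = np.array([5.55, 6.05, 6.25, 6.55, 7.00, 7.25, 8.50, 12.00])
for new_minimum in (6.55, 7.25):
    affected = int(np.sum(hourly_wages_2014 <= new_minimum))
    share = affected / hourly_wages_2014.size
    print(f"Minimum wage ${new_minimum:.2f}: {affected} workers "
          f"({share:.0%}) directly affected")
```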
To obtain local employers’ opinions about the minimum wage increase, we gathered and reviewed the results of a survey conducted in 2016 by the Saipan Chamber of Commerce and interviewed board members of the chamber. We also facilitated four discussion groups and conducted seven semistructured interviews with representatives of 42 employers operating on Saipan, Tinian, and Rota in the CNMI. Discussion groups were held with representatives of selected businesses in Saipan, Tinian, and Rota, with groups ranging in size from 4 to 16 participants. Semistructured interviews were held with selected larger employers in Saipan. During discussion groups and semistructured interviews, we asked the same two questions about (1) the effect of and business response to the federal minimum wage increase to $6.55 in October 2016 and (2) the possible future effect of and business response to the federal minimum wage increase to $7.25 scheduled to occur in October 2018. We selected a nonprobability sample of employers for participation in our discussion groups and semistructured interviews. Therefore, views reported by participants in these groups may not be representative of those of all CNMI employers and for that reason are not generalizable. To evaluate the potential economic impact of reducing the number of foreign workers in the CNMI to zero and replacing them with domestic workers, we (1) developed an economic model that simulates how the CNMI’s GDP would change if the number of CNMI-Only transitional worker (CW-1) permits were reduced to zero, (2) evaluated the local supply and demand for labor, and (3) obtained local employers’ and employees’ opinions about the past and scheduled reduction in foreign workers. To develop an economic model that simulates how the CNMI’s GDP would change if the number of CW-1 permits were reduced to zero, we adapted an economic model that we used in a prior GAO report. The model relied on assumptions regarding the substitutability of the domestic and foreign workforces. In this report, we also relied on reports produced by the Bureau of Economic Analysis on GDP in the CNMI from 2011 to 2015. We presented the change in GDP relative to its 2015 value. We examined the possible range of effects from a reduction in foreign workers by calculating the effect on GDP under a range of assumptions. Specifically, we simulated the effect on GDP, varying the assumptions regarding the ability of CNMI domestic workers to substitute for foreign workers and the effect of a reduction in labor (see app. VI). To check this analysis, we developed a separate analysis that examined the relationship between GDP and the number of workers in the CNMI from 2002 to 2015. We found that the two factors were indeed related but that relationships differed by industry (see app. V). To evaluate the local supply and demand for labor in the CNMI, we compared the annual number of CW-1 permits approved by DHS’s U.S. Citizenship and Immigration Services (USCIS) to the annual numerical limit (or cap) of CW-1 permits set by USCIS from 2011 to 2016. Numbers of approved CW-1 permits are the number of beneficiaries approved for each fiscal year cap, provided by DHS’s Office of Performance and Quality, Performance Analysis and External Reporting Branch, based on Form I-129CW petitions for CNMI-Only nonimmigrant transitional workers, as of November 18, 2016.
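As an illustration of the retrospective check described above (app. V), the sketch below estimates an employment elasticity of GDP from a log-log regression and applies the report's estimate of about 0.83 to a 45 percent reduction in workers; the GDP and employment series shown are placeholders, not the actual 2002–2015 CNMI data.

```python
import numpy as np

# Placeholder annual series (number of workers and GDP in millions of dollars)
employment = np.array([44_000, 40_000, 36_000, 33_000, 30_000, 28_000, 27_000])
gdp = np.array([1_350.0, 1_260.0, 1_170.0, 1_100.0, 1_030.0, 980.0, 950.0])

# OLS slope of log(GDP) on log(employment) is the estimated elasticity
elasticity_est = np.polyfit(np.log(employment), np.log(gdp), 1)[0]
print(f"Estimated elasticity from placeholder data: {elasticity_est:.2f}")

# The report estimates that a 10 percent decline in workers was associated with
# an 8.3 percent decline in GDP (elasticity of about 0.83); applied linearly to
# a 45 percent reduction in total workers:
print(f"Implied GDP contraction: {0.83 * 45:.0f} percent")
```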
We analyzed the number of approved CW-1 permits by fiscal year (2012 through 2016), month, country of birth, occupation, and tax identification. We determined DHS/USCIS data to be sufficiently reliable for our purposes. We also analyzed the gap between the local supply and demand for labor in the CNMI by comparing the number of approved CW-1 workers to the potential number of people seeking employment in 2016, based on a CNMI Behavioral Health Survey and the number of high school and college graduates in 2016, as provided by the CNMI’s Public School System. Additionally, we reviewed expected labor requirements outlined by the CNMI’s Bureau of Environmental and Coastal Quality as of December 20, 2016. To obtain local employers’ and employees’ opinions about the past and scheduled reductions in foreign workers, we conducted discussion groups and semistructured interviews with 42 CNMI employers and 70 workers. In the four discussion groups and seven semistructured interviews with employers, we asked the same two questions regarding reductions in foreign workers: 1. How did reaching the numerical cap of CW-1 permits in May for fiscal year 2016 and October for 2017 affect your business and how did you respond? 2. How do you think your business will be affected and will respond by the end of the CW permit program, scheduled for December 31, 2019? As indicated above, the discussion groups with selected employers ranged in size from 4 to 16 participants. We also conducted eight discussion groups with selected workers in Saipan. We conducted separate discussions with four groups totaling 41 workers who held CW-1 visas and four groups totaling 29 workers who were citizens or permanent residents of the United States or the freely associated states. Discussion groups with workers who held CW-1 visas ranged in size from 8 to 16 participants, while discussion groups with workers who were citizens or permanent residents of the U.S. mainland or territories ranged in size from 5 to 10 participants. In all discussion groups with workers, we asked the same two questions regarding reductions in CW-1 workers: 1. How did reaching the numerical cap of CW-1 permits in May for fiscal year 2016 and October for 2017 affect your job? 2. How do you think the end of the CW permit program, scheduled for December 31, 2019, will affect your job in the future? Participants in all employer and employee discussion groups were selected using a nonprobability sampling approach. Therefore, the views or experiences reported by individuals in these groups may not be generalizable. To evaluate federal and CNMI government efforts to address labor force challenges, we (1) collected information about the CNMI’s three job training programs supported by CW-1 vocational education fees, (2) assessed the results of the CNMI’s two scholarship programs, (3) examined employment programs offered by the CNMI’s Department of Labor, (4) reviewed a grant agreement between DOI and the CNMI government, and (5) summarized the results of the 902 consultative process (902 Consultations) that resulted in a report to Congress with several recommendations. To collect information about the CNMI’s three job training programs, we analyzed financial records showing CW-1 vocational education fees that DHS transferred to the CNMI government in fiscal years 2012 through 2016.
We also reviewed the CNMI government’s financial records that showed allocations of CW-1 vocational education fees to three education entities in the CNMI: Northern Marianas College, Northern Marianas Trades Institute, and the CNMI’s Public School System. We interviewed administrators of these three entities and held discussions with groups of current or former participants of their job training programs. A total of 38 people participated in our three discussion groups, with groups ranging in size from 10 to 16 participants. Participants were asked, among other things, about the goals of the job training programs and their expectations regarding the impact of the programs on them. Participants in these groups represented nonprobability samples, and therefore, the views and experiences they reported may not be generalizable to all participants in CNMI job training programs. Last, we reviewed a previous GAO report, an independent audit of the Northern Marianas College for the period 2014 through 2015, and an independent audit of the CNMI’s Public School System in 2015, the most recent reports available. To assess the results of the CNMI’s two scholarship programs, we interviewed officials, gathered documentation, and analyzed data provided by the CNMI’s Scholarship Office and the Saipan Higher Education Financial Assistance Program, the two organizations that provide funding to students to attend higher education and job training programs. The Saipan Higher Education Financial Assistance Program is supported by the Municipality of Saipan, Office of the Mayor. We did not collect similar information about the scholarship programs offered by the municipalities of Rota or Tinian. To examine the employment programs offered by the CNMI’s Department of Labor under the Workforce Investment Act of 1998 (WIA) and the Workforce Innovation and Opportunity Act (WIOA), we interviewed the CNMI’s Secretary of Labor and case officers working at the CNMI’s Workforce Investment Agency and officials from DOL’s Employment and Training Administration. We also analyzed the CNMI’s WIA performance data for program years 2012 through 2015. We limited our scope to those program years because reporting requirements changed in program year 2016, which began on July 1, 2016, as a result of the enactment of WIOA. We also reviewed several GAO reports about WIA and WIOA and the CNMI’s 2016 WIOA state plan. To review the federal grant agreement between the U.S. and CNMI governments, we interviewed officials and reviewed documents provided by DOI and the CNMI Department of Labor. To understand the results of the 902 Consultations, we reviewed section 902 of the U.S.–CNMI Covenant and the report published as a result of the process. We also interviewed CNMI government and DOI and DHS officials who participated in the process. Data from the Marianas Visitors Authority show that the downward trend in the number of Japanese visitors to the Commonwealth of the Northern Mariana Islands (CNMI) from 2013 through 2016 was offset by the growth in visitors from China and South Korea. From fiscal years 2013 through 2016, the number of visitors from Japan dropped by 58 percent, from 148,423 to 62,120 visitors. Meanwhile, the number of Chinese visitors rose by 83 percent (112,570 to 206,538), and the number of South Korean visitors rose by 48 percent (135,458 to 200,875), as shown in figure 12. According to the U.S.
Department of Homeland Security (DHS), Chinese visitors are paroled into the CNMI under the department’s discretionary parole authority, while eligible South Korean and Japanese visitors enter the CNMI under the U.S. Visa Waiver Program. With the increases in total annual visitor arrivals, hotel occupancy in the CNMI has also risen in recent years. Data from the Hotel Association of the Northern Mariana Islands, which in 2016 represented 12 CNMI hotels, show that from 2011 through 2015, hotel occupancy rates for its member hotels increased from 64 to 87 percent; at the same time, the yearly average inflation-adjusted room rate increased by 35 percent from $98 to $133 per night (see fig. 13). Table 9 shows the percentages of foreign workers and of workers with CNMI-Only transitional worker (CW-1) permits in the Commonwealth of the Northern Mariana Islands (CNMI) in 2014, by industry. Workers with approved CW-1 permits are a subset of foreign workers. According to the CNMI’s 2014 Prevailing Wage Study, which is based on a survey of employers, about 90 percent of foreign workers had CW-1 permits. The table is sorted with the highest percentage of foreign workers first. As the table shows, in the (1) agriculture, forestry, fishing, and hunting; (2) other services (except public administration); (3) construction; and (4) accommodation and food services industries, 80 percent or more of workers were non-U.S. citizens, or foreign workers. The public administration industry—where citizenship is sometimes a requirement—has the lowest percentage of non-U.S. citizens, with about 22 percent. Table 10 shows the numbers and percentages of foreign workers and workers with CW-1 permits by occupation in 2014. The table is sorted by the percentage of foreign workers in each occupation, from highest to lowest. As the table shows, production occupations, personal care and service occupations, and building and construction occupations had the highest percentages of foreign workers, at almost 90 percent. On the other hand, foreign workers made up only about 40 percent of protective service occupations. As the minimum wage increases continue, they will affect a growing percentage of hourly workers in the Commonwealth of the Northern Mariana Islands. Table 11 shows the numbers and percentages of workers directly affected by the most recent and future scheduled minimum wage increases across different industries. Directly affected workers are CNMI wage workers in 2014 who, assuming they maintained employment, would have been directly affected by the federally mandated 2016–2018 wage increases. The table is sorted by industries with the highest numbers of workers directly affected by the most recent minimum wage increase first. As the table shows, the tourism-related industries, such as retail trade and accommodation and food services, are likely to be more directly affected by the most recent and future scheduled minimum wage increases. More than 6,000 workers (or approximately 77 percent) in these two industries have been directly affected by the most recent minimum wage increase. The minimum wage increases are also likely to have a large impact on the construction industry; these increases have directly affected 79 percent of its workers. Public administration and educational services are less likely to be directly affected by the current and future minimum wage increases, with 9 and 14 percent of those industries’ workers directly affected, respectively.
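The directly affected tabulation used in tables 11 and 12 can be illustrated with a small calculation. The sketch below simply counts workers whose hourly wage falls below a scheduled new minimum; the wage values and the $7.05 threshold are hypothetical illustrations, not figures drawn from the CNMI data.

```python
def directly_affected(hourly_wages, new_minimum):
    """Count workers whose hourly wage is below a scheduled new minimum."""
    affected = [w for w in hourly_wages if w < new_minimum]
    return len(affected), len(affected) / len(hourly_wages)

wages = [6.05, 6.55, 6.55, 7.25, 9.50, 12.00]  # hypothetical hourly wages
count, share = directly_affected(wages, new_minimum=7.05)
print(f"{count} of {len(wages)} workers ({share:.0%}) directly affected")
```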
Similar to industries, certain occupations are also likely to be directly affected more by minimum wage increases than others. Table 12 shows the numbers and percentages of workers directly affected by the most recent and future scheduled minimum wage increases across different occupations. The table is sorted by occupations with the highest numbers of workers directly affected by the most recent minimum wage increase first. As the table shows, sales and related occupations and food preparation and serving related occupations are most likely to be directly affected by the most recent and future scheduled minimum wage increases. More than 4,000 workers (or approximately 85 percent) in these two occupations are likely directly affected by the most recent minimum wage increase. Building and grounds cleaning and maintenance occupations, production occupations, and construction and extraction occupations also are likely to experience relatively large impacts from the most recent and future scheduled minimum wage increases. To simulate the effect of a reduction in the number of foreign workers on the Commonwealth of the Northern Mariana Islands (CNMI) economy, we followed an approach similar to that used in an earlier GAO report that also analyzed the economic effect of a reduction of foreign workers in the CNMI. Specifically, to model the CNMI economy, we employed a standard Cobb-Douglas production function that we modified to allow for both foreign and domestic workers:

Y = A·K^α·(δ·FW^(−ρ) + (1 − δ)·RW^(−ρ))^(−(1 − α)/ρ)

In this model, Y is output, K is capital, A is a constant, FW is the foreign workforce, and RW is the resident workforce. The parameters that control the substitutability and the factor shares of resident and foreign labor are ρ and δ. In addition, the elasticity of output with respect to the “total workforce” is given by (1 − α). We chose this function because it allowed us to vary assumptions about the degree to which both foreign and domestic labor act as substitutes for each other. In this model, the less closely foreign workers substitute for domestic workers, the greater the effect of any restriction on foreign workers—because domestic workers are less able to step into the roles occupied by foreign workers. Allowing the substitutability to be less than perfect is consistent with a recent paper suggesting that even when the education level and experience level of immigrant and nonimmigrant workers are identical, they may not be perfect substitutes. According to information from the CNMI’s Department of Commerce, foreign and domestic workers have similar levels of education. For example, about 20 percent of each group had a college education or greater in 2014. However, there are stark differences between the two groups in pay. The average wage for a CNMI-Only transitional worker (CW-1) visa holder in 2014 was $7.54, about 69 percent of the average wage of domestic workers, at $10.94. We chose parameters for the model by examining research in this area and available data. For example, we simulated the effect of greater or lesser “substitutability.” However, because of the extent of uncertainty, we purposely allowed the parameters to have large ranges. We assumed that capital and technology remained constant. See table 13 for the parameters and inputs used in our analysis. Determining the number of CW-1 workers required that we combine different data sets, which introduced a potential source of error.
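A minimal sketch of this simulation is shown below. The parameter values are illustrative placeholders rather than the ranges GAO drew from table 13, the resident workforce figure is likewise assumed, and capital and technology are held fixed as described above. In this parameterization, ρ values closer to zero imply poorer substitutability between foreign and resident workers (the elasticity of substitution is 1/(1 + ρ)), and ρ = −1 corresponds to perfect substitutes.

```python
def output(K, FW, RW, A=1.0, alpha=0.3, delta=0.4, rho=-0.5):
    """Y = A * K**alpha * (delta*FW**(-rho) + (1 - delta)*RW**(-rho)) ** (-(1 - alpha)/rho)."""
    labor = (delta * FW ** (-rho) + (1 - delta) * RW ** (-rho)) ** (-1.0 / rho)
    return A * K ** alpha * labor ** (1 - alpha)

def gdp_loss_pct(fw, rw, alpha, delta, rho):
    """Percentage fall in output when all foreign workers are removed, capital held fixed."""
    y_before = output(1.0, fw, rw, alpha=alpha, delta=delta, rho=rho)
    y_after = output(1.0, 0.0, rw, alpha=alpha, delta=delta, rho=rho)
    return 100.0 * (y_before - y_after) / y_before

fw = 11_370   # CW-1 workers assumed in the report's main case
rw = 14_000   # resident workforce: a placeholder, not a figure from the report

# Sweep deliberately wide parameter ranges, as the report does.
losses = [gdp_loss_pct(fw, rw, alpha, delta, rho)
          for alpha in (0.2, 0.3, 0.4)        # capital share
          for delta in (0.3, 0.4, 0.5)        # weight on foreign labor
          for rho in (-1.0, -0.75, -0.5)]     # rho = -1 means perfect substitutes
print(f"simulated GDP reduction: {min(losses):.0f}% to {max(losses):.0f}%")
```

With these placeholder ranges the printed interval is broadly comparable to, but not identical to, the 26 to 62 percent range reported in the text.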
As a check on our results, we also ran simulations with different estimates of the CW-1 workers. We found that making different assumptions about the number of CW-1 workers affected our estimates but did not affect the overall message. We ran a simulation in which the number of CW-1 workers was assumed to be 9,715, based on the number of CW-1 permits approved by the U.S. Citizenship and Immigration Services in fiscal year 2015. We found that making this change caused the range of gross domestic product (GDP) reduction to change, from 26 to 62 percent under the assumption of 11,370 CW-1 workers to 22 to 45 percent under the assumption of 9,715 CW-1 workers. We addressed the possibility that the percentage of visa holders with CW-1 permits, which we based on the CNMI prevailing wage study, might be too low. Specifically, we ran a simulation in which we assumed that the 641 workers from the freely associated states (Federated States of Micronesia, Republic of the Marshall Islands, and Republic of Palau) in the prevailing wage study were incorrectly coded as non-CW-1 visa holders, instead of not requiring a visa at all. This possibility was noted in the methodology section of the prevailing wage study. Making this change increased the number of CW-1 workers to 11,816. This led to a range of simulation results of 28 percent to 68 percent GDP reduction. We used the limited data available to produce an estimate of the relationship between gross domestic product (GDP) in the Commonwealth of the Northern Mariana Islands (CNMI) and the number of workers in the CNMI workforce. We relied on data from two sources. For the number of workers, we used data from the CNMI tax system. For information on the size of the economy (GDP), we used the most recent data available from the U.S. Department of Commerce’s Bureau of Economic Analysis (BEA). BEA has reported data on GDP in CNMI from 2002 to 2015. From 2002 to 2015, the CNMI’s inflation-adjusted GDP fell from $1.47 billion to $814 million, according to BEA, a decline of 45 percent. The decline in GDP reflects the departure of the garment industry. Over this same period, CNMI tax data show that the number of total workers fell from 50,436 to 25,307—a decline of about 50 percent—accounted for almost entirely by the reduction of foreign workers. Because of the lack of other relevant data, our analysis has important limitations. First, it attributes all of the change in GDP to changes in labor and assumes that nothing else in the CNMI changed over this period. It is possible, however, that capital on the island also diminished as labor left, since machinery may have been removed from the island and vacant factories may have been depreciated. Second, the change in GDP also could be attributed to the changes in the global economy, as well as other factors, which are not accounted for in the statistical model. Given these limitations, we employed a simple linear regression model in which the dependent variable was the natural log of GDP (presented in fig. 14) and the explanatory variables were the natural log of the number of workers (presented in table 12) and a constant. We used the natural log because that allowed us to interpret the coefficient on the number of workers as an elasticity. We estimated a coefficient of 0.83, which, interpreted as an elasticity, implies that for every 10 percent decrease in the number of workers, GDP declines by about 8 percent, on average. We also ran a version of this regression with a linear time trend.
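A minimal version of this log-log regression can be sketched with ordinary least squares. The worker and GDP series below are illustrative stand-ins for the CNMI tax and BEA data, chosen only to echo the magnitudes cited above; the time-trend variant would simply add a year column to the design matrix.

```python
import numpy as np

# Illustrative 2002-2015 series only (not the CNMI tax or BEA data): workers
# fall from roughly 50,000 to 25,000 and real GDP from about $1.47 billion
# to $0.81 billion, echoing the magnitudes cited above.
years = np.arange(2002, 2016)
workers = np.linspace(50_436, 25_307, len(years))
gdp = np.linspace(1.47e9, 0.814e9, len(years))

# ln(GDP) = b0 + b1 * ln(workers); b1 is read as an elasticity.
X = np.column_stack([np.ones(len(years)), np.log(workers)])
b0, b1 = np.linalg.lstsq(X, np.log(gdp), rcond=None)[0]
print(f"estimated elasticity: {b1:.2f}")  # GAO's estimate from the actual data was 0.83
```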
Including a linear time trend yielded a similar estimated relationship between GDP and the number of workers. Figure 14 shows the relationship between these two variables as a scatter plot. In a separate analysis, however, we found that the relationship between GDP and the number of workers differed by industry, which further indicates the degree of uncertainty, and shows that the experience of the garment industry may not be generalizable. We determined this relationship was inconsistent across different industries using data on GDP from BEA by comparing GDP with the number of workers in each industry. However, because of a data limitation, we measured the number of workers not by the number of unique Social Security numbers but by the number of W-2 wage and tax statement forms filed by CNMI employers. A single person can have W-2 forms at multiple employers if he or she holds multiple positions. In addition, GDP by industry is available through 2014 rather than 2015. Because of the inconsistency between industries and the limitations described above, we applied a simulation method to project the effect of reducing the number of CNMI- Only transitional worker permits in the following appendix. Although the majority of foreign workers approved by the U.S. Department of Homeland Security (DHS) in the Commonwealth of the Northern Mariana Islands (CNMI) have CNMI-Only transitional worker (CW-1) permits, DHS also approves other types of permits for foreign workers in the CNMI (see table 14). The Commonwealth of the Northern Mariana Islands (CNMI) Bureau of Environmental and Coastal Quality compiles data from permit applications submitted by developers or their consultants to the CNMI government. These data sets include impacts on infrastructure and workforce needs, among other topics, and are typically incorporated in their Environmental Impact Assessments. The bureau only gathers projected workforce needs for developments—such as resorts or housing developments—that have submitted permit applications to the CNMI government or have made their proposed plans public. These proposed developments are in various stages of planning, permitting, construction, or operation. Table 15 includes all proposed CNMI projects to begin construction from 2015 to 2019 that have the potential to directly and significantly affect the CNMI’s coastal resources and therefore require zoning permits. Eight of the 22 projects—indicated by the table’s shaded rows—are for hotels, resorts, or casinos that together account for 97 percent (7,846) of the estimated total 8,124 employees needed for operation. For program years 2012 through 2015, the Workforce Investment Act (WIA) required grant recipients (1) to use performance measures that gauged outcomes for program participants in the areas of employment, employment retention, and earnings and (2) to negotiate performance levels for each performance measure with the U.S. Department of Labor (DOL). As shown in table 16, the Commonwealth of the Northern Mariana Islands (CNMI) met the negotiated performance levels for two of the nine performance measures in program year 2015. While DOL officials told us that recipients of WIA and Workforce Innovation and Opportunity Act (WIOA) grants are generally required to meet negotiated performance levels in order to receive their full funding allocations, officials said that they do not apply financial sanctions to the CNMI and other outlying areas. 
DOL officials noted that the CNMI did not always submit its quarterly and annual performance data on time. They also stated that the CNMI did not use DOL’s electronic performance reporting system, in part because of limitations in the CNMI’s physical infrastructure. DOL officials said that they have had concerns about the reliability of the performance data submitted by the CNMI, which they have addressed by offering technical assistance sessions at annual regional meetings and through e-mails and conference calls with CNMI Department of Labor officials. In addition, the DOL grant manager visited the CNMI in March 2015 to provide technical assistance on a variety of grant management and performance topics. In addition to the contacts named above, Emil Friberg (Assistant Director), Julia Ann Roberts (Analyst-in-Charge), Sada Aksartova, David Blanding, Benjamin Bolitzer, and Moon Parks made key contributions to this report. Caitlin Croake, David Dayton, Neil Doherty, Mary Moutsos, and Alexander Welsh provided technical assistance.
American Samoa: Alternatives for Raising Minimum Wages to Keep Pace with the Cost of Living and Reach the Federal Level. GAO-17-83. Washington, D.C.: December 2, 2016.
American Samoa and the Commonwealth of the Northern Mariana Islands: Economic Indicators Since Minimum Wage Increases Began. GAO-14-381. Washington, D.C.: March 31, 2014.
Commonwealth of the Northern Mariana Islands: Additional DHS Actions Needed on Foreign Worker Permit Program. GAO-12-975. Washington, D.C.: September 27, 2012.
Compacts of Free Association: Improvements Needed to Assess and Address Growing Migration. GAO-12-64. Washington, D.C.: November 14, 2011.
Commonwealth of the Northern Mariana Islands: Status of Transition to Federal Immigration Law. GAO-11-805T. Washington, D.C.: July 14, 2011.
American Samoa and Commonwealth of the Northern Mariana Islands: Employment, Earnings, and Status of Key Industries Since Minimum Wage Increases Began. GAO-11-427. Washington, D.C.: June 23, 2011.
Commonwealth of the Northern Mariana Islands: DHS Should Conclude Negotiations and Finalize Regulations to Implement Federal Immigration Law. GAO-10-553. Washington, D.C.: May 7, 2010.
American Samoa and Commonwealth of the Northern Mariana Islands: Wages, Employment, Employer Actions, Earnings, and Worker Views Since Minimum Wage Increases Began. GAO-10-333. Washington, D.C.: April 8, 2010.
Commonwealth of the Northern Mariana Islands: Managing Potential Economic Impact of Applying U.S. Immigration Law Requires Coordinated Federal Decisions and Additional Data. GAO-08-791. Washington, D.C.: August 4, 2008.
A 2007 law required the minimum wage in the CNMI to rise incrementally to the federal level in a series of scheduled increases. GAO has been periodically required to report on the economic impact of the minimum wage increases in the territory. A 2008 law established federal control of CNMI immigration. It required the U.S. Department of Homeland Security (DHS) to create a transitional work permit program for foreign workers in the CNMI and to decrease the number of permits issued annually, and presently requires that DHS reduce them to zero by December 31, 2019. To implement this aspect of the law, in 2011, DHS created a CW-1 permit program for foreign workers. In addition to the above statutory provisions, GAO was asked to review the implementation of federal immigration laws in the CNMI.
Accordingly, this report examines (1) changes in the CNMI's labor market since the start of the federally mandated minimum wage increases, (2) the potential economic impact of reducing the number of foreign workers to zero, and (3) federal and CNMI efforts to address labor force challenges. GAO reviewed U.S. laws and regulations; analyzed government data; and conducted fieldwork in Saipan, Tinian, and Rota, CNMI. During fieldwork, GAO conducted semistructured interviews and discussion groups with businesses, CW-1 workers, U.S. workers, and current and former job training participants. The Commonwealth of the Northern Mariana Islands' (CNMI) labor market has begun to grow after years of decline, while continuing to rely on foreign workers. By 2015, the number of employed CNMI workers was about 8 percent higher than in 2013, and inflation-adjusted average earnings had risen by 18 percent from 2007 levels. By 2016, about 62 percent of CNMI workers were directly affected by CNMI's minimum wage hike to $6.55 per hour. In 2015, foreign workers, who totaled 12,784, made up more than half of the CNMI workforce and filled 80 percent of all hospitality and construction jobs, according to GAO's analysis of CNMI tax data. If all workers with CNMI-Only transitional worker (CW-1) permits, or 45 percent of total workers in 2015, were removed from the CNMI's labor market, GAO projects a 26 to 62 percent reduction in CNMI's 2015 gross domestic product (GDP)—the most recent GDP available. Demand for foreign workers in the CNMI exceeded the available number of CW-1 permits in 2016—many approved for workers from China and workers in construction occupations. The construction of a new casino in Saipan is a key factor in this demand (see photos taken both before and during construction in 2016). Meanwhile, by 2019, plans for additional hotels, casinos, and other projects estimate needing thousands of new employees. When the CW-1 permit program ends in 2019, available data show that the unemployed domestic workforce, estimated at 2,386 in 2016, will be well below the CNMI's demand for labor. To meet this demand, CNMI employers may need to recruit U.S.-eligible workers from the U.S. states, U.S. territories, and the freely associated states (the Federated States of Micronesia, Republic of the Marshall Islands, and Republic of Palau). Federal and CNMI efforts to address labor force challenges include (1) job training programs offered by Northern Marianas College, Northern Marianas Trades Institute, and the CNMI's Public School System; (2) employment assistance funded by the U.S. Department of Labor and implemented by the CNMI's Department of Labor; and (3) technical assistance provided by the U.S. Department of the Interior. In 2016, a U.S.–CNMI consultative process resulted in a report to Congress with six recommendations, including one to raise the cap on CW-1 foreign worker permits and extend the permit program beyond 2019. GAO is not making recommendations. |
Prescription drug discount cards are a relatively new option for consumers. Most of the large PBM-administered programs have been operating for less than 5 years, although some cards, such as one administered by Express Scripts, have been available for about a decade. Pharmaceutical-manufacturer-sponsored discount cards are a more recent development; the first one began in fall 2001. Together Rx began operating in June 2002. PBM-administered drug discount card programs are generally offered to consumers through such organizations as retail stores, retail pharmacies, employee and other associations, nonprofit organizations, insurance companies, and PBMs. The sponsoring organization typically markets the program under its own name, but contracts with another organization— usually a PBM—to administer the program. Generally, the PBM creates a network of participating pharmacies that have contracts with the PBM specifying discount arrangements. The PBM processes orders for the cards and operates a mail order pharmacy that cardholders may use. Consumers can have as many different cards as they like. Each card can be used at any participating retail pharmacy or through the PBM’s mail order pharmacy. Retail pharmacies play an important role in drug discount card programs because they agree to offer a lower price to cardholders. The PBM administrators with whom we spoke estimated that retail pharmacies fill 75 to 95 percent of the prescriptions paid for using PBM-administered discount cards, with mail order filling the remaining prescriptions. A large majority of prescriptions paid for using pharmaceutical-manufacturer- sponsored cards are also filled by retail pharmacies, rather than through mail order. To the typical pharmacy, however, card users comprise a small share of their prescription business. Representatives of three retail pharmacy chains we contacted told us that from 2 to 10 percent of a pharmacy’s prescriptions are purchased using a card. Under the Administration’s proposed Medicare-Endorsed Prescription Drug Plan Assistance Initiative, established drug card sponsors could apply to CMS for a Medicare endorsement; if they get it, sponsors could advertise this endorsement. Before the injunction was issued, applications from card sponsors were due March 7, 2003, and a final decision on the initial cards that would be Medicare-endorsed was slated to be announced in May 2003. On this timetable, CMS said it expected that beneficiaries would be able to enroll in the card program of their choice beginning in September 2003. Cards receiving the endorsement would have to meet certain standards, which are described below. The CMS rule does not provide details on some of these standards and is silent on how the agency would ensure compliance with some of them. Beneficiary eligibility. A card program would have to be open to all Medicare beneficiaries. Each beneficiary could be enrolled in only one Medicare-endorsed card program at a time, but could withdraw from it at any time. (A database of all cardholders would be maintained to ensure that each beneficiary was enrolled in only one Medicare-endorsed card program.) After withdrawing from a card program, the beneficiary could enroll in another Medicare-endorsed card program, but that enrollment would not take effect until the first day of the following July or January, whichever came first. Fees. A card program could charge an enrollment fee of no more than $25 to each Medicare beneficiary. Coverage. 
Each card program would provide a discount for at least one brand name or generic prescription drug from each therapeutic class of drugs (specified in the final rule) commonly needed by Medicare beneficiaries. CMS said it anticipated periodically modifying the therapeutic classes to keep them up to date with Medicare beneficiaries’ use of drugs and with changes in the pharmaceutical marketplace, including newly approved drugs. Advertised discounts. The discount that a beneficiary would receive by purchasing drugs with a Medicare-endorsed prescription drug card would have to be advertised in dollars, not as a percentage. CMS said it anticipated working with beneficiaries and the pharmaceutical industry to create a means to compare prices for drugs among all Medicare-endorsed prescription drug cards. CMS stated that it would give a special designation to up to 10 percent of cards that offered the deepest discounts to beneficiaries. Negotiation of discounts. Medicare endorsement would require card administrators to negotiate with pharmaceutical manufacturers to provide lower prices to retail pharmacies for drugs purchased by cardholders. Discount card administrators would have to ensure that a “substantial” share of the lower prices was passed on to beneficiaries, either indirectly, through retail pharmacies, or directly. Information for beneficiaries. Enrollment fees, the availability of patient management services, such as drug interaction warnings, and information about the generic equivalent of brand name drugs for each Medicare-endorsed card would be included on CMS’s Web site and in the documents that contain card price comparisons developed by CMS. PBM-administered drug discount cards differ from pharmaceutical-manufacturer-sponsored cards with respect to eligibility, the range of drugs they cover, the extent to which the retail pharmacy is paid for all or part of the difference between the price a person pays without a discount card and the discount card price for a particular drug, and the prices available with a card. The discount card programs administered by PBMs are available to any adult, while the pharmaceutical manufacturers’ cards are available only to Medicare-eligible individuals and couples with incomes below a certain level who do not have prescription drug coverage. Each PBM-administered card covers most outpatient prescription drugs, while the cards sponsored by pharmaceutical manufacturers generally provide discounts only on the outpatient prescription drugs that the company produces. PBM-administered discount cards specify that the cardholder’s price will be the lower of a percentage below a commonly used reference price or the pharmacy’s usual price (generally referred to as the usual and customary price). The typical card sponsored by a pharmaceutical manufacturer offers cardholders either a price that is a specified percentage off a list price or a fixed price for a specified quantity of each covered drug. (See appendix I for selected characteristics of the drug card programs that we examined.) The eligibility requirements for a card generally depend on whether it is administered by a PBM or sponsored by a pharmaceutical manufacturer. Unlike the PBM-administered cards, which are available to any individual, the drug company-sponsored cards are available only to Medicare-eligible individuals and couples with no prescription drug coverage who earn less than a certain amount.
Income eligibility limits for these cards range from $18,000 to $30,000 for an individual and from $24,000 to $40,000 for a couple. PBM-administered discount cards usually cover most brand name and generic drugs. PBM officials said exceptions could include high-cost drugs in limited supply, those needing special administration, and the relatively few outpatient prescription drugs covered by Medicare. Each of the cards sponsored by a pharmaceutical manufacturer typically covers all the outpatient prescription drugs that the manufacturer produces. The number of drugs covered by the four manufacturer-sponsored cards we reviewed ranges from 14 to 46. The Together Rx card offers discounts on about 150 brand name drugs manufactured by its participating pharmaceutical manufacturers. Under all drug discount card programs, retail pharmacies agree to accept a lower price from a cardholder than the usual price they would charge a noncardholder. The card programs vary, however, in whether and to what extent the pharmacies are paid for the difference between these two prices. For purchases with the Medco Health Solutions and WellPoint Health PBM-administered cards, there is no such payment. For some of the purchases made with the other three PBM-administered cards, the retail pharmacy is paid a portion of the difference between the pharmacy’s usual price and the price the cardholder pays. For other purchases made with any of these three cards, the pharmacy is not paid for any of the difference between the usual price and the price the cardholder pays. Under the typical pharmaceutical manufacturer-sponsored card, the manufacturer pays retail pharmacies for a portion of the difference between the usual price the pharmacy charges for a drug and the lower price the pharmacy agrees to charge a cardholder. Some manufacturers set limits on the usual price that will be used to determine this portion. While PBM-administered drug discount cards typically express their savings to cardholders as a percentage off what a cardholder would otherwise pay, the cards differ in how they calculate the price that cardholders pay at a retail pharmacy. For example, all the PBM-administered cards other than Citizens Health express the cardholder’s price as the lower of the average wholesale price minus 10 to 15 percent or the retail pharmacy’s usual price. Citizens Health and the AARP card administered by Express Scripts use similar formulas, but further stipulate that the cardholder’s price must be at least one dollar below the retail pharmacy’s usual price. Drug prices available with pharmaceutical manufacturer-sponsored cards are typically lower than the prices available with PBM-administered cards because a manufacturer-sponsored card’s price is either a percentage off the manufacturer’s list price to wholesalers, which is generally lower than average wholesale price, or a dollar amount for a specified amount of a drug. For example, Aventis cardholders pay no more than 15 percent below its list price to wholesalers for a covered drug, and a Pfizer Share Card enrollee pays $15 for each 30-day supply of any covered drug. With GlaxoSmithKline’s Orange card, a cardholder pays a price that is the pharmacy’s usual price, subject to a limit determined by the manufacturer, minus 25 percent off the company’s list price to wholesalers.
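As a rough sketch of the PBM-card pricing rule described above, the function below takes the lower of the average wholesale price minus a percentage or the pharmacy's usual price, with an optional guarantee of at least one dollar below the usual price; manufacturer-sponsored cards instead use a percentage off the list price to wholesalers or a flat amount per supply. The drug prices, the 15 percent discount, and the annual fee in the example are hypothetical, not terms of any particular card.

```python
def cardholder_price(awp, usual_price, pct_off_awp=0.15, min_below_usual=None):
    """Lower of (average wholesale price minus a percentage) or the pharmacy's
    usual price; some cards also guarantee at least $1 below the usual price."""
    price = min(awp * (1 - pct_off_awp), usual_price)
    if min_below_usual is not None:
        price = min(price, usual_price - min_below_usual)
    return price

# Hypothetical 30-day prices for a single drug.
awp, usual = 95.00, 90.00
price = cardholder_price(awp, usual, pct_off_awp=0.15, min_below_usual=1.00)
per_fill_savings = usual - price
annual_fee = 12.00  # hypothetical annual card fee
print(f"card price ${price:.2f}, saving ${per_fill_savings:.2f} per fill")
print(f"net yearly savings on monthly refills: ${per_fill_savings * 12 - annual_fee:.2f}")
```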
Each manufacturer participating in Together Rx sets the price for each of its drugs independently, while guaranteeing that the price will be at least 15 percent off the manufacturer’s list price to wholesalers. PBM-administered drug discount cards used at retail pharmacies or the PBMs’ mail order pharmacies generally offer savings to consumers because card prices are typically lower than the prices retail pharmacies would otherwise charge. Card savings—the difference between the pharmacy’s usual price and the cardholder’s price—vary, primarily because the usual price varied across the 40 pharmacies we surveyed. For certain drugs at certain pharmacies, however, no savings were achieved through the use of the card because the retail pharmacy’s usual price was lower than the median card price. Savings achieved through a PBM- administered card would be reduced by the annual or one-time fee that the card charges. The range of savings achieved using a PBM-administered drug discount card at a retail pharmacy for a 30-day supply of the nine drugs we examined varied within and across geographic areas, primarily because of differences in the usual prices charged by the pharmacies. Choice of pharmacy rather than choice of card had more effect on how much a person saved with a discount card. (See appendix II for more information on the median retail drug card prices and the median retail pharmacy prices in the three areas we examined.) Median savings available with a PBM-administered card in the Washington, D.C. pharmacies ranged from $2.09 to $20.95 for the nine drugs. All 14 of the surveyed pharmacies offered a 10 percent senior discount. Card savings amounted to an additional 1.7 percent to 43.9 percent off the median pharmacy price. The highest percentage discount was for the two generic drugs in our sample (atenolol and furosemide), although because these were the lowest priced drugs, the dollar savings were among the lowest in the sample. The substantial price differences across pharmacies affected the card savings for a given drug. For example, the noncard price for a 30-day supply of 200 milligrams of Celebrex at the surveyed Washington, D.C. pharmacies ranged from $74.33 to $95.59. Median savings in North Dakota ranged from $0.54 to $7.72 for the nine drugs or from 1.3 percent to 42.3 percent off the median pharmacy price. Only 3 of 13 pharmacies offered a senior discount (two offered 10 percent and one offered 5 percent). At one of the pharmacies offering a senior discount, some card prices for eight of the nine drugs were higher than the pharmacy’s usual price for those drugs. In California, Medi-Cal, the state’s Medicaid program, requires retail pharmacies that participate in the program to offer the Medi-Cal price to Medicare beneficiaries who do not have prescription drug coverage. At the 10 Medi-Cal-participating pharmacies, savings for seven of the nine drugs ranged from $0.44 to $13.06 or from 0.7 percent to 11.1 percent off the median pharmacy price. The Medi-Cal prices for the other two drugs at these pharmacies were lower than the median drug card prices for these drugs so the use of the card offered no savings. At the two pharmacies that did not participate in Medi-Cal, but offered a 10 percent senior discount, the savings were similar to those at the Medi-Cal participating pharmacies, although one pharmacy’s prices for four drugs were lower than the median card prices. 
Savings at the other pharmacy, which did not offer a senior discount or participate in Medi-Cal, were considerably higher. Mail order prices for a 30-day supply of a drug with a PBM-administered discount card were typically lower than the retail pharmacies’ usual price without a discount card, resulting in greater card-related savings. The mail order prices with a discount card resulted in savings ranging from $6.30 to $27.56 for eight of the nine drugs we examined at the Washington, D.C. pharmacies we surveyed. The average retail pharmacy usual price without a discount card for the other drug was lower than the mail order price with a card. In North Dakota, the savings realized by using a PBM- administered drug card to purchase the nine drugs from a mail order pharmacy ranged from $0.63 to $17.58. In California, mail order prices using a PBM-administered drug card were lower than the Medi-Cal price for eight of the nine drugs we examined, resulting in savings ranging from $1.03 to $19.67; the Medi-Cal price was lower than the mail order drug card prices for the other drug. Mail order savings at the three California pharmacies that were not participating in Medi-Cal ranged from $3.12 to $104.32, except at one of the pharmacies offering a 10 percent senior discount where the retail price for two drugs was lower than the mail order price. Because it generally offers lower prices than retail pharmacies, mail order can be an attractive option for purchasing drugs for the chronic conditions common among the elderly, such as diabetes, arthritis, and high blood pressure. Two PBM administrators noted, however, that many elderly people cannot afford to buy at one time the 90-day supply of a drug that mail order pharmacies typically dispense. Consumers who use a mail order option can purchase drugs at Internet pharmacies without a discount card. Our comparison of prices using data from November 2001 found that the median mail order price using a PBM- administered discount card was generally lower than Internet pharmacy prices for a drug. But we also found at least one Internet pharmacy at that time that offered a price lower than the median discount card mail order price for 8 of 17 drugs that we examined. The savings from using a card are reduced if the card charges a fee. None of the pharmaceutical manufacturers’ cards charges a fee. The PBMs whose cards we examined generally charged a one-time fee or an annual fee. For example, the discount card we examined from Wellpoint Health charges a one-time fee of $25 for an individual and about $50 for a family. The Citizens Health card costs $12 a year for an individual and $28 a year for a family. As of October 2002, 16 states had passed laws regulating one or more aspects of prescription drug discount card programs (see table 1). While the scope of each of the laws varies, the sponsors of several of the laws have characterized their purpose as consumer protection. Thirteen of the states require that a notice appear prominently on the card declaring that it does not represent insurance coverage. Eleven of the states require that the reporting of discounts offered by the cards not be misleading, deceptive, or fraudulent. New Hampshire’s law, for example, requires that the advertising for any discount card expressly state that the discount is available only at participating pharmacies. The law was enacted in May 2001 after some consumers complained about confusion in how and where discount cards could be used. 
The sponsor of the New Hampshire law told us that she heard from consumers in her state who said they would pay for a card over the telephone, only to later find that the nearest pharmacy honoring it was 50 to 100 miles away from their home. Twelve states require that the discounts be specifically authorized by separate contracts between the card administrator and each participating pharmacy or pharmacy chain. South Dakota’s law, which includes such a provision, was enacted following complaints from pharmacists that companies were selling cards that promised discounts at various pharmacies, but that the companies did not have agreements with all of those pharmacies to actually provide the discounts. The sponsor of the South Dakota law said some cardholders claimed that certain pharmacies that the card’s sponsor advertised as accepting the card did not do so. The sponsor of the law told us that it is intended to protect consumers and pharmacies from deceptive sales practices by drug discount card sponsors. Mississippi’s drug discount card law bars a program administrator, such as a PBM, from requiring pharmacies to accept a card as a condition of receiving a contract for the PBM’s other business, unless the administrator “pays a portion” of the cost of the discount given by the pharmacy. According to a representative of the Mississippi Attorney General’s office, which is responsible for enforcing the law, the state has not defined “portion” in regulation and the meaning of the term has not been the subject of litigation. We provided a draft of this report for review to the five PBM administrators whose cards we examined, four of whom responded. We also obtained comments from a pharmaceutical manufacturer that sponsors its own card and participates in the Together Rx card, and one independent expert reviewer. They provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce this report’s contents earlier, we plan no further distribution until 30 days after its issue date. At that time, we will send copies to the Administrator of CMS, the PBMs that administered the cards we examined, the pharmaceutical manufacturers that sponsored cards we examined, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-7119 or John Hansen at (202) 512-7105. Major contributors to this report were Roseanne Price, Michael Rose, and Jeff Schmerling. Washington, D.C.
While prescription drugs have become an increasingly important part of health care for the elderly, more than one-quarter of all Medicare beneficiaries have no prescription drug coverage. Over the past decade, private companies and not-for-profit organizations have sponsored prescription drug discount cards that offer discounts from the prices the elderly would otherwise have to pay for their prescriptions. These cards are typically administered by pharmacy benefit managers (PBM). Pharmaceutical manufacturers also sponsor and administer their own discount cards. The Administration has been interested in endorsing specific drug cards for Medicare beneficiaries to make the discounts more widely available. Legislative proposals in the Senate and House of Representatives have included drug cards as a means to lower prescription drug prices for Medicare beneficiaries.
GAO was asked to examine how existing drug discount cards work and the prices available to card holders. Specifically, GAO evaluated the extent to which PBM-administered drug discount cards offer savings off non-card prices at 40 pharmacies in California, North Dakota, and Washington, D.C., and the differences between PBM-administered cards and cards sponsored by pharmaceutical manufacturers. Medicare beneficiaries can receive prices with prescription drug discount cards at retail pharmacies that are generally lower than those available to seniors without cards. Prices available for a particular drug tend to be similar across PBM-administered cards. Savings from PBM-administered cards, however, can differ because retail pharmacy prices vary widely. For example, in Washington, D.C., which had the highest median retail pharmacy prices of the three areas GAO surveyed, median savings using a PBM-administered card ranged from $2.09 to $20.95 for a 30-day supply of the nine drugs frequently prescribed for the elderly that GAO examined. This was after accounting for the 10 percent discount for senior citizens given by each of the 14 surveyed pharmacies. Savings in California with the use of a card tended to be lower because 10 of the 13 California pharmacies GAO surveyed participated in the state's Medicaid program (Medi-Cal) and are required to give Medicare beneficiaries the Medi-Cal price. For seven of the nine drugs, savings ranged from $0.44 to $13.06. For the other two drugs the cards offered no savings at Medi-Cal-participating pharmacies because the Medi-Cal prices were lower than the median price available with a PBM-administered card. Savings in North Dakota for the nine drugs ranged from $0.54 to $7.72 even though 10 of the 13 pharmacies there did not offer a senior discount. Any savings achieved with a card are reduced by the annual or one-time fee charged by the PBM-administered cards. Prices available with a pharmaceutical-manufacturer-sponsored card for a particular drug are typically lower than prices obtained using PBM-administered cards, and are often a flat price of $10 or $15. PBM-administered cards differ from pharmaceutical-manufacturer-sponsored cards with respect to eligibility and the range of drugs they cover, as well as the price available with the card. PBM-administered discount cards are available to all adults and can be used to purchase most outpatient prescriptions. Pharmaceutical-manufacturer-sponsored cards are available only to Medicare beneficiaries with incomes below a certain level who have no prescription drug coverage and can be used to purchase only outpatient prescription drugs produced by the sponsoring manufacturers. |
First enacted in 1975, IDEA entitles children with disabilities to a free appropriate public education designed to meet the unique needs of each child. To be eligible for IDEA funding, states are required to provide special education in the least restrictive environment; meaning that, to the maximum extent appropriate, these children are to be educated with other children who do not have disabilities. However, to meet the diverse needs of children with disabilities, states must ensure that school districts provide a continuum of alternative placements, including regular classrooms, special classrooms, and special schools. The removal of children with disabilities from the regular classroom can occur only when the nature or severity of the child’s disability is such that education in regular classes with the use of supplementary aids and services cannot be achieved satisfactorily. IDEA requires that the services provided to each individual student be determined through an individualized education program (IEP) that describes the child’s present levels of academic achievement, goals for progress, and the special education and related services needed to attain those goals. The IEP is developed by a team of teachers, parents, school district representatives, and other educational professionals. This team must meet to develop the initial IEP within 30 days of determining that a child needs special education and related services, and it must continue to meet at least once a year to review the IEP to determine if goals are being met and to make any necessary changes. IDEA also provides for procedural safeguards, including that the parents of a child with a disability have the right to inspect and review educational records with respect to the identification, evaluation and educational placement of the child, and to obtain an independent educational evaluation at public expense if the parent disagrees with an evaluation obtained by the school district. IDEA and related regulations provide methods for resolving complaints between parents and school districts, including mediation, due process hearings, and state complaint procedures. IDEA is administered at the federal level by Education’s Office of Special Education Programs in the Office of Special Education and Rehabilitative Services. Part B of IDEA authorizes funding for federal grants to states to enable school districts to provide services for students with disabilities aged 3 through 21. IDEA Part B grants to states are distributed among states using a “base grant”—the amount received by the state for fiscal year 1999—and any remaining funds are distributed based on states’ child population and poverty rates. States distribute the funds to school districts similarly, starting with a base grant and then using a formula based on school enrollment and poverty. According to Education data, in fiscal year 2015, Congress appropriated approximately $11.5 billion under IDEA Part B grants to states, serving nearly 6.6 million children with disabilities (see table 1). In fiscal year 2009, the Recovery Act appropriated federal funding for IDEA, Part B that was more than double the 2008 amount. Since then, federal IDEA, Part B appropriations have been relatively constant. Education data indicate that the federal share of special education spending has generally declined since 2005, holding steady at about 16 percent since 2010, based on the national average per pupil expenditure. 
This indicates that the state and local share of special education spending has done the opposite: generally increasing since 2005 and holding steady at about 84 percent since 2010. State and local governments are responsible for funding most of the costs of special education and other K-12 programs, relying primarily on state income and sales tax, as well as local residential and commercial property taxes. As a result of the 2008 national recession, however, state and local revenues fell, resulting in cuts to education and other areas of spending. Research has shown that state funding for elementary and secondary education has been slow to recover from the 2008 recession and that long-term budget challenges are likely to persist. As shown in table 1, Education data also indicate that there has been a decline in the number of students with disabilities being served since 2005. Some researchers have suggested this decline may be attributed, in part, to greater emphasis on intervention services that reduce the need for special education among children who struggle but may not need special education with the proper supports. IDEA funds help cover the costs of educating children with disabilities but cannot be used to take the place of state and local funding allocated to special education programs. IDEA’s local MOE requirement generally prohibits districts from reducing their expenditures on special education and related services below the level of the previous year. Education provides districts with various methods for calculating their MOE amount: They can use only local funding or both state and local funding and can base their calculation on either the total or per-pupil amount. Also, a district may be able to reduce its expenditures and still meet the MOE requirement if it qualifies for certain allowable exceptions or the funding adjustment specified in IDEA law and regulations (see fig. 1). IDEA also contains a state MOE requirement for state funding of special education. The standard for state compliance with MOE requires that states maintain the same level of financial support provided (made available) for special education and related services from year to year, regardless of the amount actually expended. Education may waive the states’ MOE requirement for exceptional or uncontrollable circumstances, but there is no comparable provision allowing Education or a state to waive the districts’ local MOE requirement. Other federal funding streams also have MOE requirements, including several Elementary and Secondary Education Act programs. However, in contrast to the IDEA local MOE requirement, which is set at 100 percent of prior year’s spending, other education programs—including the Title I, Part A, Education for the Disadvantaged program (the largest federal funding stream for K-12 education)—have a local MOE requirement set at 90 percent of prior year’s spending for the amounts that school districts must provide in a given fiscal year from state and local sources (see table 2). Based on our previous work on federal grant design, as well as more recent work on MOE provisions under the Recovery Act, we have found MOE requirements to be important mechanisms for helping to ensure that federal spending achieves its intended effect. 
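A simplified sketch of the local MOE comparison described above (and depicted in fig. 1) follows. The dollar figures are hypothetical, the eligibility rules for exceptions and the funding adjustment are far more detailed in the regulations, and the sketch assumes a district needs to satisfy only one of the four alternative calculation methods.

```python
def meets_moe(prior_year, current_year, allowable_reductions=0.0):
    """Current-year spending must be at least the prior year's level, after any
    reductions covered by allowable exceptions or the funding adjustment are
    subtracted from the prior-year amount."""
    return current_year >= prior_year - allowable_reductions

# Four alternative comparisons: local-only or state-plus-local funds,
# each measured in total or per pupil (hypothetical dollar figures).
prior = {"local_total": 2_000_000, "combined_total": 3_500_000,
         "local_per_pupil": 8_000, "combined_per_pupil": 14_000}
current = {"local_total": 1_950_000, "combined_total": 3_550_000,
           "local_per_pupil": 7_900, "combined_per_pupil": 14_200}

# This district falls short on the local-only comparisons but passes the
# state-plus-local ones, so it is treated here as meeting MOE.
compliant = any(meets_moe(prior[m], current[m]) for m in prior)
print("meets local MOE:", compliant)
```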
However, we have also found that without sufficient flexibility, these requirements can reportedly have adverse effects on state and local governments by distorting state and local priorities, penalizing spending reductions arising from fiscal crisis or increased efficiencies, and discouraging program innovation and expansion. In light of these concerns, in a previous report we concluded that federal MOE requirements should be sufficiently flexible to help mitigate some of the potentially adverse effects of the requirement on state and local governments and asked the Congress to consider enacting a standard maintenance of effort requirement across federal programs to help ease confusion among and potential adverse effects on recipients of federal funds. States reported that their school districts generally met the MOE requirement without using the allowable exceptions or funding adjustment but that some faced a variety of challenges in doing so. The key challenges in meeting MOE that districts cited involved state and local budget or cost reductions, which are not among the allowable exceptions for districts to reduce spending. No national level data exist on the extent to which districts are meeting MOE, but according to the responses to our state survey, nearly all school districts met MOE in the 2012-13 school year (the most recent data available in all states at the time of our survey). States reported less than 1 percent of districts nationwide failed to meet MOE in 2012-13, and all these districts were located in 14 states (see fig. 2). In addition, states indicated that the shortfalls for all districts identified as failing to meet MOE in 2012-13 amounted to a total of about $877,000 nationwide as of May 2015. However, this number is understated because 2 of the 14 states with districts failing to meet MOE were unable to report the amount of their districts’ shortfalls at the time. State responses to our survey also indicate that most districts met MOE in 2012-13 without using allowable exceptions or the funding adjustment. Forty states responded to our survey that half or more of their districts met MOE without using either of these provisions (see fig. 3). District responses to our 2015 follow-up survey of those districts that had anticipated trouble meeting MOE in 2011-12 largely mirrored the states’ survey responses on meeting MOE. Of the 87 districts that responded to this survey, only 7 reported that they had failed to meet MOE in school years 2011-12, 2012-13, or 2013-14. Of the 68 districts that reported meeting MOE all 3 years, 16 did so using exceptions or the funding adjustment in at least one of those years. “Our district has staff retire nearly every year, which makes the exception for voluntary staff departure the easiest exception for us to claim.” “Even when we were eligible to use the funding adjustment, the allowable spending decrease would have been negligible. We decided it was not worth the expenses and amount of time it would have taken us to claim it.” The large number of districts meeting MOE without the use of exceptions or the funding adjustment may be partially explained by rising special education costs. According to some state and district officials we interviewed, rising costs have made it easier for districts to meet MOE in the last few years. In addition, some district officials said that documenting their eligibility to the state for use of exceptions is burdensome, which may lead some districts to avoid using them. 
Those that did use them, however, relied on some exceptions more than others (see fig. 4). Regarding the funding adjustment, many districts have not used it in the last few years because they were not eligible to do so. Although states reported on our survey that most of their districts met MOE in 2012-13, almost all states indicated that some districts faced challenges, and the number of states reporting that half or more of their districts have faced or may face challenges is increasing (see fig. 5). Similarly, of the districts that anticipated trouble meeting MOE in 2011-12, about half of those responding to our follow-up survey (44 of 87) said that they ultimately did face challenges meeting MOE in 2011-12, 2012-13, or 2013-14. In our characteristics analysis comparing districts that had and had not anticipated having trouble meeting MOE in 2011-12, we identified small, but statistically significant differences with respect to declining enrollment and the extent to which districts were rural. This analysis was based on the nationally representative sample of districts for our 2011 survey and is therefore generalizable to districts nationwide in that year. (For a more detailed discussion of this analysis and the results, see app. IV.) Among districts that reported facing challenges in meeting MOE, officials described a number of reasons for those challenges that were not covered by allowable exceptions. As reflected in responses to our district survey, illustrated in figure 6 below, and our discussions with selected districts, often the key reasons that districts faced challenges or failed to meet MOE involved circumstances that decreased special education spending from one year to the next but were not covered by the allowable exceptions and, in some cases, were outside the districts’ control. Specifically, in response to our survey and in interviews at selected sites, district officials identified budget or revenue reductions and various circumstances related to cost reductions—such as local actions to implement efficiencies—as key challenges in meeting MOE. Districts surveyed most frequently cited reductions in state funding of K- 12 education and reductions in the state contribution to funding for special education as a factor in not meeting MOE (see fig. 6). The MOE requirement does not include an exception for such challenges and is, in fact, designed to protect special education funding in such circumstances. Officials in one Texas district we interviewed told us their state funding had been cut for the 2011-12 school year, in spite of an increase in the district’s student population. In response to reduced funding, the district made cuts to special education as well as general education spending, and as a result it had come close to not meeting MOE for the 2011-12 school year. A Virginia district we interviewed met MOE in 2011-12 and 2012-13, but officials said it has been difficult to maintain special education expenditures because the county cut its funding in response to declines in local revenue. District officials reported three different types of circumstances that can result in reduced costs that are not allowable exceptions to MOE: (1) local actions to increase efficiencies, (2) state policy changes to staff salary or benefits, and (3) a gradual decline in enrollment over multiple years. Increased efficiencies. 
District officials we surveyed and interviewed described various challenges stemming from efforts to implement efficiencies, even when these changes had no effect on service delivery. In response to our survey, 17 districts reported challenges due to local actions to increase efficiencies in the provision of direct services, and 14 districts reported challenges due to efficiencies in administrative functions (see fig. 6). For example, a New Jersey district official commented on our survey that his district failed to meet MOE after reorganizing to share the cost of a special education director with another district. Similarly, officials we interviewed in one Michigan district told us that they negotiated a staff pay cut of 10 percent due to a budget deficit. As a result, they were paying 10 percent less for the same services. The district met MOE only because Michigan elects to test its MOE compliance in aggregate with other districts, according to local officials who oversee that district and others. Officials in another Michigan district we interviewed said that when they had difficulty hiring a staff psychologist they had to contract for psychologist services, which turned out to be less costly than what the district had spent on those services previously, creating challenges in meeting MOE. A Texas district official told us that when their state funding was cut, they reduced costs by increasing the number of classes taught by middle and high school teachers, allowing them to cut 237 staff positions without eliminating any courses, programs, or special education services. Although the district continued to deliver the same services at a lower cost, it came close to not meeting MOE.

State salary or benefit changes. Survey respondents from 15 districts cited state policy changes to teacher or staff salary or benefits as a challenge (see fig. 6). All districts we interviewed in Michigan told us they faced challenges meeting MOE when the state legislature passed a law capping the amount that public employers, such as school districts, could contribute toward employee health benefits. This effectively reduced the amount districts spent on special education teachers and other staff, the majority of any district's special education spending. One Michigan district we visited was very small and had only one special education teacher. As a result of the new state cap, officials said that their teacher decided to opt out of the district's health insurance plan for the 2013-14 school year, which decreased their special education spending by about $10,000. The district did not meet MOE on its own for 2013-14, but it did meet it in aggregate with other districts (the level the state tests for compliance), according to local officials.

Gradual enrollment decline. Officials we interviewed in several selected districts said meeting MOE could be a challenge if a district experienced a gradual decline in enrollment over multiple years that eventually resulted in needing fewer special education teachers on staff. In the three states we visited, officials interpreted the decreasing enrollment exception as applying only to year-to-year decreases. One local official in Michigan who oversees several school districts said that they have experienced gradual reductions in special education caseloads as a result of population decline that do not always necessitate immediate staff reductions.
He explained that if districts let one staff member go at the end of each year, it would be easier to meet MOE using either the per capita calculation or the exception for a decrease in enrollment (see fig. 1 for a description of MOE calculations and exceptions). However, districts often wait 3 or 4 years until they reach a crisis point and have to lay off two or more special education staff, which makes meeting MOE more difficult. Officials in one of the districts he oversees explained that they would not eliminate staff positions due to the loss of only two or three students, but over the course of 5 years, a gradual decline could justify a decrease in staff.

State and district officials had mixed views on MOE's effects on services for students with and without disabilities. MOE is one of multiple safeguards established under IDEA (which also provides protections such as dispute resolution) to protect special education funding, and while some officials reported positive effects, others said the requirement can sometimes have the unintended consequence of deterring districts from innovating and implementing efficiencies in special education services. Additionally, some states and districts pointed out that prioritizing special education spending to meet MOE during a period of budget constraints resulted in cuts to general education spending that affected services for all students, including the many students with disabilities who spend much of their days in general education classrooms. Some officials emphasized the requirement's protective role:

"The IDEA MOE requirement has helped to protect our schools from state general fund budget reductions."

"If the MOE requirement didn't exist … based on my own opinion I believe services for children with disabilities would be reduced. While accountants and even other personnel may not like that it exists, it does protect children with disabilities."

District officials often told us that they viewed MOE as a secondary consideration and not a factor in determining the level of services planned for special education students. Instead, they said they fund special education services based on the IEP process and are required by law to provide the services outlined in those plans regardless of MOE. However, a disability advocate we interviewed noted that the amount of services districts prescribe in the IEP is often determined by the funding available, and MOE helps to prevent districts from decreasing those funds. Other officials described the requirement as a barrier to innovation:

"MOE hinders our ability to offer innovative methods for delivery of services, if the cost of the new, innovative method is less than in the previous year."

"The MOE requirement also fosters a lack of innovation in the program [special education] for fear of adding to the spending base."

At the same time, some state and district officials we interviewed said MOE can discourage efforts to implement innovations or expand services. For example, some district officials we spoke with said that because of MOE, they do not want to commit to a higher level of spending to implement innovative services, despite other provisions in IDEA that are intended to encourage innovation. An official in one Texas district said that although their special education director recommended expanding their integrated athletics program for children with disabilities, they chose not to because they did not want to commit to the increased costs in an environment of ongoing budget uncertainty.
Similarly, a Michigan district official said that program innovations, such as introducing new technology or new co-teaching methods, can be costly to implement and cannot be piloted and discontinued if unsuccessful without decreasing spending and jeopardizing the district's ability to meet MOE. This concern is consistent with our past work that concluded that federal MOE requirements without sufficient flexibility can discourage program expansion and innovation. Moreover, as noted earlier, implementing efficiencies can create challenges in meeting MOE. As a result, some district officials said that MOE can discourage efforts to implement efficiencies that could help reduce costs and can lead to unnecessary spending to comply with the requirement. For example, one Wisconsin district official commenting on Education's 2013 NPRM said that because of state legislative changes that required reductions in their contributions to teacher benefits, they had to find other ways to spend money on special education to meet MOE regardless of whether the expenditures were needed. Further, a Virginia state education official we interviewed said that Virginia districts feel penalized for complying with IDEA's directive to serve more students with disabilities in general education classrooms since this more inclusive model can be less costly than placing all these students in special education classrooms; yet the MOE requirement is not flexible enough to allow for this without putting districts at risk of failing to meet MOE. At the same time, a disability advocate we interviewed noted that if districts do have cost savings in special education, they should reinvest those savings back into special education.

Similarly, in our 2015 follow-up survey of districts that had indicated in 2011 that they anticipated trouble meeting MOE in 2011-12, district officials reported mixed views about whether MOE had positive or negative effects on students. More districts said MOE had a positive effect on services for students with disabilities than for students without disabilities. But, even for students with disabilities, the majority of districts said it had a negative effect or no effect on services for these students (see fig. 7). Several district officials noted that protecting special education funding does not necessarily equate to protecting or improving special education services. For example, a Minnesota district official said the 100 percent MOE requirement may discourage districts from striving to make students with disabilities as independent as possible if such actions would reduce special education spending. He was concerned that not enough attention was being given in the IEP process to encouraging greater independence and inclusion and that the process was being driven by maintaining expenses rather than responding to the evolving needs of students. Also, a Texas district official commented that a district could still meet MOE if it were to give a 3-percent raise to special education educators while reducing costs related to services and programs by an equal amount.

When districts experience reductions in state and local funding and are forced to make cuts, they generally must prioritize special education to meet the MOE requirement, which can result in cuts to general education services.
The 100 percent requirement is significantly stricter than the 90 percent MOE requirement established by law for other K-12 education programs and provides districts less latitude to adjust spending to minimize negative effects on services. Moreover, while state education funding may be beginning to recover from the recession, many districts have experienced severe budget challenges in the last 4 years. For example, according to Education data for fiscal year 2011, 33 states had decreases in their state and local K-12 per-pupil revenue, and for 20 of these states, the decreases ranged between 2 and 13 percent. In our 2015 district follow-up survey, about half of those responding (44 of 87) reported reducing general education spending at least once during school years 2011-12 through 2013-14. Of those reporting reductions, about half (21) said MOE was one of several reasons for the cut, and one district reported that MOE was the primary reason in at least one of these school years (see fig. 8). In addition, one Virginia district official we interviewed noted that if a district is penalized for not meeting MOE, the penalties hurt students by further reducing already constrained district resources. As figure 9 shows, cuts to general education spending can affect services in a variety of ways. Responses from the 22 districts that attributed general education service cuts, at least in part, to MOE, indicated that these cuts often led to reductions in teachers and support staff, classroom materials, and technology. Some district officials we interviewed noted that such cuts can result in increased classroom sizes, decreased class offerings, and reduced extracurricular activities. General education service reductions negatively affect all students—both those with and without disabilities. But this may be especially true for the growing number of special education students being served in general education classrooms—another unintended consequence of MOE. Education recently reported that the percentage of special education students spending at least 80 percent of their school day in general education classrooms increased from 33 percent in 1990–91 to 61 percent in 2012–13. One Virginia district official we interviewed said students with disabilities served in general education classrooms were particularly affected by increased classroom sizes resulting from reductions in general education spending. Education’s delayed monitoring feedback and evolving policies over the past decade (see fig. 10) have hindered states’ efforts to facilitate compliance with the MOE requirement, according to state officials. States and school districts cited the need for additional technical assistance, information sharing, and training to help them meet—not just understand—this complex requirement. Although Education carried out its fiscal monitoring reviews of states’ compliance from 2010 through 2012, it has yet to issue feedback letters to nearly half the states—keeping these states waiting at least 3 years for their monitoring results. Education began its latest round of IDEA programmatic and fiscal monitoring in 2010, initiating the new monitoring cycle with verification visits to 16 states. Then in 2011, Education removed the fiscal component from its visits and merged the findings from that component with the fiscal monitoring it was conducting for the Recovery Act. 
From that point on, the merged fiscal monitoring effort was conducted through desk audits rather than site visits, and as a result, Education officials told us that they were able to complete the bulk of these reviews of all 50 states by the end of 2012. In 2010, when Education began its latest round of monitoring, it had a performance standard to provide feedback to states within 88 days of a verification visit. However, Education did not set a performance standard for the merged fiscal monitoring reviews. We found that although the bulk of these reviews were completed by the end of 2012, as of August 2015, 22 states were still waiting for Education to provide results letters telling them whether their fiscal monitoring systems comply with federal requirements. The delay of 3 years or more in providing written feedback to states significantly exceeds the timeframes Education had considered reasonable for its previous verification visits and is inconsistent with federal government standards that call for the findings of audits and other reviews to be promptly resolved.

States reported that Education's delayed feedback has kept them from taking corrective actions in a timely way. For example, an Oregon state official we interviewed as part of our survey follow-up told us Education conducted its fiscal monitoring of the state in 2010 but did not provide feedback requiring corrective action until the fall of 2014. The state official told us the state is in the process of taking corrective actions in response to Education's findings but said it could have taken such steps earlier to better facilitate district compliance had Education provided more timely feedback. Similarly, state officials in Delaware told us they experienced delays of more than a year while in discussions with Education on the state's proposed changes to bring the state's MOE calculation methodology into compliance. The state put its monitoring of MOE on hold until Education approves its proposed changes because those changes will require updates to the state's information technology system, which will take time and resources. In the meantime, state officials said they are unable to determine whether districts are meeting the MOE requirement.

Education officials told us competing priorities and staffing issues contributed to the delays in providing states feedback. For example, they said they were implementing a new accountability system and finalizing revised regulations for MOE that Education issued in April 2015. They also said they were working to eliminate a backlog of independent audit reviews that contained findings pertaining to IDEA programs. Education officials also said that, while dealing with these competing priorities, multiple staff involved in the monitoring process left Education and that it took time to replace them. At the time of our review, Education officials said they would like to release the findings for the remaining states by fall of 2015. However, some fiscal monitoring letters were undergoing departmental review—one of the final steps before release—while others were still in the drafting stage. Also, because so much time had elapsed since its monitoring reviews, Education was contacting some states again to ensure it had up-to-date information about those states' monitoring systems and to confirm that the findings from its reviews were still relevant. As of August 2015—the most recent data Education provided—22 states were still waiting for feedback.
Education officials told us they are planning to begin piloting their next cycle of IDEA monitoring during fiscal year 2016. They said that the new monitoring system will be a risk-based system, and, while they expect to review all states to determine which states require monitoring, they have not established a schedule for completing reviews of all states within a specified period of time, nor for providing feedback to states. Education officials told us they did not yet have a written plan, including timelines and performance measures, for implementing the new monitoring process. Prior to Education’s April 2015 final rule revising its MOE regulations, states said they experienced confusion and uncertainty about Education’s policies, making it difficult for them to help districts comply with the MOE requirement. They identified two areas, in particular, as having caused the most confusion and frustration: (1) the existence of two MOE standards and (2) the level of spending required after failing to meet MOE. Standards for eligibility and compliance. In two states, the state officials we interviewed indicated uncertainty about the need to meet MOE based on two different standards: eligibility (based on a district’s budgeted amounts) and compliance (based on a district’s actual expenditures). Apparently other states had also been confused about this. Despite Education’s policy letters that attempted to clarify the existence of two standards, in the preamble to its 2013 NPRM, Education acknowledged that some states had still not understood that two different standards were in place based on the wording of the 2006 regulations. To address this issue, in its April 2015 final rule, Education made revisions to clearly label and explain the differences between the two standards. Officials in one state also told us that implementing the eligibility standard would require them to modify their data systems. “Subsequent years” rule. Some states and special education stakeholders had generally understood MOE to require a district to maintain the level of spending from the previous year, even if they had failed to meet MOE in that year (referred to as the “subsequent years” rule). The 2006 regulations did not specifically address this issue, but in a 2011 policy letter, Education confirmed this general understanding. In April 2012, however, Education reversed this policy and instead said that a district’s required level of spending after failing to meet MOE is equal to the amount it should have expended in the prior year had it met MOE. Congress included similar language in two separate appropriations acts, and Education’s April 2015 final rule included a provision codifying this interpretation and examples of how states should apply this rule. Officials in two states told us this change in the guidance frustrated their efforts to monitor districts’ compliance with MOE, and in one state, required them to change their tracking systems. In its 2013 NPRM, Education acknowledged that the MOE requirement is complex and that a significant lack of understanding of the requirement had persisted. To help states navigate this complexity and to promote a better understanding of MOE, the April 2015 final rule includes several tables detailing how to comply with the requirement. In addition, Education officials said they have provided a webinar, presentations at Education’s IDEA leadership conference, and a written question and answer document to help explain the revised regulations. 
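A brief worked example may help illustrate the "subsequent years" rule described above. The sketch below, in Python, uses hypothetical dollar amounts and ignores the per capita calculations, the allowable exceptions, and the funding adjustment; it is intended only to show how the required spending level carries forward after a district fails to meet MOE.

```python
# Hypothetical illustration of the "subsequent years" rule; dollar figures
# are invented and the MOE test is simplified to a single total comparison.

required_2011_12 = 1_000_000  # level the district was required to maintain
actual_2011_12 = 950_000      # district spent less, so it failed MOE by $50,000

# Education's April 2012 policy, later codified in the April 2015 final rule:
# the following year's required level is the amount the district SHOULD have
# spent, not the lower amount it actually spent.
required_2012_13 = required_2011_12

# Under the superseded 2011 reading, the baseline would instead have dropped
# to the district's actual (lower) expenditure.
superseded_baseline_2012_13 = actual_2011_12

print(f"Required level for 2012-13: ${required_2012_13:,}")
print(f"Baseline under the superseded 2011 interpretation: ${superseded_baseline_2012_13:,}")
```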
Education has provided various types of technical assistance regarding compliance with MOE, as required under IDEA, but more may be needed. While Education has provided states with assistance to help them understand the MOE requirement, responses to our surveys indicate that states and school districts could benefit from additional technical assistance, information sharing with their peers, and training to help districts meet—not just understand—the MOE requirement. Prior to 2014, Education funded multiple regional resource centers that provided webinars and established communities of practice focused on issues including MOE. In our survey, many states commented that they had relied on these centers for support related to the MOE requirement.

In 2014, Education moved away from a regional assistance model to a more centralized approach and established two new national technical assistance centers, the Center for IDEA Fiscal Reporting (CIFR) and the National Center for Systemic Improvement (NCSI). Education officials told us that NCSI will provide some general fiscal technical assistance to states, but that any detailed technical assistance on the MOE requirement will come from CIFR. CIFR is charged with providing technical assistance to states on collecting and reporting special education fiscal data. It plans to work collaboratively with states and other federally funded technical assistance centers to (1) improve the capacity of states to collect and report accurate fiscal data and (2) increase states' knowledge of the underlying fiscal requirements and the calculations necessary to submit valid and reliable data, according to its website. However, it is too early to assess how effective CIFR will be in achieving these goals. Since opening its doors in November 2014, CIFR has launched its website, conducted introductory webinars targeted to states, and established a listserv that 45 of 60 states and entities have joined. It is collecting data from the states to develop a database about each state's fiscal reporting, which it will use to plan its technical assistance activities, and it is holding regional meetings and communities of practice for states to exchange information on various fiscal and programmatic issues, including those related to the MOE requirement. Education officials told us they anticipate that CIFR will be able to provide technical assistance and facilitate information sharing among states related to the MOE requirement.

The results of our surveys and interviews indicate that states value Education's technical assistance (see fig. 11), and several states added that they would like Education to provide additional technical assistance, training, tools, and opportunities to share information across states. While Education is charged with providing technical assistance to the states, the states, in turn, are charged with providing assistance and support to their districts. In response to our state survey, states reported providing technical assistance, training, and tools to school districts to assist them in complying with the MOE requirement. Well over half of the school districts we surveyed indicated these resources were useful. However, in our interviews and in our 2015 follow-up survey, school districts reported that they would like additional training to help them comply with the MOE requirement.
For example, some district officials we interviewed specifically stressed the need to have training for special education directors, finance or business managers, and superintendents because each plays a role in decisions affecting MOE compliance. Some districts we surveyed commented that they would like assistance in managing and tracking their MOE status throughout the year. One district specifically noted that they wanted to be more proactive in ensuring compliance, while others wanted more transparency in how their state calculates MOE. In Virginia, though districts submit data to the state annually, the state's system allows districts to enter their expenditures throughout the year to track MOE. Officials in one district we interviewed said this was extremely helpful, allowing them to track their compliance with MOE on an ongoing basis. One district official commented:

"We experienced cuts in state funding for special education that dropped us below the MOE required for state and local expenditures combined, but our local funding for special education increased during the same period. We think we could have met MOE if we had been able to use one of the local only calculations."

Use of the four calculations. Some states said that they do not routinely use all four calculations to determine MOE, in some cases because the data on state and local funds that districts use for MOE calculations are pooled together. This means that districts in their states do not have ready access to the data needed for the two MOE calculations based on local expenditures separately. For example, an Arizona state education agency official we interviewed said the state does not maintain separate records for state and local expenditures; therefore, the state would have to redesign its system in order to use the local-only calculation. Further, a Tennessee official commented that it would be helpful if Education could assist in identifying ways to track local funds in states where state and local funds are not maintained separately. More information sharing across states could be helpful, as well: In Texas, where the state system also does not separate state and local expenditures, the state educational agency had recently developed a way for districts to impute their local-only expenditures using a newly created state tool, which could potentially benefit other states. Another district official commented:

"We would like to be able to claim as an exception those teachers who leave the special education program but remain in the district in other positions. It is more cost-effective for us to retain trained special education teachers than to let them go because a student who needed that expertise is no longer served by the district."

Use of exceptions. Similarly, the ability to use exceptions could also be important to districts facing challenges in meeting MOE, because these provisions may enable districts to reduce their expenditures and still meet the MOE requirement. However, we found that some state officials had questions about how to implement these provisions. For instance, a Rhode Island state official commented on Education's 2013 NPRM that more guidance for determining the dollar threshold for an "exceptionally costly" program is needed. Another state official from Colorado asked for additional guidance on the exception for "termination of costly expenditures for long-term purchases." Also, an official from Louisiana who responded to our state survey said examples of allowable exceptions would be helpful.
The lack of clarity about how exceptions should be applied was further evident in the states we visited, where we found states used different criteria for applying certain exceptions (see table 3). In its 2013 NPRM, Education acknowledged that some states were not applying the exceptions correctly or were not applying them at all.

Across the federal government, MOE requirements are important mechanisms for helping to ensure that federal spending achieves its purpose. The local MOE requirement under IDEA is intended to safeguard local financial support for educating the over 6 million children in the United States who require special education services. Meeting MOE generally is not a problem for districts when state and local economies and tax revenues are strong and when districts experience increases in the numbers of students with disabilities. But when state and local economies falter as they did during the 2008 recession, or when districts experience declines in numbers of students with disabilities, as has been the trend recently, meeting the local MOE requirement can become a challenge. Most states reported that at least some districts faced challenges in meeting the requirement, despite exceptions intended to help in such situations. Current exceptions do not address the key challenges that districts face, including factors that are outside of their control and that do not affect the level of services provided to students with disabilities. In these situations, it is unclear whether funds spent on special education to comply with MOE result in enhanced services for students with disabilities.

Further, the MOE requirement's lack of flexibility can lead to unintended consequences that affect services for students with disabilities. IDEA's 100 percent MOE requirement is stricter than the 90 percent MOE requirements mandated for other K-12 education programs. Our previous work has shown that such rigidity can discourage program expansion and innovation, and we found examples of this within the IDEA program. We also found that such rigidity resulted in reductions in general education services that benefit all students—including students with disabilities, a large and increasing number of whom are served for much of their day in general education classrooms. A less rigid MOE requirement would allow districts more latitude to adjust their spending at the margins—focused on providing the best services to address the most pressing needs of students with disabilities—while mitigating the effects of unintended consequences.

Education's lack of timely monitoring feedback has hampered some states' efforts to facilitate school district compliance with MOE—a key requirement of the law. Although Education completed its fiscal monitoring reviews in 2012, 3 years later nearly half the states are still waiting for feedback on their monitoring results. In addition, some districts may be failing to meet MOE because limitations in their states' financial systems do not allow them to use all four MOE calculations, as provided for in Education's regulations. Because these various ways of calculating MOE are intended to provide districts with the ability to calculate MOE based on local circumstances, the ability to create workarounds to use these calculations could be the difference between a district meeting MOE and not meeting it.
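To make the preceding point more concrete, the sketch below illustrates the four comparison methods named in our state survey (total and per capita, local-only and combined state and local). It is a simplified, hypothetical illustration rather than Education's compliance test: the figures are invented, the sketch assumes that per capita amounts are computed per child with a disability served and that satisfying any one of the four comparisons is sufficient, and it ignores the allowable exceptions, the funding adjustment, and the comparison-year rules in Education's regulations.

```python
# Simplified, hypothetical MOE comparison using the four methods.

def moe_comparisons(prior: dict, current: dict) -> dict:
    """Compare current-year special education spending with the prior year.

    Each dict needs 'local' and 'state_local' expenditures and
    'child_count' (children with disabilities served).
    """
    results = {
        "total local": current["local"] >= prior["local"],
        "total state and local": current["state_local"] >= prior["state_local"],
        "per capita local": (current["local"] / current["child_count"])
                            >= (prior["local"] / prior["child_count"]),
        "per capita state and local": (current["state_local"] / current["child_count"])
                                      >= (prior["state_local"] / prior["child_count"]),
    }
    results["meets MOE under at least one method"] = any(results.values())
    return results

# A district whose state contribution fell but whose local contribution rose,
# while enrollment of children with disabilities declined slightly.
prior = {"local": 400_000, "state_local": 900_000, "child_count": 100}
current = {"local": 410_000, "state_local": 880_000, "child_count": 95}

for method, passed in moe_comparisons(prior, current).items():
    print(f"{method}: {'pass' if passed else 'fail'}")
```

In this hypothetical case the combined state and local total falls short, but the local-only comparisons (and, because enrollment also declined, the per capita comparisons) pass, which is the kind of situation the quoted district and the state data-system limitations discussed above make relevant.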
Providing more technical assistance and facilitating more information-sharing among states and districts could help them navigate the complexities of the local MOE requirement and avoid the detrimental effects of noncompliance.

To help districts address key challenges in meeting MOE and mitigate unintended consequences that may affect services for students with disabilities, while preserving the safeguard for funding for students with disabilities, Congress should consider options for a more flexible MOE requirement. This could include adopting a less stringent MOE requirement to align with the MOE requirements in other education programs or adding to or modifying exceptions. For example, current exceptions could be changed to allow one-time increases in spending without changing a district's MOE baseline in order to encourage pilot innovations or to allow certain spending decreases (e.g., state caps on teacher benefits), as long as a district can demonstrate the decrease does not negatively affect services.

To strengthen states' monitoring and facilitate local MOE compliance, the Secretary of Education should (1) establish and document specific timeframes for providing prompt feedback to states on findings from its next cycle of IDEA fiscal monitoring, and (2) prioritize technical assistance and information sharing across states on ways to facilitate local MOE compliance with respect to the use of the four calculation methods and the exceptions.

We provided a draft copy of this report to the Department of Education for review and comment. Education's comments are reproduced in appendix VII. Education also provided technical comments, which we incorporated into our report where appropriate. (In addition, we provided officials from the state educational agencies we reviewed with portions of the draft report that included information specific to their states. We incorporated their technical comments where appropriate.) In its comments, Education agreed with both of our recommendations. Regarding our recommendation to establish timeframes for providing prompt feedback to states on findings from its next cycle of IDEA fiscal monitoring, Education stated that in its new system of monitoring it will include timelines for providing prompt feedback on monitoring results, including findings and corrective actions. Regarding our recommendation to prioritize technical assistance and information sharing across states on ways to facilitate local MOE compliance with respect to the use of the four calculation methods and the exceptions, Education stated it is currently working on a set of questions and answers that will place particular emphasis on the allowable exceptions, as well as calculation issues.

Education also commented on the importance of the MOE requirement as a safeguard designed to protect funding for students with disabilities. While acknowledging the challenges that meeting the local MOE requirement presents during difficult economic times, Education noted that the requirement also provides a crucial protection that helps ensure students with disabilities continue to receive a free appropriate public education. We agree; as stated in our report, we believe the requirement is an important mechanism intended to safeguard local financial support for educating the over 6 million children in the United States who require special education services.
Nevertheless, we believe there are opportunities to reduce the rigidity of the requirement while continuing to preserve MOE as a safeguard of funding for students with disabilities. We are sending copies of this report to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. To obtain information on efforts to meet the local maintenance of effort (MOE) requirement under the Individuals with Disabilities Education Act (IDEA), we designed and administered a web-based survey of state special education directors in all 50 states and the District of Columbia. The survey included questions about the extent to which school districts (referred to by law and in the survey as local educational agencies—LEA) in the state met the local MOE requirement in the 2012-13 school year, state perspectives on challenges their school districts face in meeting the MOE requirement, procedures used by state educational agencies (SEA) for monitoring compliance with the requirement, and the state and federal role in assisting school districts in complying with the requirement. The survey was in the field from January to March 2015. We received responses from the District of Columbia and all states, except Hawaii (which we determined was outside our scope), for a response rate of 100 percent. To obtain the maximum number of responses to our survey, we sent three reminder emails to non-respondents and contacted the remaining non-respondents over the telephone. We took steps to minimize non-sampling errors, including pretesting draft instruments and using a web-based administration system. During survey development, we pretested the draft instrument with five state special education directors from October through November 2014. In the pretests, we were interested in the clarity of the questions and the flow and layout of the surveys. Based on feedback from the pretests, we made minimal revisions to the survey instrument. As an additional step to minimize non-sampling errors, we used a web-based survey. By allowing respondents to enter their responses directly into an electronic instrument, this method automatically created a record for each respondent in a data file and eliminated the errors associated with a manual data entry process. We also checked the accuracy of our work by independently verifying programs used to analyze the survey data and make estimations. Lastly, after we closed the survey, we contacted respondents in some states to conduct follow-up interviews in order to clarify their responses and gather further information. The questions we asked in our survey of state special education directors are shown below. Our survey was comprised of closed- and open-ended questions. In this appendix, we include all the survey questions and aggregate results of responses to the closed-ended questions; we do not provide information on responses provided to the open-ended questions. 
Survey respondents did not always respond to each individual survey question; therefore, the total responses for each question do not always add up to the number of total survey respondents.

1. Has funding from state sources for K-12 education for each of the years listed below decreased, remained about the same, or increased compared with the previous school year?
2. Has funding from state sources for special education for each of the years listed below decreased, remained about the same, or increased compared with the previous school year?
3. What was the total number of local educational agencies (LEAs) in your state that were subject to the LEA maintenance of effort (MOE) requirement for the 2012-2013 school year?
10. In your state, does your state agency (or the LEAs) routinely calculate the LEAs' MOE using each of the following calculations? a. Total local expenditures b. Total local and state expenditures c. Per capita local expenditures d. Per capita local and state expenditures
11. What comments do you have about why LEAs in your state might not use certain allowable calculations?
12. Does your state routinely monitor MOE compliance in the following ways for all, a subgroup, or none of the LEAs in your state?
16. What other resources (not directly from the U.S. Department of Education), if any, has your state relied on for monitoring or assisting LEAs with MOE compliance?
17. What additional resources, if any, from the U.S. Department of Education would your state find useful for monitoring or assisting LEAs with MOE compliance?
18. Do you have any other suggestions for potential changes that could be made at the federal level related to the LEA MOE requirement under IDEA?

To obtain information on efforts to meet the local MOE requirement under IDEA, we conducted a web-based follow-up survey of school districts (referred to by law and in the survey as local educational agencies—LEA) that had indicated previously that they anticipated having trouble meeting the MOE requirement in future years. Specifically, we sent a follow-up survey to the superintendents of the 103 school districts that had indicated in a 2011 GAO survey on the use of Recovery Act funds that they anticipated they would have trouble meeting the MOE requirement in the 2011-12 school year. Our 2015 follow-up survey included questions about whether school districts met the MOE requirement in 2011-12, 2012-13, and 2013-14, as well as their perspectives on challenges, effects on services, and the roles of SEAs and Education in assisting school districts in complying with the requirement. The survey was conducted from January through March 2015. We received responses from 87 school districts, for a response rate of 84 percent. To obtain the maximum number of responses to our survey, we sent three reminder emails to non-respondents and contacted the remaining non-respondents by telephone. As with our state survey, we took steps to minimize non-sampling errors, including pretesting the draft instrument and using a web-based administration system. During survey development, we pretested the draft instrument with officials in four school districts in November 2014. In the pretests, we were interested in the clarity of the questions and the flow and layout of the survey. Based on feedback from the pretests, we made minimal revisions to the survey instrument.
As an additional step to minimize non-sampling errors, we used a web-based survey. By allowing respondents to enter their responses directly into an electronic instrument, this method automatically created a record for each respondent in a data file and eliminated the errors associated with a manual data entry process. We also checked the accuracy of our work by independently verifying programs used to analyze the survey data and make estimations.

The questions we asked in our survey of school districts are shown below. Our survey was comprised of closed- and open-ended questions. In this appendix, we include all the survey questions and aggregate results of responses to the closed-ended questions; we do not provide information on responses provided to the open-ended questions. Survey respondents did not always respond to each individual survey question and, in some cases, the survey asked respondents to skip a question based on their response to the prior question; therefore, the total responses for each question do not always add up to the number of total survey respondents.

1. In our 2011 survey, your LEA indicated it thought it may have trouble meeting the MOE requirement in the future. Was your LEA able to meet the MOE requirement in the following school years?
b. Local actions to increase efficiencies in administrative functions
c. Local actions to increase efficiencies in the provision of direct services to children with disabilities
d. Reductions in the state contribution to your district's local funding of special education
e. Reductions in state funding of general K-12 education
f. Decline in local revenue
g. LEA does not have authority to raise its own revenue
If you indicated that your LEA did not meet MOE in ALL of the years in question 1, please skip to question 6. Otherwise, continue to question 3.
3. For the school years when your LEA met MOE, was it a challenge or not a challenge to meet the MOE requirement?
b. Local actions to increase efficiencies in administrative functions
c. Local actions to increase efficiencies in the provision of direct services to children with disabilities
d. Reductions in the state contribution to your district's local funding of special education
e. Reductions in state funding of general K-12 education
f. Decline in local revenue
g. LEA does not have authority to raise its own revenue
5. Did the reasons that your LEA experienced challenges meeting the MOE requirement vary by school year?
6. Do you have any other comments on factors that may have affected your LEA's ability to meet the MOE requirement? (Open-ended)
7. Did your LEA use an exception or funding adjustment provided by law to help meet the MOE requirement in any of the following school years (2011-2012, 2012-2013, 2013-2014)?
7ABC. In the school year, did your LEA use any of the following exceptions or a funding adjustment?
a. Voluntary departure, by retirement or otherwise, or departure for just cause, of special education or related services personnel
b. Decrease in enrollment of children with disabilities
c. Termination of an obligation to provide an exceptionally costly program of special education to a particular child
d. Termination of costly expenditures for long-term purchases (e.g., acquisition of equipment or construction of school facilities)
e. Assumption of cost by the high cost fund operated by the SEA under 34 C.F.R. § 300.704(c)
f. Funding adjustment to reduce local MOE expenditures by up to 50 percent of the increase in the LEA's subgrant allocation over that of the previous year
8. For any of the following school years, did your LEA reduce general education spending?
11. In general, has the IDEA MOE requirement (i.e., prohibiting the reduction of local spending on special education) had a positive effect, no effect, or negative effect on services overall for students with and without disabilities?
12. Do you have any additional comments about why or how the IDEA MOE requirement affected services for students with and without disabilities?
18. What additional resources, if any, from the SEA would you find useful in assisting your LEA to comply with the MOE requirement?
19. Has your LEA accessed any resources directly from the U.S. Department of Education in its efforts to comply with the MOE requirement?
19A. What U.S. Department of Education resource(s) has your LEA used and how useful have these resources been in assisting your LEA in complying with the MOE requirement?
20. Do you have any suggestions for potential changes that could be made at the federal level related to the MOE requirement under IDEA?

To determine the characteristics of districts meeting, not meeting, and facing challenges meeting MOE, we analyzed data from the Department of Education's Common Core of Data (CCD). This data set is comprised of fiscal and non-fiscal data collected annually about all public schools, public school districts, and state educational agencies in the United States. The CCD data elements and years we used for each part of our analysis are summarized in table 5. To assess the reliability of the CCD data elements used for our analysis, we reviewed existing documentation about the data system from the National Center for Education Statistics and conducted electronic testing. In a few cases where a district's variable values were illogical, we changed the values or set them to missing for purposes of our analysis. We linked the CCD data to three different sources of information on district experiences with MOE. The methodology and results of each of these analyses are described below.

1. GAO's 2011 survey of school districts. To examine the characteristics of districts facing challenges and not facing challenges meeting MOE, we linked the CCD data to responses to the question on GAO's 2011 survey of school districts that asked, "Do you currently anticipate your LEA having trouble meeting the IDEA Maintenance of Effort (MOE) requirement for 2011-12?" This survey was sent to a nationally generalizable sample of school districts, which means that the results of our analysis are generalizable to the total population of school districts in 2011. For our characteristics analysis of 2011 survey respondents, we primarily used CCD data from the 2010-11 school year, which described district characteristics in the year of the survey. See table 6 below for the findings of this analysis.

2. GAO's 2015 follow-up survey of a subset of school districts.
To examine the characteristics of a subset of districts both meeting and not meeting, as well as facing challenges and not facing challenges meeting MOE, we linked the CCD data to responses to our 2015 follow-up survey of those school districts that anticipated having trouble meeting MOE for 2011-12. For our characteristics analysis of our follow-up survey respondents, we primarily used the most recent CCD data available at the time. In comparing the districts responding to our survey that they met MOE with those responding that they did not, we found some differences in the average characteristics between these two groups but did not report on these differences because the small size of the not-meeting group (7 districts) made it unlikely that these differences were meaningful. In comparing the districts responding to our survey that meeting MOE had been a challenge with those responding that it had not been a challenge, the largest differences were in total enrollment, change in total enrollment and number of students with IEPs from the prior year, and change in local revenue from the prior year. However, the results of this analysis reflect only the characteristics of those districts that responded to the follow-up survey and are not generalizable to the total population of school districts in either 2011 or 2015. 3. Five states’ MOE data for all their districts. To examine the characteristics of districts meeting and not meeting MOE in the five states that provided us with detailed MOE data on all their districts statewide (Alabama, Arizona, Michigan, Texas, and Virginia), we linked the CCD data to information provided to us by these five states on their school districts’ MOE status for school years 2011-12 and 2012-13. For our characteristics analysis of the districts in our five selected states, we primarily used CCD data from the years corresponding to their MOE data. To assess the reliability of state MOE data, we interviewed state officials and reviewed the data for logical inconsistencies. In two cases, states submitted revised MOE data based on our follow-up questions. Although we could not verify that all state MOE data were completely accurate, we determined that these data were sufficiently reliable for our purposes, which were to examine the general extent of districts meeting and not meeting MOE and their use of the exceptions and funding adjustment. The results of this analysis reflect only the characteristics of the districts in these five states and are not generalizable to the total population of school districts. In addition, because so few districts did not meet MOE in these states, we could not identify meaningful differences between districts that did and did not meet, and patterns in the characteristics of districts not meeting MOE were inconsistent. When a school district fails to meet MOE, the state is liable to return to the Department of Education—using non-federal funds—an amount equal to the district’s shortfall amount or its IDEA grant, whichever is lower. According to Education officials, states return funds to Education’s Office of the Chief Financial Officer. These funds are identified by the grant number, but there is no way to identify monies returned due to noncompliance with MOE, specifically. Officials in the Office of Special Education Programs said they were working with the Office of the Chief Financial Officer to better track funds returned for this reason. 
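As a simple illustration of the repayment rule just described, the sketch below computes the amount a state would return using hypothetical figures; it reflects only the "lower of the shortfall or the IDEA grant" logic stated above and none of the other details of Education's repayment process.

```python
# Hypothetical illustration of the repayment amount when a district fails MOE:
# the state returns, from non-federal funds, the lower of the district's
# shortfall or its IDEA grant.

def amount_to_return(shortfall: float, idea_grant: float) -> float:
    return min(shortfall, idea_grant)

# District A: $50,000 shortfall, $750,000 IDEA grant -> state returns $50,000.
print(amount_to_return(50_000, 750_000))

# District B: $900,000 shortfall but only a $400,000 IDEA grant ->
# the repayment is capped at the grant amount, $400,000.
print(amount_to_return(900_000, 400_000))
```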
We followed up with the 14 states reporting in our survey that some of their school districts failed to meet the local MOE requirement in the 2012-13 school year and requested information on the amount of the shortfall for those districts that year. Based on the states' responses, we estimate that, as of May 2015, the shortfall nationwide in the 2012-13 school year amounted to at least $877,000 total. However, this amount is understated because 2 of the 14 states were unable to provide the amounts of their districts' shortfalls at the time. Of the 14 states reporting district shortfalls, 11 confirmed they had returned at least a portion of the funds to Education, with 10 of the 11 confirming that they had recouped at least a portion of the shortfall amounts from the districts. At the time we did our work, repayment of funds was still pending in the remaining 3 states.

Among the five states for which we analyzed MOE data, three had at least one district that did not meet MOE in 2011-12 or 2012-13. According to the data provided by these states, the shortfalls for their districts were as follows:

In Virginia, while all districts met MOE in 2011-12, one district did not meet MOE in 2012-13, and its shortfall, after accounting for allowable exceptions, was about $320,000, which represented less than 1 percent of the district's expenditures of state and local funds for special education in that year. Virginia officials reported that the state had returned this amount to Education.

In Arizona, six districts did not meet MOE in 2011-12, and five did not meet in 2012-13. The shortfall amounts, after accounting for allowable exceptions, ranged from less than $100 to about $220,000. The average shortfall in these districts was about 6 percent of the districts' state and local expenditures in 2011-12, and about 4 percent in 2012-13. At the time Arizona reported these data, the state noted it had returned some but not all of these shortfall amounts to Education.

In Texas, officials reported that one district did not meet MOE in 2011-12, and four did not meet MOE in 2012-13. In three of these five districts, the state reported that auditors had not found an actual shortfall. State officials later clarified that in the district originally found out of compliance with MOE for 2011-12, they had not sustained this finding. For the two districts for which the state reported shortfalls, the amounts were about $25,000 and about $40,000.

Jacqueline M. Nowicki, (617) 788-0580 or nowickij@gao.gov. In addition to the contact named above, Margie K. Shields (Assistant Director), Cady S. Panetta (Analyst-In-Charge), Sandra Baxter, Justin Dunleavy, Lauren Gilbertson, and Nina Thomas-Diggs made key contributions to this report. Also contributing to this report were James Bennett, Deborah Bland, Caitlin Croake, Holly Dye, Ying Long, Jean McSween, Chris Morehouse, Karen O'Conor, Jonathon Oldmixon, James Rebbe, and Jessica Tollman.

IDEA provides federal support to school districts through grants to states for the excess cost of educating students with disabilities. Education is responsible for monitoring states' oversight of district compliance with IDEA, including an MOE requirement to ensure special education spending generally is at least equal to the level spent the preceding year. A 2011 GAO report found an estimated 24 percent of districts anticipated trouble meeting MOE. GAO was asked to examine districts' recent experiences with MOE.
This report examines: (1) the extent to which districts face challenges meeting MOE and why, (2) how MOE affects services for students with and without disabilities, and (3) how well Education and states facilitate school districts' compliance with MOE. GAO surveyed the states, as well as districts that in 2011 anticipated trouble meeting MOE; analyzed MOE data; and interviewed Education officials, disability advocates, and state and district officials in three states selected to illustrate a range of experiences with MOE. States reported that nearly all school districts generally met the local maintenance of effort (MOE) spending requirement for special education, but some districts faced challenges for various reasons. Under the Individuals with Disabilities Education Act (IDEA), MOE requires districts to spend at least the same amount on special education services for students with disabilities that they spent in the preceding year, with some exceptions. In response to GAO's 50-state survey, states reported that nearly all districts met MOE based on the most recent data available in all states (school year 2012-13). However, most states reported that at least some of their districts faced challenges in doing so. In a separate GAO survey of districts, many cited budget and cost reductions—such as state or local revenue declines and new state caps on benefits, which lowered the cost of a special education teacher—as key challenges in meeting MOE. State and district officials had mixed views on MOE's effects on services for students with and without disabilities. MOE is one of several safeguards meant to protect special education funding, and while some officials reported positive effects, others said the requirement can sometimes create unintended consequences for the services provided to special education students. They said that because the MOE requirement lacks flexibility, it can discourage districts from altering their baseline of special education spending, even when doing so would benefit students with disabilities or result in more efficient delivery of the same services. For example, despite other grant provisions in IDEA that promote innovation, some district officials commented that the MOE requirement can serve as a disincentive to districts' efforts to pilot innovative or expanded services requiring a temporary increase in funds because it would commit them to higher spending going forward. In addition, some district officials noted that prioritizing special education spending to meet MOE resulted in cuts to general education spending that affected services for all students, including the many students with disabilities who spend much of their days in general education classrooms. The Department of Education's (Education) delayed monitoring feedback has hampered states' efforts to facilitate district compliance with MOE. In 2010, Education initiated its latest round of reviews of states' processes for overseeing their districts' compliance with IDEA, including MOE. However, Education currently has no standards for providing timely feedback on this process and—as of August 2015—had not provided feedback from these reviews to about half the states, due in part to competing priorities. Such delays are contrary to federal standards that call for prompt resolution of findings. 
Officials in one state said Education's untimely feedback had delayed the state's ability to provide guidance to districts regarding MOE, and in another state, monitoring was on hold until Education approved the state's process for determining MOE compliance. To promote innovation and efficiency while safeguarding special education funding, GAO suggests that Congress consider options for a more flexible local MOE, such as adopting a less stringent requirement. GAO also recommends, among other things, that Education take steps to establish specific time frames for providing prompt feedback to states about their fiscal monitoring of districts. Education agreed with GAO's recommendations. |
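To make the district shortfall figures cited above concrete, the following is a minimal sketch of how a local MOE test might be computed. The dollar amounts are hypothetical, and treating allowable exceptions as a single deduction from the required level is a simplification of the specific exception categories defined in the IDEA regulations.

```python
def moe_shortfall(prior_year_spending, current_year_spending, allowable_exceptions=0.0):
    """Return a district's local MOE shortfall, if any, in dollars.

    prior_year_spending   : state and local special education spending in the preceding year
    current_year_spending : state and local special education spending in the year being tested
    allowable_exceptions  : hypothetical total of allowable reductions that lower the required level
    """
    required_level = max(prior_year_spending - allowable_exceptions, 0.0)
    return max(required_level - current_year_spending, 0.0)

# Hypothetical district: spending fell by $100,000, but $40,000 of that is covered by exceptions.
print(moe_shortfall(prior_year_spending=2_000_000,
                    current_year_spending=1_900_000,
                    allowable_exceptions=40_000))  # -> 60000.0
```

In this simplified view, the amount returned is the shortfall after exceptions, which a state would recoup from the district and return to Education, as the states described above reported doing.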
CAT/BPSS, which is part of TSA’s Passenger Screening Program, has undergone initial testing and is in the operational testing and evaluation phase of acquisition, according to TSA. The goal of CAT/BPSS is to deploy a computerized system that will read and analyze data and embedded security features on every passenger’s identification and some boarding passes, and to identify fraudulent credentials and boarding passes. In 2011, TSA conducted qualification testing of this system at its System Integration Facility at Washington Reagan National Airport, including testing the systems against more than 530 genuine and fraudulent documents, such as state-issued driver’s licenses, passports, and military identification cards, according to TSA. The technology is designed to automatically compare a passenger’s identification with a set of embedded security features to seek to identify indicators of fraud and concurrently ensure that the information on the identification and boarding pass matches. This system is intended to help ensure that identity credentials and boarding passes presented at the checkpoint have not been tampered with or fraudulently produced, and that the information on the boarding pass matches that of the identity credential. According to TSA, CAT/BPSS is to compare identity credentials with an internal database of more than 2,400 templates for various types of credentials and to check for certain embedded security features, then alert the operator of any discrepancies. In September 2011, TSA awarded contracts for approximately $3.2 million, which included the purchase of 30 units from three different vendors. In April 2012, TSA began deploying units to three airports— George Bush Intercontinental in Houston, Luis Muñoz Marín International in San Juan, and Washington Dulles International—in preparation for initial operational testing. TSA officials said that those airports were selected, in part, because of their high passenger volume and experience with detecting fraudulent documents. In preparation for initial testing, TSA tested the performance of its current process for comparison purposes. TSA is also training personnel on the CAT/BPSS systems, collecting preliminary data on system performance and availability, and assessing the adequacy of the concept of operations and standard operating procedures. According to TSA officials, these efforts will allow travel document checkers at the three airports to test the three systems in an operational environment and provide feedback on the systems’ performance. During operational testing, TSA plans to assess the systems’ performance against key performance parameters for detection, passenger throughput, and availability. Once operational testing is complete, TSA plans to produce a system evaluation report and recommend whether to move forward with the acquisition or make modifications. Vendors that successfully exit the operational testing phase will be eligible to compete for a contract to produce 1,400 units, according to TSA. According to the life cycle cost estimate for the Passenger Screening Program, of which CAT/BPSS is a part, the estimated 20-year life cycle cost of CAT/BPSS is approximately $130 million based on a procurement of 4,000 units. As highlighted in our Cost Estimating and Assessment Guide, a reliable cost estimate has four characteristics—it is comprehensive, well documented, accurate, and credible. 
We reviewed TSA’s November 2011 life cycle cost estimate for the Passenger Screening Program and compared it with the four characteristics. Based on our assessment, the life cycle cost estimate is reasonably comprehensive and well documented. Regarding accuracy, the cost estimate assumes a 1 percent inflation rate from fiscal years 2015 through 2029, as compared with the historic inflation rates calculated for fiscal years 2009 through 2014, which ranged from 3.3 to 4.5 percent. If a larger inflation rate were used, costs would be much higher than what are currently estimated. In addition, we cannot make a determination as to the credibility of the life cycle cost estimate as it does not include a risk and uncertainty analysis or an independent cost estimate. The risk assessment would quantify risks and identify effects of changing key cost driver assumptions and factors. In the cost estimate, TSA indicates that it is pursuing the acquisition of risk analysis capability and plans on having such capabilities in time for the next life cycle cost estimate. Likewise, there is no evidence that an independent cost estimate was conducted by a group outside the acquiring organization to determine whether other estimating methods would produce similar results. TSA officials indicated that the agency is updating its life cycle cost estimate to include a risk and uncertainty analysis and independent cost estimate, but the document has not yet been approved. The agency plans to expand the CAT/BPSS deployment schedule following successful implementation and testing in the selected airport environments. As of June 2012, TSA officials estimated that this could occur as soon as the end of this calendar year, depending on the results of the operational testing and evaluation phase. Our past work has identified three key challenges related to TSA’s efforts to acquire and deploy technologies to address homeland security needs: (1) developing and meeting technology program requirements, (2) overseeing and conducting testing of new screening technologies, and (3) developing acquisition program baselines to establish initial cost, schedule, and performance parameters. We have previously reported that DHS and TSA have faced challenges in developing and meeting program requirements when acquiring screening technologies, and that program performance cannot be accurately assessed without valid baseline requirements established at the program start. In June 2010, for example, we reported that more than half of the 15 DHS programs we reviewed awarded contracts to initiate acquisition activities without component or department approval of documents essential to planning acquisitions, setting operational requirements, or establishing acquisition program baselines. We made a number of recommendations to help address issues related to these procurements. DHS generally agreed with these recommendations and, to varying degrees, has begun taking actions to address them. We currently have ongoing work related to this area and we plan to report the results later this fall. At the program level, in May 2012, we reported that TSA did not fully follow DHS acquisition policies when acquiring advanced imaging technology (AIT), or body scanners, which resulted in DHS approving full AIT deployment without full knowledge of TSA’s revised specifications.
As a result, we found that TSA procured and deployed a technology that met evolving requirements, but not the initial requirements included in its key acquisition requirements document that the agency initially determined were necessary to enhance the aviation system. We recommended that TSA develop a road map that outlines vendors’ progress in meeting all key performance parameters. DHS agreed with our recommendation and has begun taking action to address it. We have also reported on DHS and TSA challenges in overseeing and testing new screening technologies, which can lead to costly redesign and rework at a later date. Addressing such problems before moving to the acquisition phase can help agencies better manage costs. For example, in October 2009, we reported that TSA had deployed explosives trace portals, a technology for detecting traces of explosives on passengers at airport checkpoints, in January 2006 even though TSA officials were aware that tests conducted during 2004 and 2005 on earlier models of the portals suggested the portals did not demonstrate reliable performance in an airport environment. In June 2006, TSA halted deployment of the explosives trace portals because of performance problems and high installation costs. In our 2009 report, we recommended that, to the extent feasible, TSA ensure that tests are completed before deploying new checkpoint screening technologies to airports. DHS concurred with the recommendation and has taken action to address it, such as requiring more-recent technologies to complete both laboratory and operational tests prior to deployment (GAO, Aviation Security: DHS and TSA Have Researched, Developed, and Begun Deploying Passenger Checkpoint Screening Technologies, but Continue to Face Challenges, GAO-10-128, Washington, D.C.: Oct. 7, 2009). Acquisition program baselines, which establish initial cost and schedule parameters, can provide useful indicators of the health of acquisition programs. For example, we reported in April 2012 that TSA has not had a DHS-approved acquisition program baseline since the inception of the Electronic Baggage Screening Program (EBSP) more than 8 years ago. Further, DHS did not require TSA to complete an acquisition program baseline until November 2008. According to TSA officials, they have twice submitted an acquisition program baseline to DHS for approval—first in November 2009 and again in February 2011. An approved baseline would provide DHS with additional assurances that TSA’s approach is appropriate and that the capabilities being pursued are worth the expected costs. In November 2011, because TSA did not have a fully developed life cycle cost estimate as part of its acquisition program baseline for the EBSP, DHS instructed TSA to revise the life cycle cost estimates as well as its procurement and deployment schedules to reflect budget constraints. DHS officials told us that they could not approve the acquisition program baseline as written because TSA’s estimates were significantly over budget. TSA officials stated that TSA is currently working with DHS to amend the draft program baseline and plans to resubmit the revised acquisition program baseline before the next Acquisition Review Board meeting, which is planned for July or August 2012. Establishing and approving a program baseline, as DHS and TSA plan to do for the EBSP, could help DHS assess the program’s progress in meeting its goals and achieve better program outcomes.
Our prior work on TSA acquisition management identified oversight problems that have led to cost increases, delivery delays, and other operational challenges for certain assets, such as EBSP, but TSA has also taken several steps to improve its acquisition management. For example, while we continue to find that some TSA acquisition programs do not have key documents needed for properly managing acquisitions, CAT/BPSS has a DHS-approved mission needs statement, operational requirements document, and acquisition program baseline. This hearing provides an opportunity for congressional stakeholders to focus a dialogue on how to continue a sufficient level of oversight of the CAT/BPSS acquisition and implementation and other key components of the Passenger Screening Program. For example, relevant questions that could be raised include the following: To what extent, if any, have key performance parameters changed during the course of the acquisition, and how will these changes affect security and efficiency at the checkpoint? What would be TSA’s strategy if vendors have difficulty meeting the key performance parameters? How will TSA ensure that implementation of the system addresses the security vulnerabilities previously identified? What confidence does TSA have in its cost estimates and how is the agency mitigating the risk of cost escalation or schedule delays? In managing limited resources to mitigate a potentially unlimited range of security threats, how does CAT/BPSS fit into TSA’s broader aviation security strategy? What cost-benefit and related analyses, if any, are being used to guide TSA decision makers? These types of questions and related issues warrant ongoing consideration by TSA management and continued oversight by congressional stakeholders. Chairman Rogers, Ranking Member Jackson Lee, and Members of the Committee, this concludes my prepared statement. I look forward to responding to any questions that you may have. For questions about this statement, please contact Steve Lord at (202) 512-4379 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Jessica Lucas-Judy, Assistant Director; Carissa Bryant; Jennifer Echard; Laurier Fish; Tom Lombardi; and Katherine Trimble. Key contributors for the previous work that this testimony is based on are listed within each individual product. Homeland Security: DHS and TSA Face Challenges Overseeing Acquisition of Screening Technologies. GAO-12-644T. Washington, D.C.: May 9, 2012. Checked Baggage Screening: TSA Has Deployed Optimal Systems at the Majority of TSA-Regulated Airports, but Could Strengthen Cost Estimates. GAO-12-266. Washington, D.C.: April 27, 2012. Transportation Security Administration: Progress and Challenges Faced in Strengthening Three Key Security Programs. GAO-12-541T. Washington D.C.: March 26, 2012. Homeland Security: DHS and TSA Acquisition and Development of New Technologies. GAO-11-957T. Washington, D.C.: September 22, 2011. Aviation Security: TSA Has Made Progress, but Additional Efforts Are Needed to Improve Security. GAO-11-938T. Washington, D.C.: September 16, 2011. Department of Homeland Security: Progress Made and Work Remaining in Implementing Homeland Security Missions 10 Years after 9/11. GAO-11-881. Washington, D.C.: September 7, 2011. Homeland Security: DHS Could Strengthen Acquisitions and Development of New Technologies. GAO-11-829T. 
Washington, D.C.: July 15, 2011. Aviation Security: TSA Has Taken Actions to Improve Security, but Additional Efforts Remain. GAO-11-807T. Washington, D.C.: July 13, 2011. Aviation Security: TSA Has Enhanced Its Explosives Detection Requirements for Checked Baggage, but Additional Screening Actions Are Needed. GAO-11-740. Washington, D.C.: July 11, 2011. Homeland Security: Improvements in Managing Research and Development Could Help Reduce Inefficiencies and Costs. GAO-11-464T. Washington, D.C.: March 15, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011. Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010. Aviation Security: Progress Made but Actions Needed to Address Challenges in Meeting the Air Cargo Screening Mandate. GAO-10-880T. Washington, D.C.: June 30, 2010. Aviation Security: TSA Is Increasing Procurement and Deployment of Advanced Imaging Technology, but Challenges to This Effort and Other Areas of Aviation Security Remain. GAO-10-484T. Washington, D.C.: March 17, 2010. Aviation Security: DHS and TSA Have Researched, Developed, and Begun Deploying Passenger Checkpoint Screening Technologies, but Continue to Face Challenges. GAO-10-128. Washington, D.C.: October 7, 2009. GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: March 2009. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses our past work examining the Transportation Security Administration's (TSA) progress and challenges in developing and acquiring technologies to address aviation security needs. TSA's acquisition programs represent billions of dollars in life cycle costs and support a wide range of aviation security missions and investments. Within the Department of Homeland Security (DHS), the Science and Technology Directorate (S&T) and TSA have responsibilities for researching, developing, and testing and evaluating new technologies, including airport checkpoint screening technologies. Specifically, S&T is responsible for the basic and applied research and advanced development of new technologies, while TSA, through its Passenger Screening Program, identifies the need for new checkpoint screening technologies and provides input to S&T during the research and development of new technologies, which TSA then procures and deploys. TSA screens more than 600 million air passengers per year through approximately 2,300 security checkpoint lanes at about 450 airports nationwide, and must attempt to balance its aviation security mission with concerns about efficiency and the privacy of the traveling public. The agency relies upon multiple layers of security to deter, detect, and disrupt persons posing a potential risk to aviation security. Part of its checkpoint security controls is a manual review and comparison by a travel document checker of each person's boarding pass and identification, such as passports or state-issued driver's licenses. However, concerns have been raised about security vulnerabilities in this process.
For example, in 2006, a university student created a website that enabled individuals to create fake boarding passes. In addition, in 2011, a man was convicted of stowing away aboard an aircraft after using an expired boarding pass with someone else's name on it to fly from New York to Los Angeles. Recent news reports have also highlighted the apparent ease of ordering high-quality counterfeit driver's licenses from China. We have previously reported on significant fraud vulnerabilities in the passport issuance process and on difficulties in detecting fraudulent identity documentation, such as driver's licenses. In response to these vulnerabilities, and as part of its broader effort to improve security and increase efficiency, TSA began developing technology designed to automatically verify boarding passes and to better identify altered or fraudulent passenger identification documents. TSA plans for this technology, known as Credential Authentication Technology/Boarding Pass Scanning Systems (CAT/BPSS), to eventually replace the current procedure used by travel document checkers to detect fraudulent or altered documents. However, we have previously reported that DHS and TSA have experienced challenges in managing their acquisition efforts, including implementing technologies that did not meet intended requirements and were not appropriately tested and evaluated, and have not consistently included completed analyses of costs and benefits before technologies were implemented. This testimony focuses on (1) the status of TSA's CAT/BPSS acquisition and the extent to which the related life cycle cost estimate is consistent with best practices and (2) challenges we have previously identified in TSA's acquisition process to manage, test, acquire, and deploy screening technologies. This statement also provides information on issues for possible congressional oversight related to CAT/BPSS. In summary, TSA has completed its initial testing of the CAT/BPSS technology and has begun operational testing at three airports. We found the project's associated life cycle cost estimate to be reasonably comprehensive and well documented, although we are less confident in its accuracy due to questions about the assumed inflation rate. In addition, we could not evaluate its credibility because the current version does not include an independent cost estimate or an assessment of how changing key assumptions and other factors would affect the estimate. Our past work has identified three key challenges related to TSA's efforts to acquire and deploy technologies to address homeland security needs: (1) developing and meeting technology program requirements, (2) overseeing and conducting testing of new screening technologies, and (3) developing acquisition program baselines to establish initial cost, schedule, and performance parameters. |
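As a rough illustration of why the inflation assumption flagged in the cost estimate discussion above matters, the sketch below escalates a constant annual cost at 1 percent and at 4 percent (within the 3.3 to 4.5 percent historic range) over a 15-year horizon. The base annual cost and the horizon are hypothetical placeholders, not figures drawn from TSA's estimate.

```python
def escalated_total(base_annual_cost, inflation_rate, years):
    """Total nominal cost of a constant annual cost escalated at a fixed inflation rate."""
    return sum(base_annual_cost * (1 + inflation_rate) ** year for year in range(years))

base = 5_000_000        # hypothetical annual cost in dollars
years = 15              # roughly fiscal years 2015 through 2029
low = escalated_total(base, 0.01, years)
high = escalated_total(base, 0.04, years)
print(f"At 1% inflation: ${low / 1e6:.1f} million")
print(f"At 4% inflation: ${high / 1e6:.1f} million ({high / low - 1:.0%} higher)")
```

The roughly one-quarter gap in this toy example is the kind of difference behind the observation that costs would be much higher than currently estimated if a larger inflation rate were used.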
Smartphones combine the telecommunications functions of a mobile phone with the processing power of a computer, creating an Internet-connected mobile device capable of running a variety of software applications for productivity or leisure. The functioning of a mobile phone involves locating the user, and FCC’s rules enabling enhanced 911 (E911) services require phones to provide GPS-quality location precision for emergency responders. This capability to determine a user’s location has led to smartphones that can provide applications and services that take advantage of location data generated by GPS and other location technologies. Advances in the technology for pinpointing a mobile phone’s location have led to applications that identify a user’s location quickly and with a high level of precision. Four types of companies are primarily responsible for smartphone products and services in the United States. Carriers provide smartphone users with access to wireless networks for voice and data uses, generally with a subscription plan. In the United States, four carriers primarily serve customers nationwide: AT&T, Sprint-Nextel, T-Mobile, and Verizon. Underlying the various functions of a smartphone is an operating system that acts as a mobile computing platform to run the phone’s hardware and software. Three operating systems are most prevalent in the United States: Apple’s iPhone iOS, Google’s Android, and Research in Motion’s BlackBerry. Smartphones are made by a variety of electronics companies. Apple and Research in Motion manufacture phones based on their own proprietary operating systems. In contrast, a number of other companies, such as HTC, Motorola, and Samsung, make phones based on the Android operating system. As the popularity of smartphones has grown, so too has the number of developers offering applications for them. New mobile applications are developed every day, with some estimates indicating there are more than a million available as of mid-2012. These developers range from start-up ventures to large, established Internet companies like Yahoo!, offering products like the Angry Birds game by Rovio Entertainment Ltd., social networking applications like Facebook, navigation tools like Google Maps, and music players such as Pandora Radio. Together, the products and services developed by these various companies allow users to take advantage of the various functions smartphones provide (see figure 1). Smartphones connect with mobile carrier networks for making calls and providing data services. Some smartphones also have the capability to connect with wireless fidelity (Wi-Fi) networks to provide data services. Fair Information Practices (FIPs) are widely accepted principles for protecting the privacy and security of personal information. They were first proposed in 1973 by a U.S. government advisory committee. In response to concerns about the potential consequences that computerized data systems could have on the privacy of personal information, the committee was tasked to examine the extent to which limitations should be placed on using computer technology for record keeping about people. These principles, with some variation, have been used by organizations to address privacy considerations in their business practices and are also the basis of privacy laws and related policies in many countries, including the United States. FIPs are not precise legal requirements. Rather, they provide a framework of principles for balancing the need for privacy with other interests.
Striking that balance varies among countries and among types of information (e.g., medical and employment information). The Organisation for Economic Co-operation and Development (OECD), an international organization, developed a revised version of the FIPs in 1980 that has been widely adopted (see table 1). The Federal Trade Commission Act prohibits unfair or deceptive acts or practices affecting commerce and authorizes FTC enforcement action. This authority allows FTC to take remedial action against a company that engages in a practice that FTC has found is unfair or deceives customers. For example, FTC could take action against a company if it found the company was not adhering to the practices to protect a consumer’s personal information that the company claimed to abide by in its privacy policy. FTC also enforces the Children’s Online Privacy Protection Act of 1998, which required FTC to promulgate rules governing the online collection of information from children under age 13. The Communications Act of 1934 (Communications Act), as amended, imposes a duty on mobile carriers to secure information and imposes particular requirements for protecting information identified as customer proprietary network information (CPNI), including the location of customers when they make calls. The act also generally requires a customer’s express authorization for access to or disclosure of call location information concerning the user of commercial mobile services, subject to certain exceptions. Carriers must also comply with FCC rules implementing the E911 requirements of the Wireless Communications and Public Safety Act of 1999, including providing location information to emergency responders when mobile phone users dial 911. CPNI includes information that relates to the quantity, technical configuration, type, destination, location, and amount of use of a telecommunications service as well as information contained in the bills pertaining to telephone service. As the Communications Act requirements for CPNI apply only to carriers, they would not apply to other types of companies that collect and use mobile phone location data, such as application developers. See 47 U.S.C. § 222(f), (h). The Electronic Communications Privacy Act of 1986 (ECPA) sets out requirements under which the government can access information about a user’s mobile phone and Internet communications. This includes legal procedures for obtaining court orders to acquire information relevant to a law enforcement inquiry. Collecting, using, and sharing location data provides benefits for both mobile industry companies and for consumers. For the companies, the main purposes for using and sharing location data are to provide and improve services, to increase advertising revenue, and to comply with legal requirements. Consumers, in turn, can benefit from these new and improved services and from targeted location-based advertising. Nonetheless, allowing companies to access location data exposes consumers to privacy risks, including disclosing data to unknown third parties for unspecified uses, consumer tracking, identity theft, threats to personal safety, and surveillance. Mobile industry companies determine location information through various methods, such as cell tower signal-based technologies, Wi-Fi Internet access point technology, crowd-sourced positioning, and GPS technology. Assisted-GPS (A-GPS), a hybrid technology that uses more than one data collection methodology, is also widely used. Figure 2 below illustrates these technologies.
Since the advent of consumer cellular technology, making and receiving mobile telephone calls has depended on the ability to determine a device’s location from the constant radio communication between the device and the mobile carrier’s cell towers that are spread throughout the carrier’s service area. The ranges of the individual cell towers divide the service area into separate sectors. As the towers are in fixed positions, determining a device’s current cell tower sector tells the carrier the device’s approximate location. The precision of this method depends on how much space a particular tower covers. In general, urban areas have smaller sectors than rural areas because each sector can only manage a certain amount of cell traffic at any one time. Because of increasing cell traffic, the number of cell towers has proliferated to the point that there are now over three times more than there were 10 years ago. As a result, cell sector-based location data are increasingly accurate. Companies can further improve accuracy by using triangulation methods, which determine location through the mathematical comparison of a device’s signals that reach more than one cell tower. Cell tower triangulation can now yield results within 50 meters of accuracy. Mobile carriers that provide Wi-Fi access points to their customers can use these access points to determine location. Like cell towers, Wi-Fi access points are fixed locations and send out signals over a limited range. Specifically, Wi-Fi signals are radio waves that provide Internet access to devices equipped with compatible wireless hardware. Each Wi- Fi access point is identified by a unique hardware address. Nearby compatible devices are able to receive this information and use it to request Internet access. Since a Wi-Fi access point’s range is limited to a few hundred meters, accurate location data can be determined if a device communicates with the access point. Companies such as Google, Apple, and Skyhook use information gathered from users’ mobile devices about cell tower and Wi-Fi access point signals, as well as the Wi-Fi signals of other companies and households, to determine location. These companies compile the precise locations of these signals into large databases, which the companies may then license to other entities such as application developers. An application installed on a mobile device can obtain location information by querying one of these databases, which will use its knowledge about those signals’ locations to return the device’s location. The database can also use location information sent by the device to update its records. If there are any new signals in the device’s vicinity or any old signals that are no longer broadcasting, the database can incorporate those changes in its records. While the exact degree of accuracy ultimately depends on how many signal points are near the device when it queries a database, companies use crowd-sourced positioning because it provides accurate location data quickly, and because it does not rely on GPS technology, which is not available in all mobile devices. GPS is used by both carriers and non-carriers to determine a device’s location. GPS technology is based upon satellite signals, which are picked up and interpreted by devices equipped with GPS receiver chips. The device then measures the time it takes for it to receive various satellite signals and triangulates its location. Triangulating GPS satellite signals can yield data accurate to within 10 meters. 
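Cell tower triangulation and GPS positioning rest on the same geometric idea: combining distance estimates from several reference points whose positions are known. The sketch below is a minimal, idealized illustration in a flat two-dimensional coordinate system with noise-free range measurements; real systems must contend with measurement error, three-dimensional geometry, and (for GPS) receiver clock bias.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2-D position from distances to known fixed points (e.g., towers).

    anchors   : (n, 2) array of known (x, y) positions, n >= 3
    distances : n measured ranges to those points
    Returns the least-squares (x, y) estimate.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the first range equation from the others linearizes the system:
    #   2(xi - x0)x + 2(yi - y0)y = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + anchors[1:, 0] ** 2 - anchors[0, 0] ** 2
         + anchors[1:, 1] ** 2 - anchors[0, 1] ** 2)
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    return estimate

# Hypothetical towers 1 km apart and exact ranges to a device at (400, 250) meters.
towers = [(0, 0), (1000, 0), (0, 1000)]
true_position = np.array([400.0, 250.0])
ranges = [np.linalg.norm(true_position - np.array(t)) for t in towers]
print(trilaterate(towers, ranges))  # approximately [400. 250.]
```

With noisy ranges, adding more reference points and solving in the least-squares sense improves the estimate, which is one reason denser cell tower coverage yields more accurate results.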
A-GPS is a hybrid approach used to overcome certain limitations in GPS technology: namely, that GPS usually only works outside buildings, may take several minutes to determine location, and uses more battery power than other location determination methods. By using GPS in conjunction with any of the previously described methods of collecting location data, the assisting technology can report an approximate location to the application or service while GPS works to obtain a more precise location. For instance, operating system and application developers may use crowd-sourced positioning databases to provide approximate locations to their users until GPS signals are successfully triangulated. The precision of A-GPS in these circumstances depends on the accuracy of the assisting method. There are three main reasons that mobile industry companies collect and share location data: 1) to provide and improve services, 2) to increase advertising revenue, and 3) to comply with court orders. Mobile industry companies use location data to provide and improve services. As stated above, a carrier needs to know a device’s location to provide basic mobile telephone services. In addition, carriers and application developers offer a diverse array of services that make use of location information, such as services providing navigation, the ability to keep track of family members, local weather forecasts, the ability to identify and locate nearby businesses, and social networking services that are linked to users’ locations. To provide these services, carriers and developers need the ability to quickly and accurately determine location. Location data can also be used to enhance the functionality of other services that do not need to know the user’s location to operate. Search engines, for example, can use location data as a frame of reference to return results that might be more relevant. For instance, if a user were to search for a pizza restaurant using a location-aware search engine, the top result may be a map of nearby pizza restaurants instead of the homepage of a national chain. Companies also collect and examine location information in conjunction with other diagnostic usage data to analyze and improve their interactions with customers. By examining the location patterns of dropped calls, for example, carriers can identify network problems and address cell connectivity issues without having to rely on customer complaints. Furthermore, companies may use location data to provide public services. For example, carriers are responsible for providing law enforcement and other first responders with the location data of people who dial 911 from their mobile devices. This service is referred to as E911 and it is mandated by law. In addition, companies may provide location information to municipalities to improve city traffic management or facilitate city planning. Location data can also be used to help find missing children through mobile America’s Missing: Broadcast Emergency Response (AMBER) alerts, which can be sent to devices that have requested AMBER alerts, when the devices are located within a specified radius of a reported incident. Companies can use location data to target the advertising that users receive through mobile devices. Doing so may make an advertisement more relevant to a user than a non-targeted advertisement, boosting advertising revenue. Advertising is particularly important to application developers, as many developers give their products away free and rely on advertising for revenue. 
Advertisements for a certain business may be triggered if a user’s device is located within a predetermined distance from that business. Any application, regardless of its function, may collect and use location data for advertising purposes. Furthermore, application developers, operating system developers, and mobile carriers may aggregate and store individual user data to create user profiles. Profiles can be used to tailor marketing or service performance to an individual’s preferences. In addition to capturing and using the location data of individual users, companies such as application developers and mobile carriers sell large amounts of de-identified location data to third parties. When data are de-identified, they are stripped of personally identifiable information. In addition to de-identification, user data are often aggregated, which means that the data of many users are combined. Aggregation also makes it more difficult to distinguish the data of individuals. De-identified and aggregated data can be used for a variety of purposes, including marketing and research. Mobile industry companies are legally required to share user location data in response to a court order if a court finds that the information is warranted for law enforcement purposes. Because users generally carry their mobile devices with them, law enforcement can use device location data to determine the user’s location. Because of this correlation, location data are valuable to law enforcement for tracking the movements of criminal suspects. Of particular use are the location data either housed in mobile carrier databases or obtained through GPS technology. Mobile carriers are required to comply with court orders directing the disclosure of historical location data (i.e., where the device was in the past) and in certain circumstances, real-time location data (i.e., where the device is now). Many services that use location data were designed to make tasks easier or quicker for the customer, and the sharing of location data can improve customer experiences, reduce consumer costs, and help provide improved public services. Nonetheless, location data use and sharing may pose privacy risks, which include unknown third-party use, consumer tracking, identity theft, threats to personal safety, and surveillance. Consumers can benefit from mobile industry use of their location data because many location-based services are designed to make their lives easier and safer. For instance, navigation services enable users to easily find directions and take the guesswork out of finding the best or quickest routes, while applications designed to track family members enable parents to be aware of their children’s whereabouts. An application may also use location data to personalize its usual services; for example, by using a location-aware business directory, a user may be able to rank search results by distance to save time and quickly reach the nearest location. Furthermore, as stated previously, the sharing of location data facilitates a faster response from emergency services through E911 and allows companies to identify network service problems. Additionally, consumers may derive economic benefits from the sharing of their location data. For example, because many application developers depend on location-based advertising for revenue, users may be able to download applications for free or at a low cost.
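The distance-based advertising trigger described earlier—an advertisement fires when a device comes within a predetermined distance of a business—reduces to a simple distance test against the reported location. A minimal sketch, using a haversine great-circle distance and hypothetical coordinates and radius:

```python
import math

def haversine_meters(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    earth_radius_m = 6_371_000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def in_geofence(device, business, radius_m):
    """True if the device's reported position falls within the advertiser's radius."""
    return haversine_meters(*device, *business) <= radius_m

# Hypothetical device a few blocks from a business that set a 500-meter geofence.
device_position = (38.8980, -77.0365)
business_position = (38.9007, -77.0340)
print(in_geofence(device_position, business_position, radius_m=500))  # True
```

In practice such checks run across many devices and many advertisers, which is one reason precise location data carry commercial value for advertising.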
Furthermore, location-based advertising allows for targeted advertisements and offers to be sent to consumers, who may find them useful. For example, a user at lunchtime may receive and use a coupon for a local restaurant. By allowing companies to access their location data, users expose themselves to privacy risks. These risks include, but are not limited to, disclosure to unknown third parties for unspecified uses, consumer tracking, identity theft, threats to physical safety, and surveillance. According to privacy advocates, when a user agrees to use a service that accesses location data, the user is unlikely to know how his or her location data may be used in ways beyond enabling the service itself. The secondary uses of location data are generally not transparent to the consumer. Therefore, location data may be shared with third parties unknown to the consumer. Generally speaking, once location data are shared with a non-carrier, consumers have a limited ability to know about or influence the data’s use. Third parties that receive shared location information may vary in the levels of security protection they provide. If any of these entities has weak system protections, there is an increased likelihood that the information may be compromised. According to the congressional testimony of a privacy researcher, privacy notices rarely differentiate between first- and third-party data uses and generally do not reveal specific business partners such as advertising networks, thus making it difficult for consumers to understand privacy risks. Because consumers do not know who these entities are or how they are using consumers’ data, consumers may be unable to make meaningful choices and judge whether they are disclosing their data to trustworthy entities. When mobile location data are collected and shared, users may be tracked for marketing purposes without their consent. Since users often carry their mobile devices with them and can use them for various purposes, location data along with data collected on the device may be used to form a comprehensive record of an individual’s activities. Amassing such data over time allows for the creation of a richly detailed profile of individual behavior, including habits, preferences, and routines—private information that could be exploited. Furthermore, since non-carriers’ use of location data is unregulated, these companies do not have to disclose how they are using and sharing these profiles. Consumers may believe that using these personal profiles for purposes other than providing a location-based service constitutes an invasion of privacy, particularly if the use is seen as contrary to consumers’ expectations and results in unwanted solicitations or other nuisances. Identity theft occurs when someone uses another person’s personal or financial information to commit fraud or other crimes. When sensitive information such as location data is disclosed, particularly when it is combined with other personal information, criminals can use this information to steal identities. The risk of identity theft grows whenever entities begin to collect data profiles, especially if the information is not maintained securely. By illicitly gaining access to these profiles, criminals acquire information such as a user’s name, address, interests, and friends’ and co-workers’ names.
In addition, a combination of data elements—even elements that do not by themselves identify anyone, such as individual points of location data—could potentially be used in aggregate to discern the identity of an individual. Furthermore, keeping data long-term, particularly if it is in an identifiable profile, increases the likelihood of identity theft. When mobile location data are collected and shared, users could be put at risk for personal threats if the data are intercepted by people who mean them harm. This is a potential concern for those people who do not want specific individuals to know where they are or how to find them, such as victims of domestic violence. Location data may be used to form a comprehensive record of an individual’s movements and activities. If disclosed or posted, location data may be used by criminals to identify an individual’s present or probable future location, particularly if the data also contain other personally identifiable information. This knowledge may then be used to cause harm to the individual or his property through, for instance, stalking or theft. Access to location information also raises child safety concerns as more and more children access mobile devices and location-based services. According to the American Civil Liberties Union (ACLU), location updates that users provide through social media have been linked to robberies, and GPS technology has been involved in stalking cases. Law enforcement agencies can obtain location data via court order, and such data can be used as evidence. However, according to a report by the ACLU, law enforcement agents could potentially track innocent people, such as those who happened to be in the vicinity of a crime or disturbance. For example, the ACLU reported in 2010 that Federal Bureau of Investigation agents investigating a series of bank robberies sought the records of every mobile phone that was near each bank when it was robbed. Furthermore, law enforcement agencies access location data frequently, access that could add to concerns about the potential for misuse. For example, in May 2012, Sprint-Nextel reported that it had received over 196,000 court orders for location information over the last 5 years. Users generally do not know when law enforcement agencies access their location data. In addition to information related to a crime, the location data collected by law enforcement may reveal potentially sensitive destinations, such as medical clinics, religious institutions, courts, political rallies, or union meetings. Mobile industry associations and privacy advocacy organizations have recommended practices for industry to better protect consumers’ privacy while making use of customers’ personal information. Companies we examined have developed privacy policies to disclose information to consumers about the collection of location data and other personal information, but have not consistently or clearly disclosed to consumers what the companies are doing with these data or which third parties they may share them with. Industry associations and privacy advocacy organizations have recommended practices for the mobile industry to better protect consumers’ privacy while making use of their personal information. These recommended practices include actions to notify users about the collection and use of their location data, ways users can control data collection, safeguards for user data, and actions to demonstrate accountability. 
The recommended practices we identified generally align with the FIPs discussed earlier. For example, providing users with controls allowing them to opt in or opt out of having their location data collected aligns with the FIP principles of collection limitation, use limitation, and individual participation, since such controls allow users to limit the collection and use of their personal information while providing them greater ability to be informed about and control how their data are used. Specific examples of recommended practices are shown in table 2. Although companies we examined have taken steps to protect the privacy and security of location data, they have not done so consistently, and their actions sometimes fall short of the recommended practices we identified. The 14 mobile industry companies we examined reported actions to inform users about the collection, use, and sharing of their location data primarily through disclosures in their privacy policies. Companies also disclosed information about ways consumers could control location data collection, how long companies retain location data, how companies safeguard the data, and companies’ measures to demonstrate accountability, although how companies addressed these issues varied. While companies’ disclosures routinely informed consumers that their location data were being collected, companies’ disclosures did not consistently or clearly explain the purposes behind such collection or identify which third parties these data might be shared with. Recommended practices state that companies should clearly disclose to consumers the collection and use of location data and purpose for doing so. We found that while companies used privacy policies to inform users about location data collection, information about use and sharing was sometimes unclear. All 11 of the mobile carriers, operating system developers, and application developers we examined had privacy policies. Ten of the 11 privacy policies we examined disclosed that the company collected consumers’ location data. However, some policies were not clear about how the companies used location data. For example, the privacy policies of 4 of the companies we examined stated ways the companies used “personal information,” but did not state whether location data were considered “personal information.” It was therefore unclear whether these uses applied to location data. Companies’ policies on whether location data were considered personal information varied. Apple’s privacy policy, for example, stated that it considered location data to be nonpersonal information. In contrast, T-Mobile’s policy stated that location is personally identifiable information. Furthermore, representatives from four of the companies told us that whether location data is considered personal information depends on factors such as how precise the data are and whether they are combined with other information about the user. The operating system developers reported they collected location data in an anonymous manner or took steps to de-identify stored data. In contrast, 3 of the application developers we interviewed stated they stored location data with other personal information about their users. Carriers told us that their practices varied, depending on the specific use of the data. Recommended practices state that companies should inform consumers about third parties the companies share consumers’ data with and the purposes for doing so.
Most policies we examined stated the types of third-party companies location data may be shared with, such as application developers and advertisers; however, some policies described third parties with vague terms such as “trusted businesses” or “others.” Although some policies stated that the company takes steps to protect this information, such as requiring the third party to follow the company’s privacy policy, others made no such statement, and one company’s policy said it would not be liable if the third party it shares data with fails to protect it. According to literature examining mobile applications, some applications lack privacy policies and consumers often do not know which companies may receive their personal information after it has been collected by the applications. Companies also used other methods in addition to privacy policies to inform consumers about location data collection and use, including some methods that informed consumers directly through their phones. For example, some smartphone screens display an icon to indicate when location information is actively being used. Recommended practices state that companies should obtain users’ consent for collecting, using, and sharing personal information, including location data and explain related controls to users. Companies we contacted reported providing methods for users to control collection and use of location data, but the methods and amount of control varied. Most of these companies indicated that users could control smartphones’ use of their location data from the phone; however, the ability to control this varied by operating system, with some providing more options. While all of the operating system developers we examined allowed a user to have location access turned on or off for all applications, some gave users the ability to control whether specific applications could have access to location data. According to the literature we reviewed that examined mobile applications, controls within applications, if available, were sometimes difficult to find. Mobile carriers told us that they do not allow users to control collection of location data for providing basic phone service, since having location data is necessary to provide that service. All the companies we examined that collected data for providing location- based services indicated that users must first provide consent before location-based services use their location; however, privacy policies we examined did not always explain how users’ consent is obtained. Companies told us that a smartphone seeks permission from the user to use location when the user installs an application that makes use of location or the first time the user activates such an application. For example, the iPhone iOS operating system displays a pop-up window the first time a user activates a new application that includes location-based services. The pop-up states that the application is seeking to use the user’s location and allows the user to accept or decline. Similarly, Android smartphones notify users that an application will use location at the time a user downloads a new application and seeks user consent through this process. The recommended practices we reviewed state that companies should not keep personal information such as location data longer than needed, and some organizations encouraged companies to state a specific data retention time frame. However, 7 of the 11 privacy policies we reviewed did not include a statement about how long the company kept location data. 
Officials from most companies told us they kept location data only as long as needed for a specific purpose; however, in some cases, this could mean keeping location data indefinitely. The carriers we interviewed named specific time periods for location data retention, which they said varied depending on the specific uses of the data, and reported retention periods ranging from a few days to 3 years beyond the duration of time a user is a customer with the company. Three companies indicated they kept location data indefinitely, and representatives from one company said they had not established a retention time period. Privacy advocates raised data retention as a particular concern, since the longer companies retain location data, the more likely the potential for misuse. Similarly, FTC’s March 2012 report on protecting consumers’ private information stated that companies should delete location data as soon as possible, consistent with the services they provide to consumers. Recommended practices consistently stated the need for companies to safeguard collected user data. Companies reported actions to safeguard users’ location data, but practices for how data are safeguarded varied. All the companies we examined reported ways they safeguard users’ personal information. For example, all of the privacy policies stated that companies had general security measures in place to protect personal information against loss, theft, or misuse. Specific practices reported by some companies included data encryption, erecting firewalls, and restricting employee access. In some cases, however, it was not clear whether these protections covered location data. As stated above, some privacy policies did not state whether location was considered a form of personal information, and thus it was unclear whether stated safeguards for personal information applied to location data. Most of the recommended practices expressed the need for companies to demonstrate accountability for their practices. However, companies’ privacy policies reported few, if any, specific measures for accountability. Five of the 11 privacy policies included general statements that employees were accountable for following the company’s policies as outlined in the privacy policy. A few privacy policies also mentioned that the company followed recommended practices; one carrier’s policy stated the company followed recommended practices developed by CTIA-The Wireless Association (CTIA), a nonprofit organization representing mobile carriers and other wireless companies, and 3 companies’ policies stated their privacy practices had been certified by TRUSTe, a company that helps companies address privacy issues. Three of the carriers also told us they use their contracts with third parties they share users’ personal data with to require those third parties to adhere to CTIA recommended practices for location data. Operating system developers reported varying steps to encourage or require developers of applications that run on their systems to inform users and obtain consent before using their location data. For example, in 2011, Apple stated that it would reject applications from its on-line store that do not obtain consent from the user before collecting, transmitting, or using a user’s location data and that such use must be directly relevant to the features and services provided by the application.
In contrast, Google stated that it does not control the behavior of third-party applications in handling location data, but encourages the developers to follow common privacy practices, such as giving users a choice regarding data collection and collecting only necessary information. Companies’ inconsistent adherence to recommended practices increases the likelihood that users could be exposed to the privacy risks we discussed previously. For example, because companies have not made clear and consistent disclosures about how they use and share location data, consumers may be unaware which third parties are using their location data (or that third parties are using it at all) and that law enforcement may obtain their location data and use it for surveillance. Furthermore, because consumers are expected to rely on these disclosures when judging whether they should give consent to a company to access their location, consumers may be providing such consent without complete knowledge of how their data will be used. For example, although privacy policies generally discussed that users’ data could be shared with third parties, they sometimes included vague statements like “trusted business partners” rather than specifying the types of companies they shared the data with and the reasons for doing so. Consequently, users lack sufficient information to adequately judge whether they should trust those companies with their personal information. Privacy advocates we spoke to acknowledged that companies have taken some positive steps to protect privacy, but that the current framework of self-regulation is exposing consumers to unnecessary risks. These advocates said that companies are generally disclosing to users that they will collect location data; however, they are not adequately informing consumers about the uses of the data they collect, including with whom they are sharing the data. These advocates also expressed concern about companies retaining location data longer than necessary, which puts the data at increased risk of inappropriate use. Furthermore, they told us the current framework of self-regulation is insufficient to address these concerns because there are no requirements for companies to consistently implement recommended practices to protect privacy. Federal agencies that have examined location-based services have also noted that the benefits from such services come with concerns. For example, FCC, in its 2012 report on location-based services, noted that such services are expected to deliver $700 billion in value to consumers and business users over the next decade. However, in summarizing views of participants in a 2011 panel discussion, the FCC report noted that panelists found inconsistency in the privacy notices provided by companies and incomplete disclosure of the ways location data are used. Specifically, the report states that while consumers may have clear notice that an application will collect and use data on their location, these data may be subsequently used in ways that are not transparent to consumers or shared with third parties without consumers’ consent. FTC, in its report on protecting consumer privacy, noted that the unauthorized disclosure to third parties of sensitive personal information such as precise location data raises privacy concerns resulting from the unanticipated uses of these data. 
Federal agencies that have responsibility for consumer privacy protection or that interact with the mobile industry have taken steps to promote public awareness, such as providing educational outreach and recommending actions aimed at improving consumer privacy. However, additional actions could be taken to further protect consumers. For example, NTIA has not defined performance goals for its proposed multistakeholder process, which consists of different groups involved with consumer privacy coming together to discuss relevant issues with the goal of developing codes of conduct for consumer privacy. Additionally, FTC has not issued comprehensive guidance to mobile industry companies with regard to actions they should take to protect mobile location data privacy. Several federal agencies that interact with the mobile industry or have responsibilities for consumer privacy protection have provided educational outreach to the public, developed reports with recommendations aimed at protecting consumer privacy, developed regulatory standards that address mobile location data privacy, and developed guidance for law enforcement on obtaining mobile location data. FCC and FTC have held educational outreach events, and FTC has developed a fact sheet to educate the public on various privacy issues related to location-based services. In June 2011, the agencies collaborated to hold a public education forum that explored how consumers can be both knowledgeable and secure when utilizing location-based services. Participants in the forum included representatives from mobile carriers, technology companies, consumer advocacy groups, and academia. Specific topics discussed included how location-based services work; trends, benefits, and risks of location-based services; industry recommended practices; and what parents should know about location tracking when their children use mobile devices. Also in June 2011, FTC issued an informational fact sheet that provided basic information on mobile applications and answered questions on privacy, advertising, and security concerns. Specific topics included the types of data that applications can access on users’ devices, the reasons a user’s phone collects location data, and ways that applications can cause harm to a user’s phone. In May 2012, FTC held a public workshop on advertising and privacy disclosures to discuss the need for new guidance for online advertisers about making disclosures. Participants included consumer advocates, representatives of industry groups, and academics. The workshop covered topics including when, where, and how required disclosures should be made; the techniques to increase or decrease the likelihood that consumers will actually read a required disclosure; the challenges and best approaches to making adequate disclosures given the screen size constraints of mobile devices; and the steps companies can take to communicate with consumers in a clear and consistent way about the companies’ privacy practices. In August 2012, FTC issued guidance for application developers to help developers comply with truth-in-advertising standards and basic privacy principles. The guidance discusses the need for developers to be clear to users about companies’ practices to collect and share data, to offer users ways to control how their personal information is collected and shared, and the need to keep users’ data secure, among other issues.
Several agencies have issued or prepared reports that offered recommendations aimed at improving consumer privacy, including location-based services. In February 2012, NTIA prepared a report for the White House on protecting privacy and promoting innovation in the global digital economy. The report offered a framework and expectations for companies that use personal data. The framework includes a consumer privacy bill of rights, a multistakeholder process to specify how the principles in the bill of rights apply in particular business contexts, and effective enforcement. The report also urged Congress to pass consumer data privacy legislation that would, among other things, codify the consumer privacy bill of rights described in the report, grant FTC authority to enforce the bill of rights, and create a national standard under which companies must notify consumers of unauthorized disclosures of certain kinds of personal data. Also in February 2012, FTC issued a report on privacy disclosures for mobile applications aimed at children. This report highlighted the lack of information available to parents prior to downloading mobile applications for their children and called on the mobile industry to provide greater transparency about their data practices. The report recommended, among other things, that all companies that are involved in developing children’s applications—the application stores, developers, and third parties providing services within the applications—should play an active role in providing key information to parents who download applications through simple, short disclosures that are easy to find and understand on the small screen of a mobile device. In March 2012, FTC issued another report that laid out recommendations for businesses and policy makers aimed at protecting consumer privacy. The report described recommended practices for companies that collect and use consumer data to develop and maintain processes and systems to implement privacy and data security practices. These practices include promoting consumer privacy at every stage of the development of products and services, and giving consumers greater control over the collection and use of their personal data through simplified choices and increased transparency. The report also included recommendations to companies that make use of precise mobile location data, including that they should obtain affirmative express consent from consumers before collecting precise location data; limit collection to data needed for a requested service or transaction; establish standards that address data collection, transfer, use, and disposal, particularly for location data; and, to the extent that location data are collected and shared with third parties, work to provide consumers with more prominent notice and choices about such practices. The report also called on Congress to consider enacting baseline privacy legislation, reiterated FTC’s call for legislation governing data security and data broker issues, and urged the industry to accelerate the pace of self-regulation. In May 2012, FCC issued a report, Location-Based Services: An Overview of Opportunities and Other Considerations (Washington, D.C.: May 25, 2012), that describes, among other things, the privacy challenges these services present, the actions the industry is taking to respond to these challenges, and new issues that continue to emerge in this area. There have been three relevant regulatory actions in the area of protecting mobile location data.
In 1998, FCC, implementing requirements of section 222 of the Communications Act, as amended, developed rules to protect CPNI; subsequently, the law was amended to clarify that CPNI includes subscribers’ call location data that carriers use to provide telecommunications services. As previously discussed, FCC’s regulations limit instances where CPNI can be used or disclosed without customer consent. In November 2000, CTIA proposed the adoption of location information privacy principles that covered the issues of notice, consent, security and integrity of information, and technology neutrality and urged FCC to conduct a rulemaking separate from its general CPNI proceeding, based on CTIA’s assessment that the location privacy question is uniquely a wireless concern. In July 2002, FCC declined to initiate a rulemaking because it opined that the amendments to the Communications Act imposed protections for consumers, such as requiring express approval before carriers can use consumers’ location information. The Commission decided that rules would be unnecessary and potentially counterproductive because of the still-developing market for location-based services and that CTIA’s proposed privacy principles could be adopted by mobile industry companies on a voluntary basis. In September 2011, FTC proposed amending its rule pertaining to the Children’s Online Privacy Protection Act that would revise the definition of personal information to explicitly include location data. According to FTC officials, there is no time frame for the issuance of a final rule in this proceeding, as the Commission is still in the process of evaluating comments. In June 2012, FCC solicited comments regarding the privacy and data security practices of mobile wireless service providers with respect to customer information stored on their users’ mobile communications devices, which could include location information, and the application of existing privacy and security requirements to that information. Because the Commission last solicited public input on this question 5 years ago and technologies and business practices in this area have changed, the Commission sought comments on a variety of issues, including: the applicability and significance of telecommunications carriers’ duty under section 222(a) of the Communications Act to protect customer information stored on their users’ mobile communications devices; whether the definition of CPNI could apply to information collected at a carrier’s direction even before it has been transmitted to the carrier; what factors are relevant to assessing a wireless provider’s obligations under section 222 of the Communications Act, as amended, and the Commission’s implementing rules, or other provisions of law within the Commission’s jurisdiction, and in what ways; what privacy and security obligations should apply to customer information that service providers cause to be collected by and stored on mobile communications devices; and what should be the obligations when service providers use a third party to collect, store, host, or analyze such data. See 77 Fed. Reg. 35336 (June 13, 2012). Justice has developed guidance on how law enforcement may obtain mobile location data, which is primarily obtained through various court orders. These methods have been the subject of recent litigation.
There are various methods in which mobile location data can be obtained, including, but not limited to: Warrant: A warrant allows law enforcement to obtain prospective mobile location data generated by GPS or similar technologies (i.e., where the device is currently located). To obtain a warrant for these data, the government must establish probable cause to believe that the data sought will aid in a particular apprehension or conviction. This method requires the highest standard of evidence of all methods outlined below. Section 2703(d) Court Order: A 2703(d) court order allows law enforcement officials to obtain certain kinds of historical mobile location data (i.e., where the device was located in the past) that providers collect for business purposes. To obtain this order, the government must offer specific and articulable facts showing that there are reasonable grounds to believe that the data are relevant and material to an ongoing criminal investigation. Hybrid Order: Justice has routinely acquired, since at least 2005, certain categories of prospective mobile location data generated by cell tower information through the combination of two court orders, the Pen/Trap court order and the 2703(d) order. The combination order is known as a “hybrid order.” To obtain this order, law enforcement officials must affirm that the information likely to be obtained is relevant to an ongoing criminal investigation and further demonstrate specific and articulable facts showing that there are reasonable grounds to believe that the information sought is relevant and material to an ongoing criminal investigation. This order is used because the Communications Assistance for Law Enforcement Act of 1994 precludes law enforcement officials from relying solely on the authority of the Pen/Trap statute to obtain cell tower data for a mobile customer. Section 2702 Voluntary Disclosure: Communications providers are permitted by law to voluntarily disclose information to law enforcement if the provider, in good faith, believes that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay of communications relating to the emergency. As already described, law enforcement agencies access location data frequently using these various authorities. Law enforcement’s use of location information has spurred courts to review government actions to compel third parties to disclose location data, as judges question and examine what legal standards govern law enforcement access to historical and prospective location information. For example, in 2010, a federal district court in Texas denied government applications for historical cell site data, declaring that compelled warrantless disclosure of cell site data violates the Fourth Amendment. In contrast, in 2012, a federal district court in Maryland upheld the government’s use of historical cell site data, concluding that the privacy issues surrounding the collection of historical cell site location records are best left for Congress to decide. Concerns have been raised by privacy advocacy groups about the methods law enforcement can use to obtain location data. For example, the ACLU has opined that existing privacy laws fail to provide adequate legal protections for the increasingly detailed information that is collected by location-based services about consumers’ physical locations and that consumers, location-based service providers, and the government are thus acting in uncertain legal territory.
Further, most of the privacy advocates we spoke to opined that the government should obtain a warrant based on probable cause of a crime before it tracks, prospectively or historically, the location of a mobile phone or other mobile communications device. This approach seeks to treat historical and prospective location information equally and would require law enforcement to meet a higher standard before obtaining access to any location data. Our Standards for Internal Control in the Federal Government, in conjunction with the Government Performance and Results Act of 1993, state that agencies should set performance goals with specific timelines and measures for program performance. These documents assert that in order to better articulate a results orientation, agencies should create a set of performance goals and measures that addresses important dimensions of performance. They also assert that agencies should use intermediate goals and measures to show progress or contribution to intended results, while including explanatory information on the goals and measures. Following the February 2012 report on consumer privacy, NTIA began implementing a multistakeholder process, which includes, among other groups, individual companies, industry groups, privacy advocates, and consumer groups. The purpose of the process is to develop codes of conduct that implement the general privacy principles presented in the report and that would be enforceable by FTC if the codes are publicly and affirmatively adopted by mobile industry companies. NTIA believes that the proposed process can provide the flexibility, speed, and decentralization necessary to address policy challenges by facilitating participants’ working together to find creative solutions. NTIA also stated that another key advantage of the multistakeholder process is that it can produce solutions in a more timely fashion than a regulatory process. NTIA officials stated that because they are in the beginning stages of defining what the overall process would entail, they could not provide specific information about procedures, deliverables, or time frames. The first session was held on July 12, 2012, and addressed how companies providing applications and interactive services for mobile devices can be transparent about how the companies handle personal data. Officials stated that since the sessions will be driven by the stakeholders, they were unsure if the sessions would cover location data; however, in its comments responding to a draft of this report, NTIA stated that it appears likely stakeholders will address transparency of mobile location- based services based on the topic of conversation at the July meeting. NTIA officials said they planned to hold further discussion sessions, where stakeholders would meet to address distinct issues, but all of the topics have not yet been identified and would be based on recommendations from the stakeholders. Officials stated there is no defined timeline for the remaining discussion sessions or the development of the guiding principles, although in August 2012, NTIA indicated that seven meetings had been scheduled before the end of 2012. Lacking defined performance goals, milestones, and deliverables, it is unclear whether NTIA’s multistakeholder process will establish an effective means for addressing mobile location data privacy issues. 
NTIA officials stated that individual companies’ compliance with the codes of conduct produced through the process would be voluntary and that it is uncertain that the process will yield company self-regulations or a third- party monitored code. If companies do not volunteer to follow any resulting principles, enforcement would depend on whether a company’s failure to adhere to the agreed-upon practices could be considered an unfair practice. As such, the proposed process does not include any mechanism for enforcing compliance with the guiding principles that may be developed, and NTIA cannot offer any assurance that the results of the process will lead to significant adoption of these principles. FTC has the authority to take legal action against a company that engages in unfair acts affecting commerce, such as companies engaging in unfair business practices that are likely to cause substantial injury to consumers, which are not reasonably avoidable by consumers themselves. FTC has begun to address mobile location issues by holding public workshops and by releasing a report that laid out recommendations aimed at protecting consumer privacy. It has also developed some guidance for companies that collect, use, and share mobile location data, such as including recommendations on location data collection in its March 2012 consumer data privacy report, including recommendations on improving disclosures to parents about the collection and use of personal information by applications geared toward children in its February 2012 report on that subject, and issuing guidance for application developers regarding collection and use of location data in August 2012. While these various guidelines touch on a number of issues related to mobile location data privacy, FTC has not published comprehensive industry guidance on its views of appropriate actions by mobile companies with regard to privacy. Specifically, by publishing an industry guide for these companies, FTC could help clarify for mobile companies its views on the appropriate actions for protecting privacy of consumers’ location data. Doing so could help set expectations for industry on appropriate steps to protect consumers’ privacy if the issue has not been adequately addressed through the development and adoption of industry codes or the enactment of legislation. Such guidance could also clarify for companies circumstances under which FTC might take enforcement action against unfair acts. The use and sharing of mobile location data offer benefits to mobile industry companies and consumers, such as providing and improving services and increasing advertising revenue. Nonetheless, these activities can also pose several risks to privacy, including disclosing data to unknown third parties for unspecified uses, consumer tracking, identity theft, threats to personal safety, and surveillance. While mobile industry associations and privacy advocacy organizations have recommended practices for industry to better protect consumers’ privacy while making use of customers’ personal information, these practices are not mandatory for the companies to implement. Mobile industry companies we examined have inconsistently implemented these practices. In particular, the lack of clear disclosures to consumers about how their location data are used and shared means that consumers lack adequate information to provide informed consent about the use of these data. 
Consumers are therefore unable to adequately judge whether the companies with which their data are shared are putting their privacy at risk. A key federal effort to address these privacy risks is NTIA’s planned multistakeholder process, which seeks to develop industry codes of conduct. However, NTIA has not defined the effort’s performance goals, milestones, or deliverables. It is therefore unclear if this process will address the risks to privacy associated with the use and sharing of mobile location data. While NTIA recommended that FTC should be granted the authority to enforce any industry codes of conduct that are developed from the multistakeholder process, the current process relies on the industry’s voluntary compliance with resulting codes of conduct before FTC could enforce the provisions. Regardless of what results from the multistakeholder process, FTC has authority to take action against companies that engage in unfair and deceptive practices. However, FTC has not issued comprehensive industry guidance establishing its views on the appropriate actions that mobile companies should take to protect consumers’ mobile location data privacy. Without clearer expectations for how industry should address location privacy, consumers lack assurance that the aforementioned privacy risks will be sufficiently mitigated. To address privacy risks associated with the use and sharing of mobile location data, we recommend that the Secretary of Commerce direct NTIA, in consultation with stakeholders in the multistakeholder process, to develop specific goals, time frames, and performance measures for the multistakeholder process to create industry codes of conduct. To further protect consumer privacy, we recommend that the Chairman of FTC consider issuing industry guidance that establishes FTC’s views of the appropriate actions by mobile companies with regard to protecting mobile location data privacy. In developing the guidance, FTC could consider inputs such as industry codes developed through the NTIA multistakeholder process, recommended practices from industry and privacy advocates, and practices implemented by mobile industry companies. We provided drafts of this report to Commerce, FCC, FTC, and Justice for comment. We also provided relevant portions of the draft to mobile industry companies for comment. We received technical clarifications from all of the agencies and some of the companies, which we incorporated into the report as appropriate. FCC and Justice did not provide comments on the draft. Commerce provided written comments on a draft of this report, which appear in appendix II. The department disagreed with our recommendation to develop specific goals, time frames, and performance measures for the multistakeholder process to create industry codes of conduct to address privacy risks associated with the use and sharing of mobile phone location data. Specifically, Commerce’s letter stated that while NTIA worked with stakeholders to establish a framework that encourages meaningful progress, it is not the agency’s role to dictate timelines and deliverables, and that to do so could be counterproductive. We continue to believe that setting goals and time frames for the process could provide stakeholders and consumers with better assurance that the process will indeed result in the timely creation of industry codes to address privacy issues, as called for in the report on consumer privacy that NTIA prepared and that was released by the White House in February 2012. 
Furthermore, in its letter, Commerce acknowledged NTIA’s role in setting a date and selecting a topic for the first multistakeholder process convened in July 2012 and a second process planned to begin in the fall. Thus, we believe it is reasonable to suggest that within its role to initiate and facilitate these meetings, NTIA could work with stakeholders to prioritize consideration of mobile phone location data privacy so that this issue, which, as we previously discussed, has been identified as a particular area of concern by privacy advocates and government agencies, is addressed in a timely manner. We have also revised the wording of the recommendation to state that NTIA’s efforts should be done in consultation with the appropriate stakeholders involved in the multistakeholder process to develop industry codes of conduct. FTC provided written comments on a draft of this report, which appear in appendix III. In its letter, FTC stated that it agreed that additional guidance for industry on mobile location data practices would be useful and stated that the agency will continue efforts to inform and guide the industry on best practices for mobile location data. However, FTC also raised concerns with our draft recommendation calling for such guidance to help inform mobile companies how FTC would enforce the prohibition against unfair acts pursuant to the Commission’s authority under the Federal Trade Commission Act to take enforcement action against a company that engages in unfair acts affecting commerce. FTC stated that what constitutes unfair acts or practices is determined by statute and that the test for determining what is an unfair practice is inherently fact-specific in an area in which technology is changing rapidly. It concluded, therefore, that its business guidance efforts may not necessarily be tied to determinations of what is unfair. Consequently, we modified the wording of our recommendation to FTC to focus on the need for FTC to clarify for mobile industry companies its views on appropriate actions companies should take to protect mobile location data privacy. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the relevant agencies. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Mark L. Goldstein at (202) 512-2834 or goldsteinm@gao.gov, or Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives were to examine (1) how mobile industry companies collect location data, why they use and share these data, and how this affects consumers; (2) the types of actions private sector entities have taken to protect consumers’ privacy and ensure security of location data; and (3) the actions federal agencies have taken to protect consumer privacy and what additional federal efforts, if any, are needed. To address all of the objectives, we examined the practices of mobile industry companies involved in the collection and use of location data, specifically mobile carriers, operating system developers, smartphone manufacturers, and application developers.
We selected the carriers, operating system developers, and manufacturers with the largest market shares in the United States and the application developers using data on the most popular applications for the two operating systems with the largest market share, Apple iOS and Google Android. See table 3 below for a list of the companies we examined. We reviewed and analyzed selected companies’ privacy policies and other publicly available documents. We also interviewed representatives of these companies, except Motorola and Samsung, which provided written answers to our questions, and Apple, which declined to answer our questions. To address our first objective, we reviewed and analyzed relevant literature to determine the various methods companies use to collect location data, why they use and share these data, the benefits that are provided to the consumer, and the associated privacy risks. In addition, we interviewed representatives from mobile industry associations (CTIA – The Wireless Association and Mobile Marketing Association), privacy advocacy groups (American Civil Liberties Union, Center for Democracy and Technology, Electronic Frontier Foundation, Electronic Privacy Information Center, and Future of Privacy Forum), and two privacy researchers (Christopher Soghoian and Ashkan Soltani) who had either testified on the subject before Congress or authored relevant literature on the subject, to discuss the benefits and privacy risks associated with the use of location data. We also interviewed officials from federal agencies that interact with the mobile industry or have responsibilities for consumer privacy protection, including the Federal Communications Commission (FCC), Federal Trade Commission (FTC), Department of Commerce’s National Telecommunications and Information Administration (NTIA), and Department of Justice (Justice), to obtain their views. To address our second objective, in addition to examining the companies as previously discussed, we identified practices recommended by mobile industry associations and privacy advocacy groups to protect the privacy of and secure users’ personal information and assessed the extent to which they are consistent with the Fair Information Practices. In addition, we reviewed and analyzed the privacy policies of the selected mobile industry companies to determine their specific practices to protect consumer privacy and how their stated practices aligned with recommended practices. We also reviewed relevant studies of mobile application privacy to obtain further information on how mobile application developers protect consumer privacy. We also interviewed representatives of privacy advocacy groups to obtain their views about how the private sector is protecting users’ location privacy. To address our third objective, we identified and reviewed relevant laws applicable to the mobile industry’s use of personal information. To evaluate how federal agencies have ensured compliance with relevant laws and what additional efforts they could take to further protect consumers, we analyzed information and interviewed officials from FCC, FTC, NTIA, and Justice about their enforcement, regulatory, and policymaking efforts to protect consumer privacy. We also interviewed representatives from mobile industry associations and privacy advocacy groups as well as privacy researchers to obtain their views about whether more could be done to protect consumer privacy. 
In considering ways to address location data privacy issues, we are reporting actions federal agencies could take, rather than potential legislative options. We conducted this performance audit from December 2011 to September 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, Michael Clements (Assistant Director), John de Ferrari (Assistant Director), Russell Burnett, Mark Canter, Marisol Cruz, Colin Fallon, Andrew Huddleston, Josh Ormond, David Plocher, Meredith Raymond, and Crystal Wesco made key contributions to this report. | Smartphones can provide services based on consumers' location, raising potential privacy risks if companies use or share location data without consumers' knowledge. FTC enforces prohibitions against unfair and deceptive practices, and NTIA sets national telecommunications policy. GAO was asked to examine this issue. GAO reviewed (1) how mobile industry companies collect location data, why they share these data, and how this affects consumers; (2) actions private sector entities have taken to protect consumers' privacy and ensure security of location data; and (3) actions federal agencies have taken to protect consumer privacy and what additional federal efforts, if any, are needed. GAO analyzed policies and interviewed representatives of mobile industry companies, reviewed documents and interviewed officials from federal agencies, and interviewed representatives from industry associations and privacy advocates. Using several methods of varying precision, mobile industry companies collect location data and use or share that data to provide users with location-based services, offer improved services, and increase revenue through targeted advertising. Location-based services provide consumers access to applications such as real-time navigation aids, access to free or reduced-cost mobile applications, and faster response from emergency services, among other potential benefits. However, the collection and sharing of location data also pose privacy risks. Specifically, privacy advocates said that consumers: (1) are generally unaware of how their location data are shared with and used by third parties; (2) could be subject to increased surveillance when location data are shared with law enforcement; and (3) could be at higher risk of identity theft or threats to personal safety when companies retain location data for long periods or share data with third parties that do not adequately protect them. Industry associations and privacy advocates have developed recommended practices for companies to protect consumers' privacy while using mobile location data, but companies have not consistently implemented such practices. Recommended practices include clearly disclosing to consumers that a company is collecting location data and how it will use them, as well as identifying third parties that companies share location data with and the reasons for doing so. Companies GAO examined disclosed in their privacy policies that the companies were collecting consumers' location data, but did not clearly state how the companies were using these data or what third parties they may share them with. 
For example, some companies' policies stated they collected location data and listed uses for personal information, but did not state clearly whether companies considered location to be personal information. Furthermore, although policies stated that companies shared location data with third parties, they were sometimes vague about which types of companies these were and why they were sharing the data. Lacking clear information, consumers faced with making a decision about whether to allow companies to collect, use, and share data on their location would be unable to effectively judge whether the uses of their location data might violate their privacy. Federal agencies have held educational outreach events, developed reports with recommendations aimed at protecting consumer privacy, and developed some guidance on certain aspects of mobile privacy. The Department of Commerce's National Telecommunications and Information Administration (NTIA) is implementing an administration-proposed effort to bring industry, advocacy, and government stakeholders together to develop codes of conduct for industry to address Internet consumer privacy issues generally. However, NTIA has not set specific goals, milestones, and performance measures for this effort. Consequently, it is unclear if or when the process would address mobile location privacy. Furthermore, the Federal Trade Commission (FTC) could enforce adherence to the codes if companies adopted them, but since adoption is voluntary, there is no guarantee companies would adopt the resulting codes. While FTC has issued some guidance to address mobile location privacy issues, it has not issued comprehensive guidance that could inform companies of the Commission's views on the appropriate actions companies should take to protect consumers' mobile location data privacy. GAO recommends that NTIA work with stakeholders to outline specific goals, milestones, and performance measures for its process to develop industry codes of conduct and that FTC consider issuing guidance on mobile companies' appropriate actions to protect location data privacy. Because the agencies had concerns about certain aspects of GAOs draft recommendations, GAO revised them by including that NTIA should work with stakeholders in the process to develop industry codes and removing from the draft FTC recommendation that the guidance should include how FTC will enforce the prohibition against unfair practices. |
The FBI serves as the primary investigative unit of the Department of Justice. The FBI’s mission includes investigating serious federal crimes, protecting the nation from foreign intelligence and terrorist threats, and assisting other law enforcement agencies. Approximately 12,000 special agents and 16,000 analysts and mission support personnel are located in the bureau’s Washington, D.C., headquarters and in more than 70 offices in the United States and in more than 50 offices in foreign countries. Mission responsibilities at the bureau are divided among a number of major organizational components, including: Administration: manages the bureau’s personnel programs, budgetary and financial services, records, information resources, and information security. National Security: integrates investigative and intelligence activities against current and emerging national security threats and provides information and analysis for the national security and law enforcement communities. Criminal Investigations: investigates serious federal crimes and probes federal statutory violations involving exploitation of the Internet and computer systems. Law Enforcement: provides law enforcement information and forensic services to federal, state, local, and international agencies. Office of the Chief Information Officer: develops the bureau’s IT strategic plan and operating budget and develops and maintains technology assets. To execute its mission responsibilities, the FBI relies extensively on the use of information systems. In particular, the bureau operates and maintains hundreds of computerized systems, databases, and applications, such as the Combined DNA Index System, which supports forensic examinations; the National Crime Information Center and the Integrated Automated Fingerprint Identification System, which help state and local law enforcement agencies identify criminals; the Automated Case Management System, which manages information collected on investigative cases; the Investigative Data Warehouse, which aggregates data from disparate databases in a standard format to facilitate content management and data mining; and the Terrorist Screening Database, which consolidates identification information about known or suspected international and domestic terrorists. Following the terrorist attacks in the United States on September 11, 2001, the FBI shifted its mission focus to detecting and preventing future attacks and began to reorganize and transform. According to the bureau, the complexity of this mission shift, along with the changing law enforcement environment, strained its existing IT environment. As a result, the bureau accelerated the IT modernization program that it had begun in September 2000. This program, later named Trilogy, was the FBI’s largest IT initiative to date and consisted of three parts: (1) the Information Presentation Component to upgrade FBI’s computer hardware and system software, (2) the Transportation Network Component to upgrade the agency’s communication network, and (3) the User Application Component to upgrade and consolidate the bureau’s five key investigative software applications. The heart of this last component became the Virtual Case File (VCF) project, which was intended to replace the obsolete Automated Case Support system, FBI’s primary case management application. While the first two components of Trilogy experienced cost overruns and schedule delays, in part because of fundamental changes to requirements, both are currently operating. 
However, we recently reported that certain information security controls over the Trilogy-related network were ineffective in protecting the confidentiality, integrity, and availability of information and information resources. For instance, we found that FBI did not consistently (1) configure network devices and services securely to prevent unauthorized insider access; (2) identify and authenticate users to prevent unauthorized access; (3) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (4) apply strong encryption techniques to protect sensitive data on its networks; (5) log, audit, or monitor security-related events; (6) protect the physical security of its network; and (7) patch key servers and workstations in a timely manner. Taken collectively, we concluded that these weaknesses place sensitive information transmitted on the network at increased risk of unauthorized disclosure or modification and could result in a disruption of service. Accordingly, we recommended that the FBI Director take several steps to fully implement key activities of the bureau’s information security program for the network. These activities include updating assessments and plans to reflect the bureau’s current operating environment, providing more comprehensive coverage of system tests, and correcting security weaknesses in a timely manner. In commenting on this report, the FBI’s Chief Information Officer (CIO) concurred with many of our recommendations, but did not believe that the bureau had placed sensitive information at an unacceptable risk for unauthorized disclosure, modification, or insider threat exploitation. The CIO cited significant strides in reducing risk since the Robert Hanssen espionage investigation. In response, we stated that until weaknesses identified in network devices and services, identification and authentication, authorization, cryptography, audit and monitoring, physical security, and patch management are addressed, increased risk to FBI’s critical network remains. Further, until the bureau fully and effectively implements certain information security program activities for the network, security controls will likely remain inadequate or inconsistently applied. The third component of Trilogy—VCF— never became fully operational. In fact, the FBI terminated the project after Trilogy’s overall costs grew from $380 million to $537 million. VCF fell behind schedule and pilot testing showed that completing it was infeasible and cost prohibitive. Among reasons we and others have cited for VCF’s failure were poorly defined system requirements, ineffective requirements change control, limited contractor oversight, and human capital shortfalls because of, for example, no continuity in certain management positions and a lack of trained staff for key program positions. The Sentinel program began in 2005, and is intended to be both the successor to and an expansion of VCF. In brief, Sentinel is to meet FBI’s pressing need for a modern, automated capability for investigative case management and information sharing to help field agents and intelligence analysts perform their jobs more effectively and efficiently. 
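Two of the control categories cited above, enforcing the principle of least privilege and logging security-related events, can be sketched in a few lines of Python. The roles, permissions, and user names below are hypothetical and intended only to illustrate the concepts; they do not represent the bureau’s actual systems or controls.

```python
import logging
from datetime import datetime, timezone

# Hypothetical role-to-permission mapping used to illustrate least privilege:
# each role is granted only the access needed for its duties.
ROLE_PERMISSIONS = {
    "case_analyst": {"read_case"},
    "case_supervisor": {"read_case", "update_case"},
    "network_admin": {"configure_device"},
}

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
audit_log = logging.getLogger("security_audit")


def authorize(user: str, role: str, action: str) -> bool:
    """Allow an action only if the user's role explicitly grants it, and
    record the decision so security-related events can be audited."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.info("user=%s role=%s action=%s allowed=%s time=%s",
                   user, role, action, allowed,
                   datetime.now(timezone.utc).isoformat())
    return allowed


# Example: an analyst may read a case record but not reconfigure a network device.
assert authorize("jsmith", "case_analyst", "read_case")
assert not authorize("jsmith", "case_analyst", "configure_device")
```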
The program’s key objectives are to (1) successfully implement a system that acts as a single point of entry for all investigative case management and that provides paperless case management and workflow capabilities, (2) facilitate a bureau-wide organizational change management program, and (3) provide intuitive interfaces that feature data relevant to individual users. Using commercially available software and hardware components, Sentinel is to provide a range of investigative case management and workflow capabilities, including leads management and evidence management; document and records management, indexed searching, and electronic links to legacy FBI systems and external data sources; training, statistical, and reporting tools; and security management. The FBI chose to use a governmentwide acquisition contract (GWAC) for Sentinel after conducting a multi-step evaluation of the different GWACs available to federal agencies. In August 2005, the FBI issued a request for vendor proposals to more than 40 eligible companies under a National Institutes of Health (NIH) contracting vehicle. According to the CIO, the request was also provided to more than 500 eligible subcontractors. For the next 8 months, FBI’s Sentinel Source Selection Evaluation Team reviewed and evaluated vendors’ responses to the task order request for proposal to determine which proposal represented the best value. The evaluation team recommended—and FBI ultimately chose—Lockheed Martin as the primary Sentinel contractor. In March 2006, the FBI awarded the task order to develop and integrate Sentinel to Lockheed Martin. The FBI has structured the acquisition of Sentinel into four phases; the completion of each is expected to span about 12 to 18 months. According to FBI officials, the FBI is conducting end user training for Phase 1 and expects to roll out Phase 1 to production in June 2007. The specific content of each phase is to be proposed by and negotiated with the prime contractor. The general content of each phase includes these and other capabilities: Phase 1: A Web-based portal that will provide a data access tool for the Automated Case Management System and other legacy systems; a service-oriented architecture definition to support delivery and sharing of common services across the bureau. Phase 2: Case document and records management capabilities, document repositories, improved information assurance, application workflow, and improved data labeling to enhance information sharing. Phase 3: Updated and enhanced system storage and search capabilities. Phase 4: Implementation of the remaining components of the new case management system to replace the Automated Case Support system (ACS). To manage the acquisition and deployment of Sentinel, the FBI established a program management office within the CIO’s office. The program office is led by a program manager and consists of the eight primary FBI units (see fig. 1). Overall, the FBI estimates that the four phases will cost about $425 million through fiscal year 2011. For fiscal year 2005, the FBI reprogrammed $97 million in appropriated funds from various sources to fund Sentinel work. For fiscal years 2006 and 2007, the FBI said it budgeted about $85 million and $138 million, respectively, for Sentinel, of which it reports having obligated about $95 million. For fiscal year 2008, the FBI reports that it has budgeted about $50 million for Sentinel.
Acquisition best practices are tried and proven methods, processes, techniques, and activities that organizations define and use to minimize program risks and maximize the chances of program success. Using best practices can result in better outcomes—including cost savings, improved service and product quality, and, ultimately, a better return on investment. For example, two software engineering analyses of nearly 200 systems acquisitions projects indicated that teams using systems acquisition best practices produced cost savings of at least 11 percent over similar projects conducted by teams that did not employ the kind of rigor and discipline embedded in these practices. In addition, our research shows that best practices are a significant factor in successful acquisition outcomes, including increasing the likelihood that programs and projects will be executed within cost and schedule estimates. We and others have identified and promoted the use of a number of best practices associated with acquiring IT systems. In 2004, we reported on 18 relevant best practices and grouped them into two categories: (1) ten practices for acquiring any type of business system and (2) eight complementary practices that relate specifically to acquiring commercial component-based business systems. Examples of these practices relevant to any business systems acquisition include ensuring that (1) reasonable planning for all parts of the acquisition occurs, (2) a clear understanding of system requirements exists, and (3) risks are proactively identified and systematically mitigated. Examples of best practices relevant to commercial component-based systems acquisitions include ensuring that (1) commercial product modification is effectively controlled, (2) relationships among commercial products are understood before acquisition decisions are made, and (3) the organizational impact of using new system functionality is proactively managed. Each of these practices is composed of from one to eight activities and is summarized in table 1 and described in greater detail in appendix II. The FBI has recognized the importance of IT to transformation, making it one of the bureau’s top ten priorities. Consistent with this, the FBI’s strategic plan contains explicit IT-related strategic goals, objectives, and initiatives (near-term and long-term) to support the collection, analysis, processing, and dissemination of information. This recognition is important because, as we previously reported, the bureau’s longstanding approach to managing IT has not always been fully consistent with leading practices. The effects of this can be seen in, for example, the failure of projects such as VCF. To address these issues, the FBI has, as we reported in 2004, centralized IT responsibility and authority under the CIO and the CIO has taken steps to define and implement management capabilities in the areas of enterprise architecture, IT investment management, systems development and acquisition, and IT human capital. Since 2004, the FBI has continued to make progress in establishing key IT management capabilities. As we previously reported, the FBI has created a life cycle management directive that governs all phases and aspects of the bureau’s IT projects, including Sentinel. The directive includes guidance, planned reviews, and control gates for each project milestone, including planning, acquisition, development, testing, and operational management of implemented systems. 
However, we have also reported that the challenge now for the FBI is to build on these foundational capabilities and implement them effectively on the program and project investments it has under way and planned, including Sentinel. More specifically, we stated that the success of Sentinel will depend on how well the FBI defines and implements its new IT management approaches and capabilities. Among other things, we said that it will be crucial for the FBI to understand and control Sentinel requirements in the context of (1) its enterprise architecture, (2) the capabilities and interoperability of commercially available products, and (3) the bureau’s human capital and financial resource constraints, and to prepare users for the impact of the new system on how they do their jobs. We concluded that not taking these steps will introduce program risks that could lead to problems similar to those that contributed to the failure of the VCF. In this regard, we recently reported on Sentinel’s implementation of IT human capital best practices. We determined that the FBI had moved quickly to staff the Sentinel program office, had created a staffing plan that defined program positions needed for the program, and had filled most of them, primarily with contract staff. However, we also determined that the Sentinel staffing plan addressed only the program office’s immediate staffing needs. It did not provide for the kind of strategic human capital management focus that is essential to success. Exacerbating this situation was that the FBI was not proactively managing Sentinel human capital availability as a program risk. We concluded that, unless the FBI adopted a more strategic approach to managing human capital for the Sentinel program and treated human capital as a program risk, the chances of delivering required intelligence and investigative support capabilities in a timely and cost-effective manner were reduced. Accordingly, we recommended that the FBI adopt such an approach, and the FBI agreed with our recommendations. According to the FBI’s CIO, Sentinel human capital management improvements are being accomplished as part of ongoing Office of the CIO’s human capital management initiatives, which are being pursued in close coordination with ongoing FBI-wide human capital management improvements. The FBI is managing various aspects of Sentinel in accordance with a number of key system acquisition best practices because the FBI CIO and Sentinel program manager have made doing so an area of focus, which reduces Sentinel acquisition risks. At the same time, however, acquisition risks are being increased because support contractors that are performing program management functions are not subject to metrics-based, performance standards. Without such standards, the FBI cannot adequately ensure that support contractors are performing important program management functions effectively and efficiently. The FBI took a number of important steps when soliciting offers from contractors to lead the development of Sentinel and in evaluating the offers and making a contract award decision. We and others have reported on contract solicitation and award best practices used to solicit commercial, component-based IT systems. 
These practices provide for establishing an organizational framework to conduct a solicitation, including things such as establishing a solicitation policy, defining roles and responsibilities, and hiring a qualified solicitation team (including designating responsibility for the selection of a vendor and including contract specialists on the solicitation team). These practices also include guidance on how to evaluate proposals, including things such as: (1) explicitly evaluating systems integration contractors on their ability to implement commercial IT components; (2) specifying the contractual requirements and the proposal’s evaluation criteria in the solicitation package; (3) evaluating the technical and management elements of proposals on the basis of how they satisfy the requirements of the contract; and (4) selecting a contractor that is qualified to satisfy the contract’s requirements. The FBI followed all of these best practices for Sentinel. For instance, the FBI developed a policy for conducting the solicitation—the Sentinel Source Selection Plan—that addressed, among other things, the qualifications for members of the source selection organization. The source selection plan also identified the individual ultimately responsible for conducting the solicitation and making the award decision. With regard to evaluating proposals, the Sentinel solicitation package contained the contractual requirements and evaluation criteria the bureau would use. Those criteria were designed to explicitly evaluate vendors on their ability to integrate commercial IT products and components like those to be used in Sentinel. In addition, FBI evaluated vendor proposals based on both the technical and management elements of their respective proposals, including elements like past performance, proposed technical approach, proposed management approach, plans for mitigating organizational conflict of interest, proposed security approach, and demonstrated prior success in meeting schedule requirements, controlling costs, and program planning. Further, the FBI used a GWAC, in which vendors’ technical competence had already been established, thus helping to ensure that the FBI’s selected vendor was qualified. For a summary of the FBI’s implementation of these best practices, see table 2. The FBI has established and is following effective processes for proactively identifying and mitigating program risks before they have a chance to become actual cost, schedule, or performance problems. We and others view risk management as a core acquisition management practice. In brief, risk management is a process for identifying potential acquisition problems and taking appropriate steps to avoid them. It includes identifying risks and categorizing them based on estimated impact, developing and executing risk mitigation strategies, and reporting on progress in doing so. Risk management practices include, among other things: (1) encouraging project-wide participation in the identification and mitigation of risks; (2) defining and implementing a process for the identification, analysis, and mitigation of acquisition risks; (3) examining the status of identified risks in program milestone reviews; (4) establishing a written policy for managing acquisition risk; and (5) designating responsibility for acquisition risk management activities. FBI’s approach for managing Sentinel’s risks employs best practices. (See table 3.)
For instance, the Sentinel Risk Management Plan encourages all project team members to identify and mitigate risks, and program officials told us that an e-mail notification system has been implemented in which team members use an e-mail template to forward perceived or newly identified risks to program management. Furthermore, the Risk Management Plan and the prime contractor’s Risk and Opportunity Management Plan establish mechanisms for analyzing and mitigating identified risks. Under these plans, risk review boards (1) solicit input on risks from employees, (2) approve specified risk mitigation plans for these risks and assign the risks to their respective risk registers, and (3) periodically review each risk within the register to monitor the implementation of the mitigation plans. Further, these plans (as well as the bureau’s Life Cycle Management Directive) call for program control gate and milestone reviews to include the status of identified risks, and our analysis of gate and milestone documentation shows that they do. This is important because it gives FBI management the opportunity, when milestone decisions are made, to be apprised of the risks facing the program and of what program staff is doing to prevent these risks from materializing. The FBI is beginning to plan for and position itself for the human capital and business process changes that are embedded in the commercial off-the-shelf (COTS) software products that are to be used for Sentinel. Given that the first phase of Sentinel involves minimal new COTS software products and later phases are to be heavily COTS-based, the timing of this planning and positioning is appropriate. As we have previously reported, acquiring software-intensive systems that leverage commercial components involves acquisition management best practices beyond those associated with custom, one-of-a-kind software development efforts. One category of best practices related to COTS acquisitions is proactively planning for and positioning the organization for the people and process changes that will occur as a result of adopting the functionality embedded in commercial products. In short, such change occurs because COTS products are created based on a set of requirements that will have marketability to a broad customer base, rather than to a single customer, which in this case is the FBI. While such products are configurable to align with the customer’s architectural needs, such as business rules and date standards, the standard core functionality in the products will require the implementing organization to adopt the product’s embedded business processes, which in turn will require changes to the roles and responsibilities of the organization’s workforce and the policies and procedures that they follow. To ensure that the organizational impact of implementing a COTS-based system is effectively managed, best practices advocate that (1) project plans explicitly provide for preparing users for the impact that the business processes embedded in the commercial components will have on their roles and responsibilities and (2) the organization actively manages the introduction and adoption of changes to how users will be expected to execute their jobs. As noted earlier, Phase I of Sentinel does not involve extensive use of COTS. Rather, Phase I largely involves development of a customized Web-based portal to the FBI’s legacy case management system.
Thus, the need for the FBI to have already planned for and be positioned to introduce significant Sentinel-induced organizational change is not expected to be as critical as in later phases. According to the Sentinel program manager, the impact on users in Phase 1 will be minimal due to the small scope of changes that users will need to deal with. Phase 2, in comparison, will introduce changes to individual users’ roles, responsibilities, and business practices resulting from re-engineered business processes and a range of COTS-based system capabilities. This means that most of the organizational change management activities for Sentinel are planned for such later phases. Recognizing the relevance of organizational change management to post- Phase 1 efforts, the FBI has taken steps consistent with both of these previously-cited best practices. (See table 4.) With respect to planning, the Sentinel Program Management Plan identifies the need to work closely with users to ensure that they understand Sentinel capabilities, and the Sentinel Communication Plan outlines a strategy to assist FBI personnel in understanding the purpose and scope of Sentinel and its implications. Among other things, this plan provides for tracking user acceptance, including metrics to continually gauge acceptance and the effectiveness of the strategy. In addition, the Sentinel Training and Strategy Plan provides for analyzing workforce impacts and addressing changes to individuals’ roles and responsibilities. Regarding actively managing the introduction of changes to how individuals execute their jobs, the FBI has set in motion five areas of activity that are embodied in the previously-mentioned plans. These activities are stakeholder management, organizational impact assessment and understanding, communication, training, and performance support. More specifically, the prime contractor has conducted a Sentinel Stakeholder and Organizational Risk Assessment based in part on visiting several FBI field offices and conducting focus groups with prospective Sentinel users to assess risks to users’ acceptance of Sentinel. The results of this analysis have been incorporated into their communication and training plans and are to be addressed through things such as user manuals and program documentation. For instance, the risk and impact analysis showed that on-screen navigation through Sentinel was an area of user concern, so the training plan has treated this as an area of emphasis. According to program officials, such areas of focus are intended to proactively engage and manage stakeholders through the change process, with the ultimate goal of having Sentinel become “business as usual.” The challenge that the FBI faces as it proceeds with future Sentinel phases is to ensure that the five areas of activity, particularly the communication and training plans, are effectively implemented. The FBI has put in place controls and tools for systematically identifying Sentinel’s component parts (software, hardware, and documentation) and controlling the configuration of these parts in a way that reasonably ensures the integrity of each and it has effectively implemented most of those controls. However, FBI has not fully implemented one of the key practices. As a result, it is unclear whether the support contractor that is responsible for this practice is in fact executing it in an effective and efficient manner. Configuration management is an essential ingredient in successful IT systems programs such as Sentinel. 
The purpose of configuration management is to maintain integrity and traceability and to control modifications or changes to program assets like technology products and program documentation throughout their life cycles. Effective configuration management, among other things, enables integration and alignment among related program assets. As we have previously reported, an effective configuration management program comprises four primary elements, each of which should be described in a configuration management plan and implemented according to the plan. The four elements of an effective configuration management program are:

Configuration identification: Identifying, documenting, and assigning unique identifiers (e.g., serial number and name) to program assets, generally referred to as configuration items.

Configuration control: Evaluating and deciding whether to approve changes to a product’s baseline configuration, generally accomplished through configuration control boards, which evaluate proposed changes on the basis of costs, benefits, and risks, and decide whether to permit a change.

Configuration status accounting: Documenting and reporting on the status of configuration items as a product evolves. Documentation, such as historical change lists, is generated and kept in a library, thereby allowing organizations to be continuously aware of the state of a product’s configuration and thus to be in a position to make informed decisions about changing the configuration.

Configuration auditing: Determining alignment between the actual product and the documentation describing it, thereby ensuring that the documentation used to support the configuration control board’s decision making is consistent with the actual system products that reflect these decisions. Configuration audits, both functional and physical, are performed when a significant product change is introduced, and they help to ensure that only authorized changes are being made.

The FBI developed the Sentinel Configuration Management Plan to govern the assets that both the FBI and the prime contractor develop. This plan reflects the bureau’s Life Cycle Management Directive and each of the previously-cited best practices. Moreover, the FBI has largely implemented its plan, as described here and summarized in table 5. With respect to configuration identification, the plan defines which classes of program assets are under configuration control and specifies how program staff is to (a) determine the program’s configuration items and (b) assign each a unique identifier. In this regard, we observed the naming conventions the program office created for identifying and uniquely naming program assets and then verified that the FBI had inventoried items in accordance with these conventions. In addition, we observed that the FBI had placed under configuration control all of its relevant program documentation, as well as all the data item deliverables from the prime contractor, including multiple software components. Regarding configuration control, the FBI’s plan calls for, and its prime contractor has implemented, a commercially available software tool to store and manage the program’s configuration items, including such things as baselined planning documents and hardware and software assets. Among other things, we observed that the tool features a series of access controls that permit only authorized changes to program assets. For example, the tool did not allow unauthorized changes to configuration items.
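The interplay of the four configuration management elements can be pictured with a short sketch. This is illustrative only; the class names, identifiers, and audit logic below are our own assumptions and do not represent the commercial tool the FBI and its prime contractor use.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """A uniquely identified program asset under configuration control."""
    item_id: str                 # unique identifier, e.g., "SEN-DOC-0042" (hypothetical)
    name: str
    baseline_version: str
    history: list[str] = field(default_factory=list)   # status accounting record

class ConfigurationLibrary:
    def __init__(self) -> None:
        self.items: dict[str, ConfigurationItem] = {}
        self.approved_changes: list[tuple[str, str]] = []  # board-approved (item_id, version)

    def identify(self, item: ConfigurationItem) -> None:
        """Configuration identification: register an asset under its unique identifier."""
        self.items[item.item_id] = item

    def apply_change(self, item_id: str, new_version: str, board_approved: bool) -> bool:
        """Configuration control: only board-approved changes may alter the baseline."""
        if not board_approved:
            return False                        # unauthorized change is rejected
        item = self.items[item_id]
        item.history.append(f"{item.baseline_version} -> {new_version}")
        item.baseline_version = new_version
        self.approved_changes.append((item_id, new_version))
        return True

    def status_report(self) -> dict[str, str]:
        """Configuration status accounting: current baseline of every item, on demand."""
        return {i.item_id: i.baseline_version for i in self.items.values()}

    def audit(self) -> list[str]:
        """Configuration auditing: flag items whose recorded baseline does not match
        the most recent board-approved version."""
        latest_approved = dict(self.approved_changes)
        return [item_id for item_id, version in latest_approved.items()
                if self.items[item_id].baseline_version != version]
```

In practice, the value of these mechanics lies in the traceability they create: every baseline, every approved change, and every deviation found by an audit can be tied back to a specific board decision.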
The FBI and the prime contractor have established configuration control boards, engineering review boards, and software change control boards as specified in the plan to establish a baselined configuration for Sentinel’s assets and to authorize changes to them. These boards work together (see fig. 2) to review suggested changes to configuration items on the basis of potential impacts on the rest of the system, including risk, cost, and schedule implications. If these boards approve a change, it is executed by the contractor and recorded in the tool. If a change is rejected by one of the review boards, it is dropped and that decision is also recorded along with the board’s rationale. Concerning configuration status accounting, the FBI’s plan outlines procedures that are consistent with best practices and the FBI’s Life Cycle Management Directive. These procedures include keeping historical change lists and producing monthly configuration status accounting reports. However, according to FBI officials, the FBI is not producing regular reports as called for in its plan because the configuration management tool that the FBI is using has the ability to produce the same kinds of reports on demand. Such “real time” reporting satisfies the intent of this best practice. With respect to configuration auditing, the FBI’s plan calls for audits of the status of program assets. However, the bureau is not following its plan because, according to program officials, the configuration management tool’s embedded controls and processes reduce the need for such audits. One such control that we observed is the automatic recording of who made a change to a software or hardware asset and when the change was made. Nevertheless, the FBI has tasked one of its support contractors with checking the status of configuration items on a daily basis to augment the tool controls. According to the contractor’s representative performing this check, the boards’ configuration-related decisions are compared with the configuration status reflected in the tool; deviations are to be reported to program management. This approach, according to bureau officials, constitutes “real time” auditing and is better than the periodic audits cited in the Configuration Management Plan. However, this contractor’s activities are not documented or otherwise governed by explicit performance criteria. As a result, the results of configuration audits were not available to assess possible configuration management and security impacts, as provided for in the Sentinel Configuration Management Plan. Thus, we could not verify the FBI’s implementation of configuration auditing activities. This lack of performance criteria and measures for support contractors is described further in the next section. FBI officials stated that they intended to perform the configuration audits called for in their plan in early June. The FBI is performing a range of activities to effectively define expectations for its prime contractor and to measure performance against and hold the contractor accountable for meeting these expectations. The bureau is also performing a number of key practices that are relevant to tracking and overseeing the many support contractors that are performing program management functions. However, it is not performing one key practice—establishing and employing product and service performance standards.
As a result, the FBI cannot adequately ensure that these support contractors are performing required program management functions effectively and efficiently. Contract tracking and oversight is the process by which contractual agreements are established and contractor efforts to satisfy those agreements are monitored. This process involves information sharing between the acquirer and contractor to ensure that contractual requirements are understood, that there are measurements to disclose overall project status and problems, and that there are appropriate incentives for ensuring that cost and schedule commitments are met and that quality products are delivered. Contract tracking and oversight begins with the award of a contract and ends at the conclusion of the contract’s period of performance. Contract tracking and oversight best practices include ensuring that (1) the acquiring organization has sufficient insight into the contractor’s activities to manage and control the contractor and ensure that the contract’s requirements are met; (2) the acquiring organization and contractor maintain ongoing communication and both parties implement agreed-to commitments; (3) all contract changes are managed throughout the life of the contract; (4) the acquisition organization has a written policy for contract tracking and oversight; (5) responsibility for contract tracking and oversight activities is designated; (6) the acquiring organization involves contracting specialists in the execution of the contract; (7) a quantitative set of software and system metrics is used to define and measure product quality and contractor performance; and (8) incentives for meeting cost and schedule estimates and measurable, metric-based product quality incentives are explicitly cited in the contract. The FBI has taken a number of actions to satisfy these best practices with respect to the Sentinel prime contractor; however, the bureau has not done the same in tracking and overseeing the many support contractors that are performing program management functions. Three examples of best practices implemented in relation to the prime contractor are described here. (See app. III for information on the implementation of all eight practices.) To ensure sufficient insight into the contractor’s activities, the bureau has instituted integrated product teams for Sentinel, whereby members of the program management office work side by side with the prime contractor. As a result, the Sentinel program office has had daily insight into the direction of the contractor’s work, thereby giving the FBI management regular opportunities to manage and control the contractor’s activities. Moreover, the FBI requires that the prime contractor provide a monthly report detailing the contractor’s activities during the previous month, as well as its anticipated activities for the next month, to permit further insight into the contractor’s activities. In addition, the bureau has also established weekly meetings with its contractors to review accomplishments, ongoing issues, and program risks. Concerning managing changes to the contract throughout its lifetime, the program office has implemented a change control process consisting of several review boards to manage changes to program assets. According to program officials, board decisions that significantly change requirements (e.g., deliverables) are handled through contract letters. 
These letters serve as an official record of the FBI’s direction to the contractor, including changes to deliverables called for in the statement of work. Regarding having a written policy for contract tracking and oversight, the FBI’s Life Cycle Management Directive established the bureau’s policy for tracking and overseeing contractors on all IT programs, including Sentinel. In addition, the Sentinel Program Management Plan provides additional procedures, including conducting reviews such as the Requirements Clarification review, the Design Concept Review, and the Preliminary Design Review. Further, the Sentinel statement of work contains requirements for the contractor’s earned value management system, earned value baseline, and the contractor’s monthly earned value status reports. With respect to support contractor tracking and oversight, the bureau is at least partially satisfying all but one of these relevant best practices. (See app. III for information on implementation of all eight practices.) However, it has not, for example, established and applied measurable performance standards in its support contractors’ statements of work. Specifically, while these statements of work identify specific tasks to be accomplished and assign responsibility for overseeing their execution, they do not cite associated quality and timeliness standards for contract deliverables or other such performance measures. As noted earlier, for example, the activities performed by the configuration management support contractor (see prior section) are not governed by written procedures and are not subject to explicit performance standards. Program officials stated that they manage support contractors daily through face-to-face interaction, and that all work products provided by support contractors are reviewed and approved by government supervisors. Thus, they added, explicit performance standards are not needed. Given the bureau’s reliance on support contractors, however, maximizing their performance is important to Sentinel’s overall success. By not ensuring that statements of work spell out measures governing product and service quality and timeliness, the FBI cannot adequately ensure that these contractors are performing important program management functions effectively and efficiently. The FBI’s policies and procedures that form the basis for Sentinel’s schedule and cost estimates are not fully consistent with reliable estimating practices. While the FBI has issued an IT program management handbook, related guidance, and tools that define how IT program schedules and costs are to be estimated, this handbook and related material do not, for example, address having a historical database of program schedule and cost estimates to inform future estimates. In addition, this handbook and related material do not adequately address such schedule estimating practices as providing float time between key activities and reserve time for high risk activities, and they do not adequately address such cost estimating best practices as documentation of source information. The cost estimates that the FBI has developed for Sentinel reflect these limitations in policies, procedures, and tools. In particular, the estimates to date did not include all relevant costs and could not be verified by supporting documentation. 
Without well-defined policies, procedures, and supporting tools for estimating IT programs’ schedules and costs, the reliability of these programs’ respective estimates is questionable and, in the case of Sentinel, a key basis of informed investment management is missing. The success of any program depends in part on having a reliable schedule of when the program’s set of work activities will occur, how long they will take, and how they are related to one another. As such, the schedule not only provides a road map for systematic execution of a program, but also provides the means by which to gauge progress, identify and address potential problems, and promote accountability. Among other things, best practices and related federal guidance call for a program schedule to be program-wide in scope, meaning that it should include the integrated breakdown of the work to be performed by both the government and its contractors over the expected life of the program. Best practices also call for the schedule to expressly identify and define the relationships and dependencies among work elements and the constraints affecting the start and completion of work elements. A well-defined schedule helps to identify the amount of human capital and fiscal resources that are needed to execute the program, and thus is an important contribution to a reliable cost estimate. Our research has identified a range of best practices associated with effective schedule estimating. These practices include Capturing key activities: The schedule should reflect all key activities (steps, events, outcomes, etc.) as defined in the program’s work breakdown structure, to include activities to be performed by both the government and its contractors. Sequencing key activities: The schedule should line up key activities in the order that they are to be carried out. In particular, activities that must finish prior to the start of other activities (i.e., predecessor activities) as well as activities that cannot begin until other activities are completed (i.e., successor activities) should be identified. By doing so, dependencies among activities that collectively lead to the accomplishment of events or milestones can be established and used as a basis for guiding work and measuring progress. Establishing the duration of key activities: The schedule should reflect how long each activity will take to execute. In determining the duration of each activity, the same rationale, data, and assumptions used for cost estimating should be used for schedule estimating. Further, these durations should be as short as possible and they should have specific start and end dates. Excessively long periods needed to execute an activity should prompt further decomposition of the activity so that shorter execution durations will result. Assigning resources to key activities: The schedule should reflect who will do the work activities, whether all required resources will be available when they are needed, and whether any funding or time constraints exist. Establishing the critical path for key activities: The schedule should identify the longest duration path through the sequenced list of key activities, which is known as the schedule’s critical path. If any activity slips along this path, the entire program will be delayed. Therefore, potential problems that might occur along or near the critical path should be identified and reflected in the scheduling of the time for high risk activities (see next). 
Identifying “float time” between key activities: The schedule should identify the time that a predecessor activity can slip before the delay affects successor activities, which is known as “float time” and is an indicator of schedule flexibility. As a general rule, activities along the critical path typically have the least amount of float time. Distributing reserves to high risk activities: The baseline schedule should include a buffer or a reserve of extra time. Typically, the schedule reserve is calculated by taking the difference in time between the planned completion date and the contractual completion date for either the program as a whole or for a part of the program. As a general rule, the reserve should be applied to high risk activities, which are typically found along the critical path. Integrating key activities horizontally and vertically: The schedule should be horizontally integrated, meaning that it should link the products and outcomes associated with already sequenced activities (see previous section). These links are commonly referred to as “hand offs” and serve to verify that activities are arranged in the right order to achieve aggregated products or outcomes. The schedule should also be vertically integrated, meaning that traceability exists among varying levels of activities and supporting tasks and sub-tasks. Such mapping or alignment among levels enables different groups to work to the same master schedule. The FBI’s policies and procedures that govern IT program schedule estimating are defined in the bureau’s IT Program Management Handbook and its IT Investment Management Process. To the bureau’s credit, these documents reflect several of the previously cited best practices for schedule estimating. For example, the handbook requires program managers to define and sequence the key activities required to complete a given project, to determine the durations of each activity, and to identify the resources needed to complete tasks. Further, the handbook calls for the identification of the project’s critical path and “float time.” However, the handbook and associated worksheets do not specifically call for the distribution of schedule reserve to high risk activities or for the integration of tasks horizontally and vertically. Moreover, FBI policies and procedures only partially provide for assigning resources to key activities because the FBI’s guidance does not address consideration of whether funding or time constraints exist. (See table 7 for a summary of the extent to which FBI policies and procedures address each of the best practices.) FBI Office of the CIO officials agreed that these best practices are not addressed in current bureau policies for estimating schedules and that they need to be. Until they are, schedule estimates for FBI IT programs, like Sentinel, will be of questionable reliability, and thus the risk of Sentinel’s actual performance not tracking closely to plans is increased. The delays that the FBI has recently experienced on Phase I of Sentinel illustrate how this risk may have been realized. Specifically, the original milestone for completing deployment of Sentinel Phase I to all headquarters and field offices was May 2007. However, according to bureau officials, this milestone slipped to June 2007. 
According to program officials, the delay is due to a number of factors, including early miscommunication with the prime contractor on when work on the program was to begin, a number of changes within the prime contractor’s staff, and problems integrating commercial products that were not discovered until system acceptance testing. However, the limitations in the FBI’s policies and procedures that are the basis for the Sentinel schedule could have led to development of a Phase I schedule that was not sufficiently reliable, and thus was a contributor to this schedule slip. A reliable cost estimate is critical to the success of any IT program. Such an estimate provides the basis for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction when warranted, and accountability for results. According to OMB, programs must maintain current and well-documented estimates of program costs, and these estimates must encompass the full life cycle of the program. Among other things, OMB states that generating reliable program cost estimates is a critical function necessary to support OMB’s capital programming process. Without this capability, agencies are at risk of experiencing program cost overruns, missed deadlines, and performance shortfalls. Our research has identified a number of best practices that are the basis of effective program cost estimating. We have grouped these practices into four characteristics of a high-quality and reliable cost estimate. They are:

Comprehensive: The cost estimates should include both government and contractor costs of the program over its full life cycle, from inception of the program through design, development, deployment, and operation and maintenance to retirement of the program. They should also provide a level of detail appropriate to ensure that cost elements are neither omitted nor double counted, and they should document all cost-influencing ground rules and assumptions.

Well-documented: The cost estimates should capture in writing such things as the source data used and their significance, the calculations performed and their results, and the rationale for choosing a particular estimating method or reference. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to, and verified against, their sources.

Accurate: The cost estimates should provide for results that are unbiased, and they should not be overly conservative or optimistic (i.e., they should represent most likely costs). In addition, the estimates should be updated regularly to reflect material changes in the program, and steps should be taken to minimize mathematical mistakes and their significance. Among other things, the estimate should be grounded in documented assumptions and a historical record of cost and schedule estimating and actual experiences on other comparable programs.

Credible: The cost estimates should discuss any limitations in the analysis performed due to uncertainty or biases surrounding data or assumptions. Their derivation should provide for varying major assumptions and recalculating outcomes based on sensitivity analyses, and the associated risk and uncertainty inherent in the estimates should be disclosed. Further, the estimates should be verified based on cross-checks using other methods and by comparing the results with independent cost estimates.
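To make the accuracy and credibility characteristics concrete, the brief sketch below rolls documented element-level assumptions into a point estimate and brackets it with a simple sensitivity analysis. The cost elements, dollar figures, and ranges shown are hypothetical and are not Sentinel estimates.

```python
# Hypothetical life-cycle cost elements (in millions of dollars) with documented
# low / most-likely / high assumptions for each; none of these figures are Sentinel data.
cost_elements = {
    "development":                (90.0, 110.0, 140.0),
    "hardware and licenses":      (20.0,  25.0,  35.0),
    "training":                   ( 5.0,   8.0,  12.0),
    "operations and maintenance": (40.0,  55.0,  75.0),
}

def point_estimate(elements: dict[str, tuple[float, float, float]]) -> float:
    """Roll up the most-likely value of each element into a single point estimate."""
    return sum(most_likely for _, most_likely, _ in elements.values())

def sensitivity_range(elements: dict[str, tuple[float, float, float]]) -> tuple[float, float]:
    """Vary every element to its documented low and high bounds to bracket the estimate."""
    low = sum(lo for lo, _, _ in elements.values())
    high = sum(hi for _, _, hi in elements.values())
    return low, high

estimate = point_estimate(cost_elements)
low, high = sensitivity_range(cost_elements)
print(f"Point estimate: ${estimate:.0f}M (range ${low:.0f}M - ${high:.0f}M)")
```

A fuller treatment would also disclose the uncertainty surrounding the point estimate (for example, through risk or Monte Carlo analysis) and document the source of each element’s assumptions so that the estimate can be traced back to, and verified against, its sources.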
The FBI’s policies and procedures that govern estimating program costs are defined in the bureau’s IT Program Management Handbook, Cost-Benefit Analysis Guide, and IT Investment Management Process. To the bureau’s credit, these documents reflect some of the previously cited best practices. For example, the handbook calls for cost estimates to be comprehensive and life cycle in scope, including total costs (e.g., research, development, production, training, operations and maintenance, software licensing, and labor) over a program’s full life cycle (from initiation to system retirement). Moreover, FBI guidance partially provides for documenting these estimates and ensuring their accuracy by, for example, stating that estimating assumptions should be documented and that the estimates are to be updated on a regular basis. However, these policies and procedures do not reflect all of the cost estimating best practices associated with well-documented, accurate, and credible estimates. With respect to being well-documented, they do not require that the sources of historical data used in the estimate be documented and, with respect to accuracy, they do not provide for the establishment and use of a historical database of estimating and actual experiences on comparable programs. Without documenting estimated data sources, the basis for the estimates, including the circumstances surrounding the data used to derive them and whether these data have been properly normalized, cannot be understood. This means that the reliability of the estimate, whether for current use in managing a program or for inclusion in a historical database to inform future program estimates, cannot be assured. Further, without provision for establishing and using a historical database, one will not be available to inform future estimates, as is the case for the FBI. With respect to credibility, the FBI’s policies and procedures do not address the need to consider and reflect any limitations in the analyses on which the estimates are based, or to document any uncertainty or biases surrounding the data used. As a result, the associated uncertainty in the estimate itself cannot be determined, thus limiting the estimate’s integrity and utility. Further, the FBI’s policies and procedures do not provide for the conduct of risk/sensitivity analyses and disclosure of the associated risk and uncertainty of the estimates. Thus, estimates will not include important information to inform program decision making, such as the range of potential costs surrounding the point estimate and the reasons behind this range. FBI Office of the CIO officials agreed that these practices are not included in the bureau’s policies and procedures that form the basis for IT program cost estimates and that they need to be. Until an effective basis for cost estimating is in place and employed, FBI IT programs, like Sentinel, will likely not have reliable cost estimates to properly inform investment decision making, and the risk of actual program cost performance not tracking closely to estimates will be increased. Our analysis of Sentinel cost estimates revealed reliability issues. In particular, none of the estimates are comprehensive in that they each omit relevant costs. For example, one estimate does not include government or support contractor costs and, according to program officials, another estimate does not include technology refresh, certain government labor costs, or inflationary costs.
In addition, these estimates cannot be considered fully accurate or well documented. For example, according to program officials, none of the estimates was derived using a historical database reflecting actual and estimated costs on similar programs. Further, none of the estimates had a fully documented estimating methodology, although some parts of one cost estimate were documented. Also, none of the estimates could be traced to the source of the data that were used in developing them. These reliability concerns with the Sentinel cost estimates are due in part to the FBI’s not following its own cost estimating policies and procedures and in part to the previously mentioned limitations in the FBI’s cost estimating policies, procedures, and supporting tools. As a result, the Sentinel cost estimates do not provide a sufficient basis for informed investment decision making and do not facilitate meaningful tracking of progress against estimates, both of which are fundamental to effectively managing an IT program. The success of large-scale IT programs, such as Sentinel, is in large part determined by the extent to which they are executed according to rigorous and disciplined system acquisition management best practices. While implementing such practices does not guarantee program success, doing so will minimize the program’s exposure to risk and thus the extent to which the program falls short of expectations. In the case of Sentinel, living up to expectations is critical because not only are the capabilities that Sentinel is to provide mission critical, but they are also long overdue, and thus time is of the essence. To the FBI’s credit, it has implemented a number of best practices for Sentinel and by doing so has placed itself on a path to both avoid the kind of missteps that led to the failure of the VCF and increase the chances of putting needed mission capabilities in the hands of bureau agents and analysts as soon as possible. Nevertheless, the FBI is still not where it needs to be in managing its program office support contracts and in having reliable estimates of Sentinel schedules and costs to manage and disclose progress and to inform bureau, Department of Justice, and congressional investment decision making. As a result, there is a risk that contractor-provided program management support will not be performed effectively and efficiently. Given that Sentinel’s program office relies extensively on such contractor support to execute its many program management functions, less than high-quality support contractor performance could adversely affect Sentinel’s success. Risks also exist relative to having a reliable basis for informed decisions about Sentinel’s investment options because bureau policies, procedures, and tools that form the basis for Sentinel schedule and cost estimates do not reflect important best practices. Moreover, the cost estimates that the FBI has developed to date for Sentinel reflect these limitations. This means that bureau and Justice leadership and Congress lack reasonable assurance that they have a reliable cost basis on which to decide among competing investment options and measure Sentinel’s progress. Both effective support contractor tracking and oversight and reliable schedule and cost estimating are critical management functions that should be implemented for programs like Sentinel according to organizational policies and procedures that are grounded in relevant best practices.
The FBI’s current policies and procedures in this area do not address several key best practices, and hence the bureau has not implemented such practices for Sentinel. It is important that the FBI correct this void in its policies and procedures and that all its IT programs implement these practices. To strengthen the FBI’s management of its Sentinel program, we are recommending that the FBI Director instruct the bureau’s CIO to (1) work with Sentinel support contractors, where feasible, to establish and implement performance standards in statements of work relative to the quality and timeliness of products and the performance of services and (2) revise the IT handbook and related guidance to address the schedule and cost estimating best practices that are identified in this report as not being addressed in FBI policies and procedures, and ensure that these best practices are fully employed on all major IT programs, including Sentinel. In written comments on a draft of this report signed by the FBI CIO and reprinted in appendix IV, the bureau stated that it agreed with our recommendation to revise and implement its guidance for IT program schedule and cost estimation. The FBI CIO stated that the bureau plans to do so by September 30, 2007. However, the FBI disagreed with our recommendation to establish and implement metrics-based performance standards for its Sentinel program office support contractors, stating that the program office already provides sufficient oversight of these contractors. To support this position, the FBI commented that Sentinel’s staffing plan contains support contractor position descriptions that identify the skills required to execute each position’s functions. Further, it commented that all support contractors’ products are reviewed and approved by government supervisors, and that the support contractors submit reports on accomplishments that are used by the FBI in reviewing and approving invoices. While we do not take issue with any of these comments, we also do not believe that these actions fully address our recommendation. As a result, we disagree with the bureau’s position. Specifically, our point is not whether the functions that support contractors perform, or the skills needed to perform them, are identified or whether support contractors’ work is reviewed before invoice payment is approved; rather, our point is that standards governing the quality and timeliness of the functions and work performed are not defined. This lack of standards, in turn, increases the chances of support contractors producing products or providing services that fall short of expectations and thus do not support effective and efficient program management. As we state in our report, this is particularly important for the Sentinel program because the bureau is relying extensively on support contractors to augment its own program management staff. The FBI also provided technical comments, which we have incorporated throughout the report as appropriate. We are sending copies of this report to the Chairman and Vice Chairman of the Senate Select Committee on Intelligence and the Ranking Member of the House Permanent Select Committee on Intelligence as well as to the Chairman and Ranking Member of the Senate Committee on the Judiciary; the Chairman and Ranking Member of the House Committee on Appropriations, Subcommittee on Science; the departments of State, Justice, and Commerce, and related agencies.
We are also sending copies to the Attorney General; the Director, FBI; the Director, Office of Management and Budget; and other interested parties. In addition, the report will also be available without charge on GAO’s Web site at http://www.gao.gov. Should you have any questions about matters discussed in this report, please contact me at (202) 512-3439 or by e-mail at hiter@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs Office may be found on the last page of this report. Key contributors to this report are listed in appendix V. Our objectives were to determine the FBI’s (1) use of effective practices for acquiring Sentinel and (2) basis for reliably estimating Sentinel’s schedule and costs. To address the first objective, we focused on five key areas associated with acquiring commercial component-based IT systems, as agreed with our requestors: solicitation, risk management, organizational change management, configuration management, and contract tracking and oversight. We researched relevant best practices, including published guidance from the Software Engineering Institute (SEI) and GAO-issued reports associated with each of these five areas. We also obtained and reviewed relevant FBI policies, procedures, guidance, and Sentinel program documentation (see below), and interviewed pertinent Sentinel program and Office of the CIO officials as well as prime contractor (Lockheed Martin) and support contractor representatives where appropriate, to determine how the FBI had defined its approach to managing each of these five areas and how it had actually implemented them on Sentinel. We then compared this body of evidence to best practices and related guidance that we researched, identified variances, and discussed the reasons for and impact of any variances with FBI officials. The key, governing FBI documents that we obtained and reviewed relative to each of the five areas included (1) FBI Information Technology Life Cycle Management Directive version 3.0; (2) Project Management Handbook, version 1; and (3) Sentinel Program Management Plan, version 1.2. In addition, we obtained and reviewed the following documents that were specific to each of the five areas: For solicitation, these documents include: (1) the Sentinel Source Selection Plan; (2) the Sentinel Source Selection Decision document; and (3) the Sentinel Source Selection Evaluation Team Final Report. For risk management, these documents include: (1) the Sentinel Risk Management Plan; (2) the Sentinel Risk Register; and (3) the Lockheed Martin Risk and Opportunity Management Plan for Sentinel. For organizational change management, these documents include: (1) the Sentinel Workforce Transformation Strategy and Plan; (2) the Sentinel Stakeholder and Organizational Risk Assessment; (3) the Sentinel Organizational Impact Assessment; (4) the Sentinel Communications Plan; and (5) the Sentinel Training Strategy and Plan. For configuration management, these documents include: (1) the Sentinel Configuration Management Plan; and (2) the Lockheed Martin Configuration Management Plan for Sentinel. For contract tracking and oversight these documents include (1) the statements of work for Sentinel support contractors; (2) the Sentinel Measurement Plan; (3) selected Sentinel Measurement reports; (4) the Sentinel Statement of Work; and (5) select monthly EVM reports. 
To address our second objective, we used GAO’s draft guide on estimating program schedules and costs, which is based on extensive research of best practices, and federal schedule and cost estimating guidance found in the OMB Capital Programming Guide. In addition, we obtained and reviewed FBI policies and procedures governing schedule and cost estimating, including the FBI Program Management Handbook, FBI Information Technology Life Cycle Management Directive, and the FBI Information Technology Management Guide. We then compared the bureau’s policies and procedures to the practices in GAO’s guide and to federal guidance, identified variances, and discussed reasons for variances with officials from the Office of the CIO. We also interviewed program officials, and/or obtained and reviewed Sentinel cost estimates relative to the analysis, data, and calculations supporting each estimate. We conducted our work from our Washington, D.C., headquarters, and at FBI headquarters and facilities in the greater Washington, D.C., metropolitan area between September 2005 and May 2007 in accordance with generally accepted government auditing standards. We and others have identified and promoted the use of a number of best practices associated with acquiring IT systems. In 2004, we reported on 18 relevant best practices and grouped them into two categories: (1) ten practices for acquiring any type of business system and (2) eight complementary practices that relate specifically to acquiring commercial component-based business systems. Each is described here. Purpose: To ensure that reasonable planning for all parts of the acquisition is conducted. Description: Acquisition planning is the process for conducting and documenting acquisition planning activities beginning early and covering all parts of the project. This planning extends to all acquisition areas, such as budgeting, scheduling, resource estimating, risk identification, and requirements definition as well as the overall acquisition strategy. Acquisition planning begins with the earliest identification of a requirement that is to be satisfied through an acquisition. Activities: (1) Plans are prepared during acquisition planning and maintained throughout the acquisition. (2) Planning addresses the entire acquisition process, including life cycle support of the products being acquired. (3) The acquisition organization has a written policy for planning the acquisition. (4) Responsibility for acquisition planning activities is designated. Purpose: To ensure that the acquisition is consistent with the organization’s enterprise architecture. Description: Architectural alignment is the process for analyzing and verifying that the proposed architecture of the system being acquired is consistent with the enterprise architecture for the organization acquiring the system. Such alignment is needed to ensure that acquired systems can interoperate and are not unnecessarily duplicative of one another. Exceptions to this alignment requirement are permitted, but only when justified and only when granted an explicit waiver from the architecture. A particular architectural consideration is whether requirements that extend beyond the specific system being acquired should be considered when selecting system components. Such “product line” (i.e., systems that are developed from a common set of assets and share a common and managed set of features) considerations can provide substantial production economies over acquiring systems from scratch. 
Activities: (1) The system being acquired is assessed for alignment with the enterprise architecture at key life cycle decision points and any deviations from the architecture are explicitly understood and justified by an explicit waiver to the architecture. (2) Product line requirements— rather than just the requirements for the system being acquired—are an explicit consideration in each acquisition. Purpose: To ensure that contract activities are performed in accordance with contractual requirements. Description: Contract tracking and oversight is the process by which contractual agreements are established and contractor efforts to satisfy those agreements are monitored. It involves information sharing between the acquirer and contractor to ensure that contractual requirements are understood, that there are regular measurements to disclose overall project status and whether problems exist, and that there are appropriate incentives for ensuring that cost and schedule commitments are met and that quality products are delivered. Contract tracking and oversight begins with the award of the contract and ends at the conclusion of the contract’s period of performance. Activities: (1) The acquiring organization has sufficient insight into the contractor’s activities to manage and control the contractor and ensure that contract requirements are met. (2) The acquiring organization and contractor maintain ongoing communication; commitments are agreed to and implemented by both parties. (3) All contract changes are managed throughout the life of the contract. (4) The acquiring organization has a written policy for contract tracking and oversight. (5) Responsibility for contract tracking and oversight activities is designated. (6) The acquiring organization involves contracting specialists in the execution of the contract. (7) A quantitative set of software and system metrics is used to define and measure product quality and contractor performance. (8) In addition to incentives for meeting cost and schedule estimates, measurable, metrics-based product quality incentives are explicitly cited in the contract. Purpose: To ensure that system investments have an adequate economic justification. Description: Economic justification is the process for ensuring that acquisition decisions are based on reliable analyses of the proposed investment’s likely costs versus benefits over its useful life as well as an analysis of the risks associated with actually realizing the acquisition’s forecasted benefits for its estimated costs. Moreover, it entails minimizing the risk and uncertainty of large acquisitions that require spending large sums of money over many years by breaking the acquisition into smaller, incremental acquisitions. Economic justification is not a one-time event, but rather is performed throughout an acquisition’s life cycle in order to permit informed investment decision making. Activities: (1) System investment decisions are made on the basis of reliable analyses of estimated system life cycle costs, expected benefits, and anticipated risks. (2) Large systems acquisitions are (to the maximum extent practical) divided into a series of smaller, incremental acquisition efforts, and investment decisions on these smaller efforts are made on the basis of reliable analyses of estimated costs, expected benefits, and anticipated risks. Purpose: To ensure that evidence showing that the contract products satisfy the defined requirements are provided prior to accepting contractor products. 
Description: Evaluation is the process by which contract deliverables are analyzed to determine whether they meet contract requirements. It includes developing criteria such as product acceptance criteria to be included into both the solicitation package and the contract. It should be conducted continuously throughout the contract period as products are delivered. It begins with development of the products’ requirements and ends when the acquisition is completed. Activities: (1) Evaluation requirements are developed in conjunction with the contractual requirements and are maintained over the life of the acquisition. (2) Evaluations are planned and conducted throughout the total acquisition period to provide an integrated approach that satisfies evaluation requirements and takes advantage of all evaluation results. (3) Evaluations provide an objective basis to support the product acceptance decision. (4) The acquisition organization has a written policy for managing the evaluation of the acquired products. (5) Responsibility for evaluation activities is designated. Purpose: To ensure that the project office and its supporting organizations function efficiently and effectively. Description: Project management is the process for planning, organizing, staffing, directing, and managing all project-office-related activities, such as defining project tasks, estimating and securing resources, scheduling activities and tasks, training, and accepting products. Project management begins when the project office is formed and ends when the acquisition is completed. Activities: (1) Project management activities are planned, organized, controlled, and communicated. (2) The performance, cost, and schedule of the acquisition are continually measured, compared with planned objectives, and controlled. (3) Problems discovered during the acquisition are managed and controlled. (4) The acquisition organization has a written policy for project management. (5) Responsibility for project management is designated. Purpose: To ensure that contractual requirements are clearly defined and understood by the acquisition stakeholders. Description: Requirements development is the process for developing and documenting contractual requirements, including evaluating opportunities for reusing existing assets. It involves participation from end users to ensure that product requirements are well understood, and that optional versus mandatory requirements are clearly delineated. Requirements management is the process for establishing and maintaining agreement on the contractual requirements among the various stakeholders and for ensuring that the requirements are traceable, verifiable, and controlled. This involves base lining the requirements and controlling subsequent requirements changes. Requirements development and management begins when the solicitation’s requirements are documented and ends when system responsibility is transferred to the support organization. Activities: (1) Contractual requirements are developed, managed, and maintained. (2) The end user and other affected groups have input into the contractual requirements over the life of the acquisition. (3) Contractual requirements are traceable and verifiable. (4) The contractual requirements baseline is established prior to release of the solicitation package. (5) The acquisition organization has a written policy for establishing and managing the contractual requirements. (6) Responsibility for requirements development and management is designated. 
(7) Requirements that are mandatory versus optional are clearly delineated and used in deciding what requirements can be eliminated or postponed to meet other project goals, such as cost and schedule constraints. Purpose: To ensure that risks are identified and systematically mitigated. Description: Risk management is the process for identifying potential acquisition problems and taking appropriate steps to avoid their becoming actual problems. It includes risk identification and categorization based on estimated impact, development of risk mitigation strategies, and execution of and reporting on the strategies. Risk management occurs early and continuously in the acquisition life cycle. Activities: (1) Project wide participation in the identification and mitigation of risks is encouraged. (2) The defined acquisition process provides for the identification, analysis, and mitigation of risks. (3) Milestone reviews include the status of identified risks. (4) The acquisition organization has a written policy for managing acquisition risk. (5) Responsibility for acquisition risk management activities is designated. Purpose: To ensure that a quality solicitation is produced, and a best qualified contractor selected. Description: Solicitation is the process for developing, documenting, and issuing the solicitation package; developing and implementing a plan to evaluate responses; conducting contract negotiations; and awarding the contract. Solicitation ends with contract award. Activities: (1) The solicitation package includes the contractual requirements and the proposal evaluation criteria. (2) The technical and management elements of proposals are evaluated to ensure that the requirements of the contract will be satisfied. (3) The selection official selects a supplier who is qualified to satisfy the contract’s requirements. (4) The acquiring organization has a written policy for conducting the solicitation. (5) Responsibility for the solicitation is designated. (6) A selection official has been designated to be responsible for the selection process and decision. (7) The acquiring team includes contracting specialists to support contract administration. Purpose: To ensure proper transfer of the system from the acquisition organization to the eventual support organization. Description: Transition to support is the process for developing and implementing the plans for transitioning products to the support organization. This includes engaging relevant stakeholders in the acquisition and sharing information about the system’s supporting infrastructure. Transition to support begins with requirements development and ends when the responsibility for the products is turned over to the support organization. Activities: (1) The acquiring organization ensures that the support organization has the capacity and capability to provide the required support. (2) There is no loss in continuity of support to the products during transition from the supplier to the support organization. (3) Configuration management of the products is maintained throughout the transition. (4) The acquiring organization has a written policy for transitioning products to the support organization. (5) The acquiring organization ensures that the support organization is involved in planning for transition to support. (6) Responsibility for transition to support activities is designated. Purpose: To ensure that commercial product modification is effectively controlled. 
Description: Component modification is the process for limiting the chances of a commercial product being modified to the point that it becomes a one-of-a-kind solution because doing so can result in extensive life cycle costs. Such modifications, if not incorporated into the commercially available version of the product by the supplier, mean that every product release has to be modified in accordance with the custom changes, thus precluding realization of some of the benefit of using a commercial product.
Activity: (1) Modification of commercial components is discouraged and allowed only if justified by a thorough analysis of life cycle costs and benefits.

Purpose: To ensure the integrity and consistency of commercial system components.
Description: Configuration management relative to commercial component-based systems is the process for ensuring that changes to the commercial components of a system are strictly controlled. It recognizes that when using commercial components, it is the vendor, not the acquisition or support organization, that controls the release of new component versions and that new versions are released frequently. Thus, acquisition management needs to provide for both receiving new product releases and controlling the implementation of these releases.
Activities: (1) Project plans explicitly provide for evaluation, acquisition, and implementation of new, often frequent, product releases. (2) Modification or upgrades to deployed versions of system components are centrally controlled and unilateral user release changes are precluded.

Purpose: To ensure that relationships between commercial products are understood before acquisition decisions are made.
Description: Dependency analysis relative to commercial component-based systems is the process for determining and understanding the characteristics of these products so that inherent dependencies among them can be considered before they are acquired. It involves recognizing that the logical and physical relationships among products impact one another. This is necessary because commercial products are built around each vendor’s functional and architectural assumptions and paradigms, such as approaches to error handling and data access, and these assumptions and paradigms are likely to be different among products from different sources. Such differences complicate product integration. Further, some commercial products have built-in dependencies with other products that, if not known, can further complicate integration.
Activity: (1) Decisions about the acquisition of commercial components are based on deliberate and thorough research, analysis, and evaluation of the components’ interdependencies.

Purpose: To ensure reasonable planning for integration of commercial products with existing systems.
Description: Legacy systems integration planning is the process for ensuring that the time and resources needed to integrate existing systems with the system being acquired are identified and provided for. It involves identifying which legacy systems will interact with the system being acquired and what kinds and levels of testing are required. Integration planning recognizes that, although some commercial products may provide mechanisms and information that are helpful in integration with legacy systems, the unavailability of the source code for commercial products and the different organizations that are responsible for the two will likely require additional time and effort.
Activity: (1) Project plans explicitly provide for the time and resources necessary for integrating commercial components with legacy systems.

Purpose: To ensure that the organizational impact of using new system functionality is proactively managed.
Description: Organization change management relative to commercial component-based systems is the process for preparing system users for the business process changes that will accompany implementation of the system. It involves engaging users and communicating the nature of anticipated changes to system users through training on how jobs will change. This is necessary because commercial products are created with the developers’ expectations of how they will be used, and the products’ functionality may require the organization implementing the system to change existing business processes.
Activities: (1) Project plans explicitly provide for preparing users on the impact that the business processes embedded in the commercial components will have on the user’s respective roles and responsibilities. (2) The introduction and adoption of changes to how users will be expected to execute their jobs are actively managed.

Purpose: To ensure that a quality solicitation is produced and a best qualified contractor is selected.
Description: Solicitation relative to commercial component-based systems is the process for ensuring that a capable contractor is selected. It involves ensuring that the selected contractor has experience with integrating commercial component products. This is important because expertise in developing custom system solutions is different from expertise in implementing commercial components; it requires different core competencies and experiences to be successful.
Activity: (1) Systems integration contractors are explicitly evaluated on their ability to implement commercial components.

Purpose: To ensure that system requirements alone do not drive the system solution.
Description: Tradeoff analysis relative to commercial product-based systems is the process for analyzing and understanding the tradeoffs among competing acquisition variables so as to produce informed acquisition decision making. It involves planning and executing acquisitions in a manner that recognizes four competing interests: defined system requirements, the architectural environment (current and future) in which the system needs to operate, acquisition cost and schedule constraints, and the availability of products in the commercial marketplace (current and future). This analysis should be performed early and continuously throughout an acquisition’s life cycle.
Activity: (1) Investment decisions throughout a system’s life cycle are based on tradeoffs among the availability of commercial products (current and future), the architectural environment in which the system is to operate (current and future), defined system requirements, and acquisition cost/schedule constraints.

Purpose: To ensure that vendor and product characteristics are understood before acquisition decisions are made.
Description: Vendor and product research and evaluation relative to commercial component-based systems is the process for obtaining reliable information about both the product being considered and the vendor offering the product.
It involves taking additional steps beyond vendor representations, such as obtaining information about the vendor’s history, obtaining information on the vendor’s business strategy relative to evolution and support of the product, and evaluating copies of the product in a test environment. Activities: (1) Commercial component and vendor options are researched, evaluated/tested, and understood, both early and continuously. (2) A set of evaluation criteria for selecting among commercial component options is established that includes both defined system requirements and vendor/commercial product characteristics (e.g., customer satisfaction with company and product line). Table 9 contains our assessment of the FBI’s efforts for contract tracking and oversight for both the prime contractor and the sub-contractors. In addition to those named above, Monica Anatalio, Tonia Brown, Carol Cha, Neil Doherty, Jennifer Echard, Nancy Glover, Daniel Gordon, Jim MacAuley, Paula Moore (Assistant Director), Karen Richey, Teresa Tucker, Kevin Walsh, and Jeffrey Woodward made key contributions to this report. | The Sentinel program is intended to replace and expand on the Federal Bureau of Investigation's (FBI) failed Virtual Case File (VCF) project and thereby meet the bureau's pressing need for a modern, automated capability to support its field agents and intelligence analysts' investigative case management and information sharing requirements. Because of the FBI's experience with VCF and the importance of Sentinel to the bureau's mission operations, GAO was asked to conduct a series of reviews on the FBI's management of Sentinel. This review focuses on the FBI's (1) use of effective practices for acquiring Sentinel and (2) basis for reliably estimating Sentinel's schedule and costs. To address its objectives, GAO researched relevant best practices, reviewed FBI policies and procedures, program plans and other program documents, and interviewed appropriate program officials. The FBI is managing its Sentinel program according to a number of key systems acquisition best practices. For example, the FBI has followed best practices when soliciting offers from contractors to lead the development of Sentinel; it has also followed the practices in evaluating the offers and making a contract award decision. In addition, it has established and is following effective processes to proactively identify and mitigate program risks before they have a chance to become actual cost, schedule, or performance problems. Further, it has taken a range of steps to effectively define expectations for its prime contractor and to measure performance against these expectations and related incentives and hold the contractor accountable for results. However, the bureau has not done the same for one key aspect of tracking and overseeing its program management support contractors. In particular, it has not established performance and product quality standards for these support contractors. According to FBI officials, such standards are not necessary because they monitor their support contractors on a daily basis, including the review and approval of all work products. By not implementing this practice, GAO believes that the FBI's monitoring does not adequately ensure that Sentinel support contractors are performing important program management functions effectively and efficiently. 
The FBI's policies, procedures, and supporting tools that form the basis for Sentinel's schedule and cost estimates do not adequately reflect key best practices. While the FBI has issued an information technology (IT) program management handbook, related guidance, and tools that define how IT program schedules and costs are to be estimated, this handbook and related material do not, for example, address such key practices as having a historical database of program schedule and cost estimates to inform future estimates. As a result, the reliability of Sentinel's schedule and cost estimates is questionable. GAO's analyses of the Sentinel cost estimates and program officials' statements confirm this. For example, the analyses show that the estimates do not include all relevant costs, such as a technology refresh, and are not grounded in fully documented methodologies or a corporate history of experiences on other IT programs. FBI officials agreed that they need to update their IT program management handbook and related materials to incorporate schedule and cost estimating best practices and to establish a historical database of its estimating experiences on IT programs. Until FBI takes these steps, IT programs, such as Sentinel, are unlikely to have reliable schedule and cost estimates to support informed investment decision making, and their actual progress is unlikely to track closely to estimates. |
OJP’s bureaus and offices provide grants and other awards to various organizations, including state and local governments, universities, and private entities; these awards are intended to develop the nation’s capacity to prevent and control crime, administer justice, and assist crime victims. Within OJP, NIJ serves as DOJ’s research and development agency and provides evidence-based knowledge and tools to address crime and justice challenges, particularly at the state and local levels. As part of this mission, NIJ provides awards from the DNA and forensic program appropriation and administers these awards for the purpose of DNA analysis and capacity enhancement and for other forensic science purposes. Within NIJ, the Office of Investigative and Forensic Sciences is responsible for administering awards for these purposes. According to OJP, approximately seven NIJ staff members within the Office of Investigative and Forensic Sciences are responsible for managing and monitoring funds associated with the DNA and forensic program appropriation. NIJ prioritizes initiatives it will fund from the DNA and forensic program appropriation on an annual basis and provides awards through various funding mechanisms, including grants and nongrant agreements as described below. NIJ’s award mechanism varies depending on the type of initiative being funded and the type of recipient receiving the funds.

NIJ formula discretionary grants: Awards provided under a formula set by DOJ and based primarily on the violent crime rate. The DNA Backlog Reduction Program is the only formula grant program awarded through the DNA and forensic program appropriation.

Other discretionary grants: Awards provided to eligible entities, which vary depending on the purpose and requirements of the award. Such grants may be awarded to state and local governments and public and private universities, as well as for-profit and nonprofit organizations. NIJ’s other discretionary grants are generally awarded on a competitive basis.

Interagency agreements: Awards between federal agencies establishing an agreement for projects that may cover similar topics and initiatives as NIJ’s other discretionary grants. Federal agencies may not receive funds through grants; however, they may compete for some awards, if eligible, or receive noncompetitive awards, if determined to be appropriate for NIJ’s DNA and forensic science activities.

Contracts: Agreements with various nongovernment entities, such as private laboratories, for various services, such as conducting certain DNA analytical services or providing other technical scientific services.

NIJ funds grants through solicitations, which are formal requests for funding proposals outlining goals, eligibility requirements, and the instructions for applying to receive grant funding. Federal agencies may apply under such solicitations if eligible; however, in such cases, NIJ awards federal agencies funds through interagency agreements, and not through the solicitation and grant-making process. Contracts are not funded through the typical grant-making process, but through requests for contract proposals. For the purposes of this report, “initiatives” include all awards NIJ funded from the DNA and forensic program appropriation, including awards funded through grants, interagency agreements, and contracts. NIJ provides funds under various initiatives for the purposes of reducing the DNA backlog and for other forensic sciences needs, including research and development and forensic science training.
Table 1 describes NIJ’s initiatives funded from fiscal years 2008 through 2012 through the DNA and forensic program appropriation that directly benefit state and local government DNA-related efforts to reduce backlogs and build capacity. Throughout this report, we refer to these initiatives as the DNA backlog initiatives. Table 2 describes NIJ’s additional initiatives funded through the DNA and forensic program appropriation that do not directly benefit the state and local government DNA analysis backlog, but address other DNA and forensic science challenges identified by the agency from fiscal years 2008 through 2012. OJP policy directs that monitoring be performed to assess the performance of programs that support NIJ initiatives. For example, for grants, during programmatic monitoring, grant managers review qualitative information (such as progress reports submitted by grantees and supporting documentation on grantee program implementation), and quantitative information (such as performance measurement data submitted by grantees), to determine grant progress and performance. In grant applications, grantees are required to propose grant goals that support NIJ’s stated program purpose, the activities through which they aim to achieve those goals, and an implementation plan describing timelines and steps for the activities. For interagency agreements and contracts, OJP and NIJ officials determine the type of monitoring documents, such as progress reports, required based on the goals and objectives of each specific award. From fiscal years 2008 through 2012, Congress appropriated approximately $691 million to NIJ to provide grant and other awards for state and local governments to reduce the DNA backlog and increase DNA lab capacity, as well as for other forensic science purposes. The appropriations language was broad and enabled NIJ to allocate funding for a variety of forensic programs at funding levels established by the agency. As a result, NIJ allocated funds for both its DNA backlog initiatives and other forensic science initiatives based on NIJ’s mission and annual budgeting priorities. For instance, over the 5-year period, NIJ provided funding through the DNA Backlog Reduction Program, other discretionary DNA backlog initiatives such as analyzing DNA samples from cold cases, and other forensic science initiatives awarded through grants and nongrants including research, development, and evaluation; forensic science training; and support for the development of best practices. NIJ allocated the majority—about 64 percent, or $442 million of the available $691 million in the DNA and forensic program appropriations— to DNA backlog initiatives. Our analysis of the data shows that about $343 million of this $442 million was awarded through the DNA Backlog Reduction Program, and the remaining approximately $98 million was awarded through its other DNA backlog initiatives. According to NIJ officials, these awards directly affected the DNA backlog by providing funds to state and local entities for either analyzing DNA samples or increasing the capacity of state and local laboratories to conduct DNA sample analyses. Additionally, NIJ awarded approximately 31 percent, or $212 million of the available $691 million, to other DNA and forensic science purposes that do not directly reduce the DNA backlog. NIJ officials stated that funding from some of these initiatives may have indirect or long-term benefits for reducing the DNA backlog. 
The remainder of the funding, $38 million, went toward other activities, such as management and administration. See figure 1. We further analyzed the 64 percent of appropriated funding that went toward the DNA Backlog Reduction Program and other DNA backlog initiatives over the 5-year period. We found that, annually, NIJ generally has increased the percentage of funding provided to the DNA Backlog Reduction Program, while the overall amount of the DNA and forensic program funds available through appropriations and prior year carryover decreased from a high of about $153 million in fiscal year 2010 to $118 million in fiscal year 2012. For instance, in fiscal year 2008, NIJ awarded about 36 percent of the total amount available to NIJ to obligate for the DNA Backlog Reduction Program. By 2012, NIJ had increased the amount to about 63 percent of the total available for obligation. At the same time, funds awarded through its other DNA backlog initiatives generally decreased. See figure 2 for the proportion of funds provided for the DNA Backlog Reduction Program and other backlog initiatives by fiscal year. NIJ officials stated that the increased funding for the DNA Backlog Reduction Program was primarily because the agency decided to prioritize the program more than its other initiatives. In addition, NIJ officials stated that funding also increased when the agency subsumed the Convicted Offender and/or Arrestee DNA Backlog Reduction Program into the larger DNA Backlog Reduction Program starting in 2011. As a result, the new program had a larger scope and provided more money to grantees through NIJ’s continued emphasis on this program. We also analyzed the breakdown of the grants and nongrant awards for other forensic science initiatives that do not directly contribute to reducing the DNA backlog. From fiscal years 2008 through 2012, this amounted to about 31 percent of the $691 million appropriated, or about $212 million. See figure 3 below for an analysis of funds awarded to other forensic science initiatives. We further analyzed grant and award description information across the initiatives that do not directly contribute to reducing the DNA backlog to determine whether they were DNA-related, and our analysis showed that more than $121 million of the approximately $212 million for other forensic science initiatives covered a range of DNA-related projects such as research, development, and evaluation to improve the ability to analyze aged or compromised DNA samples, DNA training for state and local laboratories, and DNA-related initiatives such as the National Missing and Unidentified Persons System. NIJ officials stated that the agency funded these projects because they provide valuable services and resources to practitioners solving crimes with DNA. For instance, DNA-related research can lead to faster and better methods for recovering and analyzing DNA. Also, NIJ officials stated that the agency’s mission includes facilitating faster and better DNA-related knowledge transfer across the country. As a result, several of its technical assistance initiatives covered a facet of NIJ’s commitment to training and information sharing. NIJ officials stated they consider these DNA-related initiatives to be a form of DNA capacity enhancement and an important part of the strategy to reduce the DNA backlog. The remaining amount—approximately $90 million of the $212 million— was awarded to support other areas of forensic science. 
This represents about 13 percent of the $691 million appropriated and includes efforts such as research, development, and evaluation of faster and less expensive methods to detect drugs or explosives at crime scenes and training for cell phone and other digital information forensic evidence. NIJ officials stated that NIJ has authority to support the criminal justice community in general forensic science endeavors, which is important because lab technicians must process many types of forensic evidence. NIJ officials added that cases may be solved through many different types of evidence and manners of processing, not just through DNA. They stated that faster and better analysis of other forensic evidence may help increase the amount of time lab technicians have to analyze DNA samples and indirectly benefit how many DNA cases can be completed. NIJ has a process in place for determining its annual priorities for the allocation of DNA and forensic program appropriation funds; however, NIJ does not clearly document this process. According to NIJ officials, NIJ staff use the prior fiscal years’ funding as a starting point to make a proposed initial estimate of the amount to be allocated to the DNA Backlog Reduction Program. Specifically, NIJ staff examine the amount of funds remaining on active formula grant awards from prior fiscal years, and then use historical funding data to determine what is needed to fund eligible applicants for the next fiscal year. After the proposed initial allocation for the DNA Backlog Reduction Program has been determined, NIJ staff then develop an initial recommendation for how the estimated remaining funding will be allocated to other DNA and forensic initiatives. In addition, NIJ uses the professional expertise of its forensic staff, as well as input from NIJ-sponsored Technology Working Groups (TWG). These groups are committees of 25 to 30 experienced practitioners from local, state, tribal and federal agencies and laboratories associated with a particular NIJ technology investment portfolio, such as DNA Forensics or General Forensics, that help NIJ determine the criminal justice technology needs of the field. After NIJ staff arrive at proposed allocations among the initiatives, staff brief the Director of NIJ with documents—such as budget briefing slides or funding memos—that outline their priority areas. The NIJ Director then determines the initial allocation among the various initiatives. NIJ next begins the process of implementing these initiatives through solicitations. According to NIJ officials, the solicitations are developed by NIJ staff possessing the relevant subject matter expertise, in consultation with Office of Investigative and Forensic Sciences leadership and with input from the forensic science TWGs. NIJ then posts each solicitation to the NIJ web site and either a federal government grant website or an OJP web-based system in order to accept proposals from qualified applicants applying for funding. Qualified proposals submitted in response to a competitive solicitation are subjected to a peer review process and evaluated for their scientific merit. Funding recommendations for individual proposals are, in part, informed by peer review and, in the case of forensics-related research proposals, the needs and priorities identified by the TWGs. 
According to NIJ officials, the rationale for funding allocation decisions, such as the amount to be used for the DNA Backlog Reduction Program, is documented in the briefing slides and funding memos that are presented to the NIJ Director. However, according to our review, these documents do not consistently or adequately demonstrate NIJ’s rationale for how funding priorities are determined. Specifically, from fiscal years 2008 through 2011, NIJ used budget briefing slides to present funding priorities to the NIJ Director. These briefing slides showed the various initiatives and the amount of funding NIJ proposed to allocate to each initiative, but they did not consistently provide justifications for how or why NIJ determined these amounts. For fiscal years 2008 and 2009, the briefing slides included rationale sections that were descriptions of the funding being allocated rather than justifications for why NIJ chose to allocate specific funding amounts to each initiative. For example, for fiscal year 2008, for the Forensic Technology Center of Excellence initiative, NIJ provided an itemized list of the funding request and projects in the rationale section. For fiscal year 2010, the budget briefing document outlined recommended changes to the DNA Backlog Reduction Program, such as adjusting minimum funding levels available to states and units of local government. However, the document did not include any rationale for this decision. For fiscal years 2012 and 2013, NIJ changed its process from using briefing slides to using funding memos to present funding prioritization decisions to the NIJ Director. While these memos show the final amounts NIJ decided to allocate to various initiatives, they do not provide details on the justifications for how funding levels were determined for each initiative. Further, although NIJ had a category for rationale in the fiscal year 2008 and 2009 briefing slides, this practice ended beginning with fiscal year 2010. According to NIJ officials, there was no longer a need to include a rationale in the briefing slides because the briefings to the NIJ Director began occurring later in the prioritization process and the Director’s signature, indicating approval of funding prioritization decisions, had already been obtained. Standards for Internal Control in the Federal Government states that internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals, and all documentation and records should be properly managed and maintained. The standards also state that transactions and significant events are to be clearly documented to help management with decision making and to help ensure operations are carried out as intended. According to NIJ officials, the budget briefing slides for fiscal years 2008 through 2011 and the funding memos for fiscal years 2012 and 2013 are the only documents the agency uses to show its rationale for prioritization of the DNA and forensic program appropriation. Additionally, in light of budget uncertainty from year to year, NIJ officials believe their current process is the most useful because it allows the agency flexibility for making decisions. 
However, without a clearly documented process that demonstrates the rationale for how NIJ is prioritizing its DNA and forensic program appropriation, there is limited transparency regarding how and why the agency is allocating its funding. In addition, documenting the agency’s rationale for prioritizing funding—regardless of the timing of the briefing to the NIJ Director—would be worthwhile so that there is a record of the agency’s decision. Furthermore, the significant amount of funding NIJ administers under this appropriation, as well as the continuing demand for DNA analysis, highlights the importance of ensuring transparency when it comes to determining priorities for funding allocations. NIJ has processes in place to assess progress of the DNA Backlog Reduction Program, but does not have an approach to verify performance data submitted by grantees so as to reduce error rates. NIJ also has a performance measure to assess the results of this program, but data are lacking to determine whether efforts are having a measurable impact in reducing the DNA backlog. NIJ assesses performance of the DNA Backlog Reduction Program by requiring grantees to submit reports every 6 months outlining their progress in, among other things, meeting the program goals and objectives established in their initial applications for funding. A key component of these progress reports is data on grant results— performance data—that outline grantee progress such as in analyzing cases using NIJ funds. NIJ program managers are responsible for reviewing performance measure data to assess progress in meeting grantee goals and objectives. NIJ also assesses grantee progress and performance by conducting monitoring activities that include, among other things, desk reviews and in-depth monitoring activities. While NIJ has developed performance measures for its grant programs and collects performance measurement data from its grantees, the agency does not have an approach to verify the reliability of the data—a process of checking or testing performance data to reduce the risk of using data that contain significant errors—and, as a result, faces continuing data errors. In October 2011, based on NIJ’s review of progress reports, NIJ noted that, with respect to the DNA Backlog Reduction Program, 30 percent of progress reports submitted by grantees in 2011 had errors in the collection and reporting of data, contained inaccurate data, or lacked goals and updates on progress achieved. Furthermore, according to an NIJ review of site visits it conducted in 2010, NIJ identified many issues with how data are collected and, many times, NIJ found data that were neither accurate nor auditable. In response to these concerns, in October 2011, NIJ provided grantees with additional guidance for preparing data collection plans and required them to explain any changes in data that they had previously submitted. NIJ also began requiring grantees to use an updated progress report form and provided an updated spreadsheet with pre-populated math formulas for reporting data on cases and samples analyzed to minimize data reporting errors. However, in March 2013, a year and a half after these actions, NIJ officials stated that they still estimate that 30 percent of progress reports submitted by grantees contain errors. NIJ officials noted that progress reports are sent back to grantees to correct mistakes, and the grantees are in turn required to send the reports back to NIJ for review and approval. 
OJP requires that award recipients collect data that are appropriate for facilitating reporting requirements for GPRA, as amended, and that valid and auditable source documentation is available for such data. Office of Management and Budget (OMB) guidance states that in order to assess progress toward achievement of performance goals, performance data must be appropriately accurate and reliable for intended use. OMB’s guidance further states that verification and validation of performance data support the general accuracy and reliability of performance information, reduce the risk of inaccurate performance data, and provide a sufficient level of confidence to Congress and the public that the information presented is credible as appropriate to its intended use. In addition, our work on performance assessment has identified that data verification helps to ensure that users can have confidence in the reported performance information because it provides a mechanism for assessing data completeness, accuracy, consistency, and timeliness, among other things. NIJ officials stated that they had not taken action to verify performance data because NIJ does not have access to original data to check that data are being reported correctly. As a result, officials stated that they primarily rely on grantees to submit reliable data. Officials also noted that they do not have the resources to systematically verify the reliability of data reported because, on average, each program manager is responsible for monitoring about 200 awards. Officials explained that as part of monitoring efforts—enhanced programmatic desk reviews, site visits, and review of progress reports—program managers will spot-check data for any anomalies and will follow up with grantees in cases where the data seem inaccurate. Ensuring the reliability of the data is especially important in light of the fact that the DNA Backlog Reduction Program is NIJ’s largest investment. Furthermore, NIJ reports performance data in OJP’s annual Performance Budget to show progress in reducing the DNA backlog. Without an approach to verify grantee-reported data, NIJ cannot provide assurance that grantees have valid and auditable source documentation for the data they report, as required by OJP. In addition, NIJ officials are not required to verify performance data when reviewing progress reports. NIJ estimated that 30 percent of progress reports were sent back to grantees for correction because they contained errors. However, because program managers are not required to verify performance data, NIJ cannot be certain that the remaining 70 percent of progress reports were free of data errors. As a result, NIJ cannot provide a sufficient level of confidence to Congress and the public that performance data associated with the DNA Backlog Reduction Program are reliable enough to show that the program is successfully meeting its goals or reducing the DNA backlog. As part of its site visit monitoring efforts, during which NIJ officials have access to grantees’ original source data, NIJ could, for example, assess whether additional edit checks are needed to better ensure data are reliable. Although there could be a cost associated with such efforts, defining a cost-effective approach to verify its performance measurement data would better position NIJ to help ensure that it is providing quality information to the public, internal agency officials, and congressional decision makers who play a role in determining where to allocate NIJ funding resources.
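To make the notion of edit checks concrete, the sketch below shows the kind of automated consistency tests that could be run against grantee-reported figures. It is a minimal illustration only; the field names and reconciliation rules are assumptions made for the example and are not drawn from NIJ's actual progress report form.

```python
# Hypothetical edit checks for grantee-reported backlog figures. The field
# names and reconciliation rules are illustrative assumptions, not NIJ's
# actual progress report requirements.

def edit_check(report):
    """Return a list of issues found in one semiannual progress report."""
    issues = []
    fields = ("cases_received", "cases_completed", "backlog_start", "backlog_end")
    for field in fields:
        value = report.get(field)
        if value is None or value < 0:
            issues.append(f"{field} is missing or negative")
    if issues:
        return issues
    # Completed cases should not exceed the cases available to work.
    available = report["backlog_start"] + report["cases_received"]
    if report["cases_completed"] > available:
        issues.append("cases completed exceed backlog plus new cases received")
    # The ending backlog should reconcile with the other figures.
    expected_end = available - report["cases_completed"]
    if report["backlog_end"] != expected_end:
        issues.append(f"backlog_end does not reconcile (expected {expected_end})")
    return issues

# Example: this hypothetical report is flagged because the ending backlog
# does not reconcile with the other reported figures (expected 1,250).
print(edit_check({"cases_received": 500, "cases_completed": 450,
                  "backlog_start": 1200, "backlog_end": 1300}))
```

Checks of this sort could be applied when a progress report is submitted, so that obvious arithmetic and reconciliation problems are returned to the grantee before a program manager spends review time on them.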
We also found that the performance measure NIJ uses to measure results of the DNA Backlog Reduction Program may yield an incomplete picture. The performance measure, reported as “percent of reduction in DNA backlog casework,” is a projection of DNA casework that grantees expect to complete as opposed to an actual tabulation of completed cases. Using data submitted by grantees, NIJ calculates the number of cases grantees expect to test with future funding, divided by the DNA casework backlog reported by grantees at the end of the calendar year. In fiscal year 2011, for example, the reported percent reduction was 32.9 percent and was based on a calculation of the estimated number of cases grantees expected to be completed, divided by the total DNA casework backlog. Grantees submit their estimated number of cases to NIJ in their funding applications before they are awarded the grants and begin work. Further, these grantees have up to 3 years to complete their work. NIJ officials explained that reducing the DNA backlog is a DOJ goal in support of GPRA and the agency reports this measure in OJP’s annual Performance Budget. According to officials, the DNA Backlog Reduction Program is NIJ’s program with the most immediate impact in reducing the DNA backlog. However, NIJ’s performance measure does not demonstrate actual results, as required by GPRA, as amended. In addition, NIJ has established a target of 25 percent reduction in DNA backlog casework, which NIJ officials stated that they establish based on historical knowledge. However, NIJ’s target is also a projection of the DNA casework that grantees expect to achieve based on estimates submitted in grantee applications. Our prior work on GPRA states that agencies that were successful in measuring performance strived to establish performance measures that, among other things, enable an organization to assess accomplishments, make decisions, realign processes, assign accountability, and demonstrate results so as to tell each organizational level how well it is achieving its goals. In addition, in our work assessing performance measures, we identified that performance measures should provide useful information for decision making by providing managers and other stakeholders timely, action-oriented information in a format that helps them make decisions that improve program performance. Measures that do not provide managers with useful information will not alert managers and other stakeholders to the existence of problems or help them respond when problems arise. According to NIJ officials, the agency is unable to develop a performance measure that reports actual cases completed (on the fiscal year basis called for in OJP’s annual budget submission) under the DNA Backlog Reduction Program because grantees have up to 3 years to complete their work and the completed number of cases for the entire grant period is not known until the grant period closes. In addition, NIJ officials explained that they would prefer a more meaningful measure, but the current measure captures NIJ’s best guess of the percentage reduction in DNA backlog casework. While measuring annual performance for multi-year grants can be challenging, NIJ could take steps to better assess the results of DNA backlog efforts by analyzing performance data on actual cases completed, which the agency already collects from grantees every 6 months as part of the grantees’ progress reports. 
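To illustrate the difference between the current projection-based measure and an outcome-based alternative, the following sketch computes both from the formula described above. The numbers are hypothetical placeholders; only the structure of the calculation, expected or completed cases divided by the reported casework backlog, reflects the report.

```python
# Illustrative comparison of NIJ's projection-based backlog measure with an
# outcome-based alternative. All figures are hypothetical placeholders.

def projected_reduction(expected_cases, reported_backlog):
    """Current measure: cases grantees expect to analyze with future funding,
    divided by the DNA casework backlog reported at the end of the calendar year."""
    return expected_cases / reported_backlog * 100

def actual_reduction(completed_cases, reported_backlog):
    """Alternative: cases actually completed (reported in semiannual progress
    reports), divided by the same reported backlog."""
    return completed_cases / reported_backlog * 100

# Hypothetical year: grantees project 33,000 cases against a reported backlog
# of 100,000 but complete 25,000 during the period.
print(f"Projected reduction: {projected_reduction(33000, 100000):.1f} percent")  # 33.0
print(f"Actual reduction:    {actual_reduction(25000, 100000):.1f} percent")     # 25.0
```

In this hypothetical case, the projected figure overstates progress whenever grantees complete fewer cases than they estimated in their applications, which is the gap a measure based on actual completions would close.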
In fact, for grants that have not yet closed, NIJ has already started analyzing actual performance data to identify grantees’ annual progress in meeting goals. Such data, once sufficiently reliable, could help NIJ to better assess actual results and develop a more accurate performance measure. Officials from OJP’s Office of the Chief Financial Officer—the office responsible for reporting performance measure information—stated that in order to modify a measure, NIJ would need to propose the change and OJP would submit the proposed revised measure, with approval of other components of DOJ, to OMB in the OJP annual budget submission. By revising its performance measure to include casework actually completed, NIJ will be better situated to provide decision makers with timely, action-oriented information that helps them make decisions that improve program performance or that alerts them to the existence of problems so they can respond to them when they arise. As reported by DOJ, federal funds to address the persisting backlogs of untested DNA samples, among other things, have provided needed support to state and local laboratories to help resolve criminal cases. However, DOJ could take additional steps to improve transparency and better assess results. While NIJ has a process to determine funding priorities, by documenting the agency’s rationale for funding allocation decisions, as recommended by standards for internal control in the federal government, NIJ could enhance the transparency of its funding priorities. DOJ has a process in place, as well as a performance measure, to assess results of NIJ’s DNA Backlog Reduction Program, but the agency could verify data and use actual outcomes, consistent with federal requirements, to attain reasonable assurance that funds are having a measurable impact in reducing DNA backlogs. We recommend that the Director of NIJ take the following three actions. In order to provide stakeholders and Congress greater transparency regarding its funding allocations, we recommend that the Director of NIJ document the rationale for its annual funding priorities. In order to assist Congress and NIJ management and stakeholders to better assess whether NIJ’s DNA Backlog Reduction Program is having a measurable impact in reducing the DNA backlog, we recommend that the Director of NIJ take the following two actions: develop a cost-effective approach to verify performance data submitted by grantees to provide reasonable assurance that such data are sufficiently reliable to report progress in reducing the DNA backlog, and revise the “percent of reduction in DNA backlog casework” performance measure to include casework actually completed as part of the measure instead of casework that is projected. We provided a draft of this report to DOJ for review and comment. DOJ provided written comments, which are reproduced in full in appendix II, and technical comments, which we incorporated as appropriate. DOJ agreed with all three of the recommendations and outlined steps to address them. With respect to the first recommendation, OJP stated that in fiscal year 2014, the Director of NIJ will begin documenting the rationale for the estimated initial allocation of funds appropriated (or anticipated to be appropriated) for DNA analysis and capacity enhancement program efforts, and for other local, state, and federal forensic activities.
Regarding the second recommendation, OJP stated that once NIJ revises the performance measure for the NIJ DNA Backlog Reduction Program (in response to our third recommendation), NIJ will begin developing a cost-effective approach to provide reasonable assurance that data collected from grantees, in support of the new or revised performance measure, are sufficiently reliable to report program progress. Finally, for the third recommendation, OJP stated that the Director of NIJ will undertake efforts to revise the performance measure for the NIJ DNA Backlog Reduction Program and anticipates that the new or revised performance measure will reflect actual cases completed. OJP also noted that the new or revised performance measure will be subject to review and/or approval of other DOJ components as well as the Administration. We are sending copies of this report to the Attorney General, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841, mackinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The Department of Justice (DOJ) was appropriated about $691 million from fiscal years 2008 through 2012 for the DNA and forensic program, which is administered by its National Institute of Justice (NIJ). NIJ awarded almost two-thirds of all funds—or about 63 percent—to either state or local units of government from the DNA and forensic program appropriation for fiscal years 2008 through 2012, as seen in figure 4. Of the approximately $691 million, about $442 million, or 64 percent, went to directly benefit state and local units of government to reduce the DNA backlog. Of the $442 million, about 96 percent, or about $422 million, was allocated directly to either state or local units of government through NIJ’s DNA backlog initiatives, including NIJ’s DNA Backlog Reduction Program. Three percent—or about $14 million—was awarded to public universities to conduct DNA analyses and increase lab capacity on behalf of state and local governments, and an additional 1 percent—about $6 million—was awarded to for-profit businesses to conduct similar analyses for certain populations of DNA samples. Approximately $212 million of $691 million was awarded to various entities for purposes that do not directly support state and local governments’ efforts to reduce the DNA backlog, but for which there may be an indirect benefit to the reduction of DNA backlogs. For example, of the $212 million, public colleges and universities received about $63 million for initiatives that do not directly benefit state and local government efforts to reduce DNA backlogs. See table 3. In addition to the contact named above, Dawn Locke (Assistant Director), Joel Aldape, Brian Lipman, and Jeremy Manion made significant contributions to the work. Also contributing to the report were Michele Fejfar, Grant Mallie, Jessica Orr, and Janet Temko. | Since 2008, Congress has appropriated more than $100 million each year to the Department of Justice (DOJ) that may be used, among other things, to reduce DNA backlogs and enhance crime laboratory capacity. NIJ, within DOJ, is responsible for, among other things, providing awards for DNA analysis and forensic activities.
NIJ's DNA Backlog Reduction Program was established to provide grants to state and local governments with the intent, in part, of reducing the backlog of DNA samples. The conference report accompanying the Consolidated and Further Continuing Appropriations Act, 2012, mandated GAO to examine, among other things, DNA analysis funds. This report addresses (1) how NIJ has allocated its DNA and forensic program appropriation over the past 5 fiscal years, (2) the extent that NIJ has a process to determine its funding priorities for its DNA and forensic program appropriation, and (3) the extent that NIJ verifies data on grant results submitted by grantees and measures the outcomes of the DNA Backlog Reduction Program. GAO reviewed relevant appropriations, NIJ funding documentation, and data from fiscal years 2008 through 2012, and interviewed NIJ officials. The National Institute of Justice (NIJ) allocated funding for various DNA and other forensic science activities, with the majority of the available $691 million from fiscal years 2008 through 2012 going to state and local governments to reduce the DNA backlog. Specifically, over this 5-year period, 64 percent was allocated through initiatives that directly benefit state and local efforts to reduce DNA backlogs and build DNA analysis capacity. The largest initiative was NIJ's DNA Backlog Reduction Program, and other DNA backlog initiatives included DNA analysis of cold cases, among others. A smaller portion (31 percent) went to other forensic sciences initiatives, such as research and development and training, although NIJ officials stated that funding these initiatives may have long-term benefits for reducing the DNA backlog. The remainder of the funding went toward other activities, such as management and administration. NIJ has a process in place to determine DNA and forensic program funding priorities, but its decisions regarding these priorities are not clearly documented. According to NIJ officials, the rationale for funding the DNA Backlog Reduction Program versus other initiatives is documented in briefing slides, but these documents do not show NIJ's rationale for how funding priorities are determined. For example, while the budget documents for fiscal years 2012 and 2013 show the final amounts NIJ decided to allocate to various initiatives, these documents do not provide details on the justifications for how funding levels were determined for each initiative. Without a clearly documented process that demonstrates the rationale for NIJ's annual funding priorities, there is limited transparency regarding how and why the agency is allocating its funding. NIJ could verify data and revise its performance measure to better assess the DNA Backlog Reduction Program. NIJ assesses performance of this program by requiring grantees to submit reports every 6 months that, in part, outline their progress in meeting program goals and objectives. However, NIJ does not have an approach to verify the reliability of the data--testing data to ensure data quality--and as a result, faces continuing data errors. Verifying these data would help ensure that the data are reliable enough to show that the program is successfully meeting its goals. In addition, NIJ has a performance measure to assess the results of this program--percent of reduction in DNA backlog casework--but it is a projection of DNA casework that grantees expect to complete as opposed to an actual tabulation of completed cases. 
While measuring annual performance for multiyear grants can be challenging because the completed number of cases is not known until after the grant period closes, taking steps to analyze performance data on actual cases completed could help NIJ to better assess actual results. GAO recommends that NIJ clearly document the rationale for annual funding priorities, develop a cost-effective approach to verify the reliability of grantee performance data, and revise its performance measure to reflect actual completed cases. DOJ agreed with GAO's recommendations and outlined steps to address them. |
In February 1994, the Attorney General and INS Commissioner announced a five-part strategy to strengthen enforcement of the nation’s immigration laws. The strategy’s first priority was to strengthen enforcement along the southwest border. The strategy to strengthen the border called for “prevention through deterrence,” that is, raising the risk of apprehension for illegal aliens to “make it so difficult and so costly to enter this country illegally that fewer individuals even try.” The objectives of the strategy were to close off the routes most frequently used by smugglers and illegal aliens (generally through urban areas) and shift traffic through the ports of entry that inspect travelers or over areas that were more remote and difficult to cross. With the traditional routes disrupted, INS expected that illegal alien traffic would either be deterred or forced over terrain less suited for crossing, where INS believed it would have the tactical advantage. To carry out the strategy, the Border Patrol was to concentrate personnel and resources in a four-phased approach starting with the areas of highest illegal alien activity, increase the time Border Patrol agents spend on border-control activities, make maximum use of physical barriers, and identify the appropriate quantity and mix of technology and personnel needed to control the border. To complement the Border Patrol’s efforts, the strategy called for INS Inspections to enhance efforts to deter illegal entry at the ports of entry and increase the use of technology to improve management of legal traffic and commerce. INS’ Border Patrol and Inspections are the two components chiefly responsible for deterring illegal entry along the southwest border. These two components represented 28 percent of INS’ total budget of $3.8 billion in fiscal year 1998. INS also provides support for the strategy by allocating funds to other INS programs for computer automation, technology procurement, construction of facilities and barriers, and detention and removal of illegal aliens. INS’ Border Patrol is responsible for preventing and detecting illegal entry along the border between the nation’s ports of entry. The Border Patrol is divided into 21 sectors, 9 of which are along the southwest border. The Border Patrol’s budget for fiscal year 1998 was $877 million, a 20-percent increase over its fiscal year 1997 budget of $730 million. As of September 1998, there were about 8,000 Border Patrol agents nationwide. About 7,400, or 93 percent, were located in the 9 sectors along the southwest border. (App. I contains detailed staffing and selected workload data for the Border Patrol.) INS Inspections and the U.S. Customs Service share responsibility for inspecting all applicants seeking admission at U.S. ports of entry. Among other things, these inspections are to prevent the entry of inadmissible applicants by detecting fraudulent documents, including those representing false claims to U.S. citizenship or permanent residence status. INS’ Inspections fiscal year 1998 budget for land-border inspections was about $171 million, a 12-percent increase over its fiscal year 1997 budget of about $152 million. As of September 30, 1998, Inspections had about 2,000 inspectors at land ports of entry nationwide, of which about 1,500 were located at the southwest border land ports of entry. 
In fiscal year 1998, INS and Customs inspectors along the southwest border inspected about 303 million people, including 213 million--or 70 percent--who were aliens, and 90 million--or 30 percent--who were U.S. citizens. (App. I contains detailed staffing and selected workload data for INS Inspections.) To determine the progress made in implementing the strategy during fiscal year 1998, we (1) analyzed INS staff allocations to determine if they were consistent with its strategy, (2) reviewed INS performance reviews of its fiscal year 1998 Priorities and Performance Management Plan, (3) analyzed INS’ budget and Border Patrol and Inspections workload data, and (4) interviewed INS Border Patrol and Inspections headquarters officials. Also, we reviewed a study commissioned by the Office of National Drug Control Policy (ONDCP), which estimated the number of Border Patrol agents needed to control the southwest border. In addition, we reviewed a Department of Justice Office of Inspector General’s (OIG) report on INS’ implementation of its automated biometrics identification system (IDENT) along the southwest border. To determine the strategy’s interim effects, we analyzed INS data on apprehensions made along the southwest border and the number of persons apprehended while attempting to enter the United States illegally at the southwest border land ports of entry. We also reviewed sections from INS’ performance reviews of its fiscal year 1998 Priorities and Performance Management Plan that reported on the strategy’s interim effects. To determine what actions have been taken to implement our recommendation that INS develop and implement a comprehensive evaluation of the strategy, we obtained written comments on INS’ evaluation plans from INS’ Executive Associate Commissioner for Policy and Planning; and we discussed the comments with an official from INS’ Office of Policy and Planning. We did not independently verify the validity of INS computer-generated workload or apprehensions data. However, as we did for our first report,we discussed with INS officials their data validation efforts. These officials were confident that the data could be used to accurately portray trends over time. We conducted our work between August 1998 and February 1999 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Attorney General. The Attorney General did not provide comments but instead requested INS to respond to our request. INS’ oral comments are discussed on page 28. During fiscal year 1998, INS continued to make progress toward implementing the Attorney General’s strategy. As called for in the strategy, INS allocated its new Border Patrol agent positions according to its four- phased approach and increased the amount of time agents spent on border enforcement activities. INS constructed additional fencing along the southwest border and continued to deploy technologies such as night vision devices and remote video surveillance systems. Further, INS was testing a computer model designed to determine the appropriate mix of staffing, equipment, and technology in Border Patrol sectors. During fiscal year 1998, INS completed phase I of its strategy, which called for concentrating resources in the San Diego, CA, and El Paso, TX, sectors, and transitioned to phase II, which called for increasing resources in the Tucson, AZ, sector and three sectors in south Texas—Del Rio, Laredo, and McAllen, according to INS officials. 
Consistent with the strategy, INS allocated 740 (74 percent) of the additional 1,000 Border Patrol agent positions authorized in fiscal year 1998 to phase II sectors in Arizona and Texas. The strategy was designed to allow for flexibility in responding to unexpected changes in the illegal immigration flow. To address an increase in the number of apprehensions of illegal aliens in the El Centro, CA; Yuma, AZ; and Marfa, TX, sectors, INS allocated 215 agents authorized in fiscal year 1998 to these sectors, even though these sectors were not originally scheduled to receive resources until phase III of the strategy. As a result of these and previous staff increases, the number of agents along the southwest border increased from 3,389 as of October 1993 to 6,315 as of September 1997 to 7,357 as of September 1998, an increase of 117 percent between October 1993 and September 1998. Figures 1, 2, and 3 show the increase in the number of agents in sectors along the southwest border during this period. To accommodate the increased number of Border Patrol staff, INS budgeted almost $29 million in fiscal year 1998 for the expansion and replacement of older Border Patrol facilities. INS’ hiring of Border Patrol agents is slowing, despite congressional direction that INS continue hiring and a study that estimated that the Border Patrol may need substantially more agents along the southwest border. The 1996 Act states that the Border Patrol shall hire 1,000 agents each year for fiscal years 1997 through 2001. In addition, a study commissioned by ONDCP estimated that the Border Patrol would need about 16,100 agents in the 9 southwest border sectors to control and deter unauthorized crossings. This number is more than twice the 7,357 agents on board along the southwest border as of September 1998. INS does not expect to meet Congress’ requirement that it hire 1,000 Border Patrol agents each year. INS brought on board 449 new Border Patrol agents between the end of September 1998 and the middle of March 1999. However, INS lost 377 agents during the same time period, resulting in a net gain of 72 agents. An INS headquarters official said that INS expects to fall short of its fiscal year 1999 Border Patrol agent hiring goal by 600 to 800 agents. In addition, the administration’s fiscal year 2000 budget does not request any additional Border Patrol agent positions. In March 1999, the INS Commissioner testified that nearly 48 percent of the Border Patrol agents had less than 3 years of experience, and law enforcement experts had indicated that it is risky to allow an agency’s overall ratio of inexperienced to experienced officers to exceed 30 percent. Also, according to an INS official, INS lacks adequate facilities to support the increased number of agents along the southwest border. Therefore, according to INS, maintaining staffing at the fiscal year 1999 level will give INS time to develop more experienced agents and allow INS to allocate the funds it needs to improve facilities. The strategy also called for the Border Patrol to increase the amount of time agents spend on border enforcement activities, as opposed to program support activities--such as processing aliens who have been apprehended—supervision, and training. During fiscal year 1998, agents in the nine sectors along the southwest border collectively spent 66 percent of their total time on border enforcement activities, 6 percent more than the 60 percent spent in fiscal year 1997. 
Due to the increase in the number of on-board agents, INS also has increased the total amount of time agents spend on border enforcement activities. The fiscal year 1998 Priorities Implementation Plan set a goal that 8.1 million hours nationwide should be devoted to border enforcement activities. According to INS data, the Border Patrol spent about 9 million hours on border enforcement in fiscal year 1998, exceeding its goal by about 11 percent and representing a 32-percent increase over the 6.8 million hours spent on border enforcement in fiscal year 1997. The strategy called for “maximum utilization of lighting, fencing, and other barriers” to deter illegal entry. In our 1997 report, we stated that as of July 1997, INS had about 46 miles of fencing in place and another 23 miles under construction. A Border Patrol official told us that between August 1997 and September 1998, INS constructed about 18 miles of fencing in the Yuma, Tucson, and San Diego sectors. INS also built barriers between the ports of entry to prevent vehicles from driving across the border illegally. It is not clear how much additional fencing and other barriers INS plans to build. The House report accompanying INS’ fiscal year 1999 appropriations required INS to prepare a report by November 15, 1998, on its plans for road and fencing improvements along the New Mexico border. INS reported to Congress on February 12, 1999, that it is developing initial assessments of archaeological and other environmental considerations for border fence and road improvements in New Mexico and other border areas. INS expects to complete these assessments in the summer of 1999. According to an INS headquarters official, the final report will include an integrated plan for lighting and technology in urban corridors along the southwest border, which is to be phased in over a 3- to 5-year period. Two specific border projects are currently scheduled for fiscal year 1999 in New Mexico. Both projects were funded with military engineering support funds. With respect to automation and technology, INS received $47 million in fiscal year 1998 for increases in these areas at the border. Of these funds, INS spent $16.2 million to expand IDENT and ENFORCE, systems designed to track and identify illegal crossers and criminal aliens, $7.5 million to purchase 26 remote video surveillance systems, $3 million to upgrade its integrated sensor and mapping system, and $1 million to purchase additional sensors. INS also used its automation and technology funds for the Border Patrol to initiate, along with the Army Corps of Engineers, the Geographical Information Systems (GIS) project. The goal of the GIS project is to develop a computerized mapping system for the Border Patrol nationwide, adapting technology originally designed for military use. The Border Patrol anticipates that GIS will be used for such purposes as (1) displaying where apprehensions are made, (2) displaying where agents or ground sensors are deployed, (3) analyzing intelligence data, and (4) displaying the terrain agents will be patrolling to help ensure officer safety. The first of three phases of GIS—developing system requirements—began in January 1999 at a cost of $800,000.
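Several of the comparisons above reduce to simple percentage arithmetic: the 117-percent growth in southwest border agents between October 1993 and September 1998, the roughly 11-percent margin by which fiscal year 1998 border enforcement hours exceeded the 8.1-million-hour goal, and the 32-percent increase over fiscal year 1997 hours. The short sketch below is only an illustrative check of those figures; it is not drawn from INS or GAO systems, and the function and variable names are ours.

    # Illustrative check of the percentage comparisons cited above; the figures
    # are those quoted in the text, not data drawn from INS or GAO systems.

    def pct_change(old: float, new: float) -> float:
        """Percentage change from an earlier value to a later one."""
        return (new - old) / old * 100

    agents_oct_1993 = 3_389      # on-board southwest border agents, October 1993
    agents_sep_1998 = 7_357      # on-board southwest border agents, September 1998
    print(f"Agent growth: {pct_change(agents_oct_1993, agents_sep_1998):.0f}%")   # about 117%

    hours_goal_fy1998 = 8.1      # border enforcement hours goal, in millions
    hours_fy1998 = 9.0           # hours actually spent in fiscal year 1998, in millions
    hours_fy1997 = 6.8           # hours spent in fiscal year 1997, in millions
    print(f"Over goal: {pct_change(hours_goal_fy1998, hours_fy1998):.0f}%")       # about 11%
    print(f"Year over year: {pct_change(hours_fy1997, hours_fy1998):.0f}%")       # about 32%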
To identify the appropriate quantity and mix of personnel, equipment, and technology needed to control the border, as of January 1999, INS headquarters was testing a Resource and Effectiveness Model designed to measure how changes in resources affect the Border Patrol’s effectiveness in apprehending illegal aliens and seizing narcotics. In fiscal years 1997 and 1998, INS spent approximately $1.37 million on contractor costs to develop the computer model, according to a Border Patrol official. At the time of our review, the model was not yet operational in any of the southwest border sectors. The Border Patrol official stated that INS plans to issue another contract to deploy the model to sectors at a cost of $700,000 in fiscal year 1999. A House report had directed INS to report to Congress on “the use of these technologies and how current operational doctrine would need to be adjusted to effectively utilize the information gathered with high technology systems” (H.R. 105-36 at 34). INS submitted its report to Congress on February 19, 1999. The model uses data such as the number of apprehensions; the amount of technology and equipment--such as lighting, fencing, and barriers--used to deter and detect aliens; and the number of agents. In addition, the model is to include an estimate of the number of aliens who eluded INS apprehension. The model is designed to help identify the appropriate mix of personnel and technology by measuring the impact that any changes in either of these factors would have on the level of effectiveness, with effectiveness defined as the proportion of the estimated number of illegal aliens who had entered the United States and were apprehended. We did not review the model; therefore, we cannot assess how well it is likely to measure the Border Patrol’s effectiveness. However, one of the factors in the model--an estimate of the number of aliens who eluded apprehension--historically has not been amenable to reliable measurement. The strategy postulated that increased enforcement between the ports of entry would cause an increase in port-of-entry activity, including increased attempts to enter through fraudulent means. Between March 1997 and September 1998, INS added 179 inspectors to ports along the southwest border to handle this anticipated increased activity, bringing its inspector staffing level to 1,454, just short of the 1,485 inspectors that were authorized. These land ports of entry are under the jurisdiction of five INS district offices located along the southwest border. (See fig. 4 for the number of inspectors in southwest border districts.) According to an INS official, during fiscal year 1997, INS and Customs officials began discussing the level of staffing necessary to conduct primary inspections along the southwest border. As of April 1999, no staffing decisions had been made. Consequently, INS did not request any additional southwest land-border inspector positions in its fiscal year 1998 and 1999 budgets. According to INS’ fiscal year 1998 review of its Priorities and Performance Management Plan, at land-border crossings, INS Inspections has concentrated on increasing the use of technology to facilitate the entry of legal traffic into the United States. One such effort has been the construction of dedicated commuter lanes that use technology to automatically identify vehicles and validate the identity of occupants who have passed a preclearance process.
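The effectiveness measure attributed to the Resource and Effectiveness Model above is a simple ratio: apprehensions divided by the estimated total number of illegal entrants (apprehensions plus those estimated to have eluded capture). The sketch below is a hypothetical illustration of that ratio, not the model itself, which we did not review; the function name and sample values are assumptions.

    # Hypothetical illustration of the effectiveness ratio described in the text.
    # This is not INS' Resource and Effectiveness Model, which GAO did not review;
    # the sample values are invented for illustration.

    def effectiveness(apprehensions: int, estimated_eluded: int) -> float:
        """Proportion of estimated illegal entrants who were apprehended.

        The estimate of entrants who eluded apprehension is the input that,
        as noted above, historically has not been reliably measurable.
        """
        estimated_entrants = apprehensions + estimated_eluded
        return apprehensions / estimated_entrants

    # Assumed values for a single sector and period:
    print(f"Effectiveness: {effectiveness(apprehensions=50_000, estimated_eluded=150_000):.0%}")  # 25%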
The goal of the dedicated commuter lanes is to reduce the time it takes to complete an inspection at ports of entry by segregating high-frequency, low-risk, prescreened travelers from other traffic. Construction delays prevented INS from adding two dedicated commuter lanes at the San Ysidro, CA, port of entry as originally planned. INS plans to complete these two lanes and a new lane in El Paso, TX, during fiscal year 1999. To increase enforcement efforts, southwest border ports continued activities such as using joint enforcement teams to inspect travelers and conducting multiagency cross-training, according to INS reports. To improve its effectiveness in deterring illegal entry, in July and August 1998, INS conducted a 2-month test of the Inspections Travelers’ Examinations (INTEX) process. INTEX consists of reinspecting a randomly selected number of travelers to determine if the primary inspector made the correct decision. The INTEX test included 10 air and 10 land ports of entry. Of the 3,511 travelers inspected during the INTEX test, 3,452 people, or 98 percent, were correctly admitted into the United States by the primary inspector. Primary inspectors incorrectly admitted 59 people, or about 2 percent. According to an INS official, while the preliminary INTEX test was satisfactory, the sample was too small for INS to be able to project the results to the universe of nearly 500 million inspections INS conducts yearly. By the end of fiscal year 1999, INS plans to have conducted enough random inspections to be able to project the results. Contingent on INS’ appropriations, INS plans to expand INTEX to 65 additional ports in fiscal year 2000, bringing the total number of ports using INTEX to 85. INTEX is to be used to suggest how the inspection process can be improved as well as help INS comply with the Government Performance and Results Act of 1993, which requires agencies to establish systems for measuring program performance. As the strategy along the southwest border is carried out, the Attorney General has anticipated the following interim effects: (1) an initial increase in the number of illegal aliens apprehended in locations receiving an infusion of Border Patrol resources, followed by a decrease in apprehensions; (2) a shift in the flow of illegal alien traffic from sectors that traditionally accounted for most illegal immigration to other sectors; (3) increased attempts by aliens to enter the United States illegally at the ports of entry; (4) increased fees charged by alien smugglers and the use of more sophisticated smuggling tactics; (5) an eventual decrease in attempted reentries by illegal aliens who previously have been apprehended; and (6) reduced violence at the border. Although evaluative data continue to be limited, available data indicated that some of the anticipated effects have continued to occur since our last report. INS’ apprehension data indicated a continued shift in illegal alien traffic from traditionally high illegal entry points to other areas as INS resources were deployed according to the planned approach. Such shifts in apprehensions have been associated with a change in the causes and locations of alien deaths along the border, leading INS to initiate a Border Safety Initiative in cooperation with the Mexican government. Inspectors at southwest border ports of entry apprehended an increased number of persons attempting fraudulent entry and, according to an INS report, smugglers in the Tucson sector were charging higher fees.
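The INTEX results reported above are a sample-based error rate, and INS' caution about projecting them is in part a question of sampling precision. The sketch below illustrates one conventional way to put a margin of error around the observed rate, using a normal-approximation confidence interval; it is our illustration, not a method INS described. The 1.96 factor corresponds to a 95-percent confidence level under that approximation.

    # Illustrative sampling arithmetic for the INTEX test figures cited above;
    # the normal-approximation interval is our illustration, not an INS method.
    import math

    n = 3_511          # travelers reinspected during the 2-month INTEX test
    errors = 59        # travelers the primary inspector incorrectly admitted

    p = errors / n     # observed error rate, about 1.7 percent
    margin = 1.96 * math.sqrt(p * (1 - p) / n)   # 95-percent confidence half-width

    print(f"Observed error rate: {p:.1%} +/- {margin:.1%}")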
However, data are inconclusive or lacking on certain key aspects of the strategy. For example, INS has not analyzed data on whether the strategy’s prediction of an initial increase in apprehensions followed by a decrease, as resources are applied, has occurred in sectors receiving resources in phase II of the strategy. Further, data were unavailable on whether there has been a decrease in attempted reentries made by illegal aliens who previously have been apprehended. In addition, crime data being collected do not appear to be useful for gauging the strategy’s impact on reducing border violence. The strategy anticipated an initial increase in the number of apprehensions of illegal aliens in locations that had received an infusion of Border Patrol resources, followed by a decrease in the number of apprehensions when a “decisive level of resources” had been achieved, indicating that illegal aliens were being deterred from entering. INS had not defined the criteria for achieving a “decisive level of resources” in a particular area, so the timing of such changes in apprehension levels remains unclear. In addition, INS had not analyzed apprehension data over time to determine if the predicted pattern of increases followed by decreases had occurred in the phase II sectors that received resources in fiscal year 1998. Figures 5 through 7 present data on apprehensions by Border Patrol sector and strategy phase. It is difficult to determine the meaning of these numbers at this time, because INS is still implementing phase II of the strategy. Apprehension levels in fiscal years 1997 and 1998 in the two phase I sectors (San Diego and El Paso) were considerably lower than they were in fiscal year 1993. (See fig. 5.) In two of the phase II sectors (Tucson and Del Rio), apprehension levels increased in both fiscal years 1997 and 1998. In the other two phase II sectors (Laredo and McAllen), apprehension levels increased between fiscal years 1993 and 1997, then decreased in fiscal year 1998. However, in fiscal year 1998, apprehension levels in these two sectors were still higher than in fiscal year 1993. (See fig. 6.) In two of the three phase III sectors (El Centro and Yuma), apprehension levels increased in both fiscal years 1997 and 1998, as compared with fiscal year 1993, whereas in Marfa apprehension levels have remained relatively constant during these 3 years. (See fig. 7.) The strategy also anticipated a shift in the flow of illegal alien traffic from sectors that had traditionally accounted for most illegal immigration activity to other sectors as well as shifts within sectors from urban areas, where the enforcement posture is greater, to rural areas. Our analysis of INS apprehension data indicated that such a shift has continued to occur since our previous report. We found that apprehensions in San Diego and El Paso—sectors that had traditionally accounted for the most illegal alien traffic—decreased 9 percent, from 408,265 apprehensions in fiscal year 1997 to 373,127 apprehensions in fiscal year 1998. As a percentage of all southwest border apprehensions, apprehensions in El Paso and San Diego decreased from 68 percent in fiscal year 1993 to 30 percent in fiscal year 1997 to 24 percent in fiscal year 1998. (See fig. 8.) The percentage of southwest border apprehensions increased significantly in some sectors. For example, the Tucson sector’s percentage of all southwest border apprehensions increased from 8 percent in fiscal year 1993 to 26 percent in fiscal year 1998.
Similarly, the percentage in the El Centro sector, east of San Diego, increased from 2 percent of all southwest border apprehensions to 15 percent over the same time period. Some data indicated that preventing illegal entry in certain traditional entry points along the southwest border and shifting illegal alien traffic to areas that are more remote and difficult to cross have resulted in an unanticipated effect--that is, a change in the causes and locations of the deaths of some illegal aliens who attempt to cross the border at these remote border areas. A 1998 University of Houston study estimated the number of undocumented migrant deaths at more than 1,600 between 1993 and 1997. Although the study did not find that the overall number of migrant deaths had increased significantly over the 5-year period, it concluded that the causes and locations of the deaths had changed markedly. Deaths from environmental causes, such as hypothermia and dehydration, increased in California and Texas, as did deaths from drowning in the All-American Canal in Imperial County, CA. Deaths from automobile/pedestrian accidents, homicides, and drowning in the San Diego area decreased. According to INS officials, reports of migrant deaths prompted the INS Commissioner to announce, in June 1998, a Border Safety Initiative designed to reduce injuries and prevent fatalities along the southwest border. INS developed the initiative in cooperation with the Mexican government and state and local officials in border communities to (1) prevent deaths and injuries by informing and warning potential illegal aliens of the realities and dangers of crossing the border at particular routes, (2) target search and rescue operations in hazardous areas, and (3) establish procedures and resources to help local officials identify the bodies of persons who have died while attempting to cross the border. INS developed a methodology to track migrant deaths in 40 counties that are contiguous to the border or have historically been known for migrant deaths due to routes of travel and environmental conditions. INS estimated that 254 migrants died while trying to cross the border in fiscal year 1998. INS was also developing a model to track Border Patrol rescues along the border, beginning in fiscal year 1999. The strategy postulated that there would be increased attempts by aliens to enter the United States illegally at the ports of entry as it became more difficult to enter between the ports. No direct indicators of the number of illegal entry attempts currently exist. However, land ports of entry along the southwest border experienced a 17-percent increase in the number of fraudulent documents intercepted, from 70,155 in fiscal year 1997 to 82,101 in fiscal year 1998. These ports of entry also had a 4-percent increase in the number of false claims to United States citizenship, from 19,667 in fiscal year 1997 to 20,496 in fiscal year 1998. It is difficult to determine whether the increases in the number of fraudulent documents intercepted and false claims to U.S. citizenship were a result of actual increases in illegal entry attempts at the ports and/or a result of greater efforts made to detect fraud. As it became more difficult to cross the border illegally, INS anticipated an increase in fees charged by alien smugglers and the use of more sophisticated smuggling tactics. There is some evidence that these interim effects have occurred.
For example, a January 1999 report by the Tucson Border Patrol sector indicates the cost of smuggling and the sophistication of smuggling techniques through that sector increased. According to this report, based on interviews with apprehended illegal aliens conducted by personnel from Tucson’s Anti-Smuggling Unit, the cost of being smuggled from the border to the interior of the United States had increased. For example, the cost of being smuggled 1,000 miles reportedly increased from about $1,000 in fiscal year 1996 to an estimated $1,350 in fiscal year 1998. At the same time, the Tucson report also stated that alien smugglers were using more sophisticated smuggling tactics. The report attributed these changes to the increase in Tucson sector personnel that resulted from the implementation of the border strategy. Currently, INS is expanding data collection on smuggling fees across the entire southwest border. In fiscal year 1998, the El Paso Intelligence Center collected baseline data for a 2-month period, on fees charged for smuggling Central American and Mexican aliens from the southwest border to secondary staging areas and final destination points or work locations. A summary of the findings in INS’ fiscal year 1998 Priorities and Performance Management Plan review stated that smuggling fees from border areas to various cities in the interior of the United States, such as New York and Los Angeles, ranged from $600 to $1,200. Although the review stated that “the quantity and quality of the data were not comprehensive,” INS intends to refine its data collection efforts in fiscal year 1999. INS officials also cited concerns that INS’ collection and analysis of intelligence data on alien smuggling is limited because some INS offices do not have full-time intelligence officers. The strategy postulated that there would be a decrease in recidivism--that is, in attempted reentries by illegal aliens who previously had been apprehended--as control was gained in particular locations. According to INS, this would be an indicator that the strategy was deterring illegal alien entry. INS planned to use IDENT, its automated fingerprinting system, to identify recidivists and analyze their crossing patterns along the southwest border. In our 1997 report, we stated that computer problems had affected the usefulness of IDENT data and INS’ ability to track recidivism over several years. At that time, INS officials told us that although IDENT data gathered since January 1996 were reliable and accurate, they had not done any analysis to examine trends in recidivism. In April 1999, INS officials told us that since IDENT began as a prototype in October 1994, several modifications have been made to the system’s hardware and software, which have resulted in improved matching and data accuracy. In addition, the proportion of apprehended aliens enrolled in IDENT has been increasing as more Border Patrol sectors have begun using IDENT. For example, during the fourth quarter of fiscal year 1998, 85 percent of the illegal aliens apprehended by southwest border sectors were enrolled in the IDENT system compared with 56 percent during the fourth quarter of fiscal year 1997. As a result, INS determined that IDENT data beginning in October 1997 were sufficiently complete and reliable for internal analysis. INS officials said that, as of April 1999, INS’ Statistics Branch was analyzing this more recent IDENT data. 
An INS official stated that the contractor currently evaluating the southwest border strategy for INS (see page 26 for a discussion of this evaluation) is also using recent IDENT data as part of its report. Data continue to be limited on the strategy’s effects on decreasing attempted reentries by illegal aliens. A March 1998 review of IDENT implementation on the southwest border by Justice’s OIG found that less than two-thirds of apprehended illegal aliens were being enrolled in IDENT. The OIG reported that, although IDENT and related biometrics technologies could be useful in many INS operations, “INS is not yet making consistent and effective use of IDENT as a tool for border enforcement.” The report said that (1) not all apprehended aliens were enrolled in IDENT, (2) INS was not entering the fingerprints of all deported aliens and known criminal aliens into the IDENT lookout database, and (3) INS needed to coordinate with the U.S. Attorneys for each district along the southwest border to establish a border enforcement and prosecution strategy that takes advantage of IDENT. The report also noted that “there were virtually no controls in place to ensure the quality of the data entered into the IDENT lookout database." In March 1999, an official with the Justice OIG told us that INS had satisfactorily responded to most of the report’s recommendations. For example, INS was entering a greater proportion of apprehended aliens into IDENT. To ensure deported and criminal aliens were in IDENT, INS established new procedures and criteria for placing individuals in the IDENT lookout database. INS instructed Border Patrol Sector chiefs to initiate contact with their local U.S. Attorney to inform them about the usefulness of IDENT in prosecuting recidivists and alien smugglers. Lastly, according to this official, INS added additional data integrity checks to ensure the accuracy of data entered into IDENT. INS plans to deploy IDENT systems nationwide. According to INS’ fiscal year 1998 Priorities and Performance Management Plan review, INS deployed IDENT at 194 additional locations in fiscal year 1998, exceeding its goal of 100 locations and bringing the total IDENT locations nationwide to 370. However, the use of IDENT has been uneven at these locations. During fiscal year 1998, the percentage of apprehended aliens enrolled in the IDENT system at locations nationwide varied from 17 to 90 percent, with an average of 85 percent, just short of INS’ targeted goal of 88 percent. Until IDENT is fully implemented, INS will not have complete estimates of the number of attempted reentries. The strategy anticipated a reduction in border violence as border control was achieved. INS officials told us that they anticipated that crime would decline in sections of the border where INS invested more enforcement resources. However, INS does not have data that would reliably measure the impact of the strategy’s implementation on border crime. During the first half of fiscal year 1998, the Border Patrol began contacting local law enforcement agencies in certain southern border locations to collect crime statistics to determine the impact that the national border control strategy has made on crime in target cities. The crime statistics from these locations identify crimes committed—such as homicide, rape, robbery, burglary—but not persons arrested or their immigration status. 
In its fourth quarter report on the 1998 Priorities, INS raised concerns about using these data as a measure of effectiveness because it could not determine the extent to which illegal aliens accounted for violent crimes along the border. Although the interim results of the strategy indicate that the strategy to date has made certain areas of the southwest border more difficult to breach, large numbers of illegal aliens continue to make their way into the United States. Given the intractability of the problem and the billions of dollars invested in border-control measures, it is important for INS to assess which aspects of the strategy are most effective. Similarly, if the strategy’s goals are not being achieved, INS should determine the reasons they are not. Thus, in our 1997 report we recommended that the Attorney General develop and implement a plan for a formal, cost-effective, systematic evaluation of the strategy. Pursuant to our recommendation, INS entered into agreements in September 1998 with three independent contractors to provide evaluative studies. The Executive Associate Commissioner for Policy and Planning wrote us that these agreements “will enable INS to develop a southwest border strategy evaluation and to initiate the analysis that fulfills these evaluation plans.” INS contracted with Advancia Corporation of Lawton, OK, to (1) design an evaluation strategy, (2) identify data needs and analytical approaches, and (3) conduct a study of the southwest border strategy. The contract is in the amount of $340,000 and the final report is due May 1, 1999. In April 1999, an official with INS’ Office of Policy and Planning said that the contractor was developing a formal analysis plan intended to assess the effectiveness of the southwest border strategy to date, as well as an evaluation design and analysis plan for continuing evaluation of the strategy. These results would, in part, be used to provide a baseline for future evaluation of the strategy. INS also contracted for $200,000 with CNA Corporation in Alexandria, VA, to study how illegal migration and alien and drug smuggling in the Caribbean affect the southern coast of the United States, including Puerto Rico. A final report on this project is due August 1, 1999. An additional contract for $60,000 was made with San Diego Dialogue, of the University of California, San Diego, to study issues related to the ports of entry. At the time of our review, this study was still under way. INS could provide us with no other information on the contractors’ progress. INS continued in fiscal year 1998 to implement its 1994 strategy by allocating additional personnel in accordance with the strategy, increasing the time Border Patrol agents spend on border enforcement activities, and attempting to identify the appropriate quantity and mix of technology and personnel needed to control the border. Data on the interim effects of the Attorney General’s strategy along the southwest border continue to be limited. The available data indicated that some of the changes anticipated by the strategy have occurred. For example, traditional routes of entry for illegal immigration, such as San Diego and El Paso, have shown significant declines in illegal alien apprehensions, while apprehensions in other areas have increased. 
While it does not appear that there has been an increase in the overall number of undocumented migrant deaths, some evidence exists that deaths resulting from attempted crossings in remote areas are increasing, which is an unintended consequence of the strategy. In addition, there is some evidence of increases in the number of attempted illegal entries at the ports of entry and increased smuggling fees. However, data are still lacking on some key aspects of the strategy, including the impact of the strategy on reducing attempted reentries of illegal aliens and reducing crime in border cities. As we recommended in our 1997 report, a comprehensive and systematic evaluation of the border strategy would go a long way towards providing information about the effectiveness of the strategy in reducing and deterring illegal entry. The evaluation studies that INS is funding, and INS’ plans to use findings from these studies as a baseline for future evaluation, could potentially begin to provide such needed information. However, information on these studies was too limited at this stage for us to assess whether they will provide the information needed to comprehensively and systematically evaluate the effectiveness of the strategy. On April 13, 1999, we obtained oral comments on a draft of this report from INS’ Assistant Director for Internal Audit and officials from the following INS offices: Border Patrol, Policy and Planning, Budget, General Counsel, Communications, and Inspections. INS officials gave us updated information on the independent testing of the IDENT database and INS plans for using IDENT to measure attempted reentries. We revised our draft to reflect that INS is now beginning to measure attempted reentries. INS officials generally agreed with the other information presented in this report. They also provided other technical comments that we incorporated into this report. We are sending copies of this report to The Honorable Janet Reno, Attorney General; The Honorable Doris Meissner, Commissioner of the Immigration and Naturalization Service; The Honorable Raymond Kelly, Commissioner of the Customs Service; The Honorable Jacob Lew, Director of the Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions concerning this report, please contact me on (202) 512-8777. This report was done under the direction of Evi L. Rezmovic, Assistant Director, Administration of Justice Issues. Other major contributors are listed in appendix II.

[Appendix I table, not fully recoverable here: authorized Border Patrol agent staffing by southwest border sector, showing total authorized agents and the increases authorized each year for fiscal years 1994 through 1998; the southwest border total was 7,231 authorized agents, including an increase of 1,000 authorized in fiscal year 1998. According to INS officials, the number of on-board agents as of September 30, 1993, is considered to be the fiscal year 1993 authorized Border Patrol staffing level for comparison purposes. Does not include eight agents deployed to Puerto Rico for the Attorney General's Anticrime initiative. INS did not collect data on these categories in fiscal year 1994.]

Appendix II, major contributors to this report: Michael P. Dino, Evaluator-In-Charge; Tom Jessor, Senior Evaluator; Nancy K. Kawahara, Senior Evaluator.
Pursuant to a legislative requirement, GAO provided information on the Attorney General's strategy for reducing and deterring illegal entry along the southwest border, focusing on: (1) the Immigration and Naturalization Service's (INS) progress in implementing the southwest border strategy during fiscal year (FY) 1998; (2) interim results of the strategy; and (3) actions taken on GAO's recommendation that the Attorney General develop and implement a plan for formal, cost-effective, comprehensive, and systematic evaluation of the strategy. GAO noted that: (1) INS is continuing to implement its southwest border strategy; (2) although some of the expected interim results continue to occur, available data do not yet answer the fundamental question of how effective the strategy has been in preventing illegal entry; (3) in FY 1998, INS' Border Patrol transitioned into the second phase of its four-phased approach, which called for increasing Border Patrol agents and resources to sectors in Arizona and South Texas; (4) INS allocated 740 of 1,000 new agent positions authorized in FY 1998 to sectors in these locations; (5) INS also added 18 miles of fencing in California and Arizona, increased both the proportion and total amount of time Border Patrol agents at the southwest border spent collectively on border enforcement, and deployed additional technologies such as remote video surveillance cameras; (6) INS was testing a resource and effectiveness model to help it determine the right mix of staffing, equipment, and technology for all of its Border Patrol sectors; (7) to complement the Border Patrol's efforts between ports of entry, INS Inspections added 179 inspectors to southwest land-border ports of entry in FY 1998 and undertook training and enforcement efforts in conjunction with other agencies located at these ports; (8) INS also began testing an inspections program designed to measure how well it conducted inspections of travellers; (9) although evaluative data on the overall impact of the strategy continue to be limited, available data suggested that several anticipated interim effects of the strategy have occurred; (10) the southwest border ports of entry inspectors apprehended an increased number of persons attempting fraudulent entry and there were reports of higher fees being charged by smugglers, which INS said indicated an increased difficulty in illegal border crossing; (11) available information on the interim results of the strategy does not provide answers to the most fundamental questions surrounding the INS' enforcement efforts along the southwest border; (12) pursuant to GAO's 1997 report recommendation to conduct a comprehensive evaluation, INS contracted with private research firms in September 1998 for evaluative studies; (13) as of April 1999, according to INS, one contractor was working on an evaluation design and analysis plan; (14)
INS could provide GAO with no other information on the contractor's progress; and (15) consequently, GAO does not know to what extent the contractor's evaluation plan will provide the information needed to determine the extent to which the Attorney General's strategy has been effective.
The growing demand throughout the world for wildlife and wildlife parts and products has created a market in which commercial exploitation has threatened certain wildlife populations. The oriental medicine trade, for example, has created an illicit market in bear gall bladders, rhinoceros horns, and parts of other threatened and endangered species. The United States is the world’s largest wildlife trading country, importing an average of $773 million and exporting about $256 million in such trade each year since 1989. Although the full extent of illegal trade is not known, the value of illegal wildlife trade into and out of the United States is estimated to be between $100 million and $250 million annually. The mission of the Department of the Interior’s Fish and Wildlife Service (FWS) is to conserve and enhance fish and wildlife populations and their habitats for the continuing benefit of the American people. Enforcing and administering laws and treaties governing the importation and exportation of fish and wildlife species, the animals’ parts, and products made from the animals or their parts is an important and necessary means by which FWS carries out its mission. The FWS Division of Law Enforcement, through its wildlife inspection program established in 1975, helps ensure that wildlife shipments entering or leaving the United States comply with wildlife trade laws and treaties. The FWS Division of Law Enforcement, headquartered in Arlington, Virginia, provides general direction and develops policy for the seven FWS regions that oversee the wildlife inspection program. Each regional office is administered by a Regional Director, who is responsible for all of FWS’ activities within an assigned geographical area and who manages the inspection program with the help of an Assistant Regional Director for Law Enforcement. Trade in wildlife and wildlife parts and products generally involves shipments that consist of packages, crates, or other containers that are (1) transported by air, sea, and land carriers; (2) carried by individuals; or (3) delivered through the mail. To carry out its responsibilities to monitor trade in wildlife and intercept illegal shipments of federally protected wildlife, the Division of Law Enforcement maintains a force of 74 wildlife inspectors, whose duties include (1) examining documentation that accompanies shipments, (2) physically inspecting the contents of shipments, (3) properly handling seized property, (4) occasionally handling certain aspects of the violation investigation process, and (5) fulfilling administrative duties associated with the inspection and clearance of wildlife imports and exports. FWS inspectors are stationed at 11 designated ports of entry and at 14 of the over 300 nondesignated (border or special) ports located throughout the United States and its territories where wildlife shipments occur. By designating certain ports of entry for the importation and exportation of wildlife, FWS has attempted to concentrate wildlife shipments at a few locations to enable more efficient and effective service. The majority of wildlife shipments are processed through the 11 designated ports. Wildlife shipments processed through any of the nondesignated ports must meet certain criteria or be accompanied by a special FWS permit. FWS’ data show that nationwide, an average of almost 77,000 shipments were processed annually during the past 5 fiscal years. 
The FWS regions and the location of FWS wildlife inspectors at both designated and nondesignated ports are shown in figure 1.1. The Division of Law Enforcement also employs a force of about 225 special agents who are criminal investigators responsible for protecting domestic and international fish, wildlife, and plant resources. They maintain liaison with all mutually interested federal, state, and local enforcement authorities and investigate suspected violations of federal wildlife laws. As part of their broad responsibilities, these agents work closely with wildlife inspectors in enforcing and administering federal laws and international treaties governing the importation and exportation of wildlife and wildlife parts and products. The agents are located in field offices throughout each FWS region. In addition to its field inspectors and special agents, the Division of Law Enforcement maintains a desk officer for inspections in its Branch of Investigations. This desk officer is responsible for, among other things, (1) monitoring international wildlife trade to determine trends and (2) representing FWS in interagency negotiations and discussions to develop strategies for coordinated enforcement of FWS-administered laws and regulations. Funding for FWS’ wildlife inspection program is derived from two primary sources—annual appropriations and license and inspection fees collected from wildlife importers and exporters. Import-export licenses, which currently cost $125 annually, generated over $270,000 in fiscal year 1993. Persons who import or export less than $25,000 in wildlife annually, common carriers and museums that import or export wildlife for research or educational purposes, and certain others are exempt from the licensing requirements. The approximately 2,165 holders of import-export licenses must also pay a $25 inspection fee for each shipment that is imported or exported at a designated port of entry. These fees generated about $1.98 million in fiscal year 1993. Appropriations and user fees for the wildlife inspection program for fiscal years 1989 through 1993 are shown in table 1.1. As can be seen, in constant dollars over the 5-year period, appropriations have risen over 235 percent—partially as a result of moneys provided by the Congress for specific purposes, such as establishing designated ports in Portland, Oregon, and Baltimore, Maryland, and reestablishing a full-time wildlife inspector position in Philadelphia, Pennsylvania. User fees have remained relatively stable during this period. FWS relies on the cooperation of other federal agencies in fulfilling its mission of monitoring wildlife trade. The Department of the Treasury’s Customs Service is the primary agency responsible for the inspection and clearance of goods imported into the United States. In this capacity, the Customs Service is the first line of defense against illegal wildlife shipments. Before it clears a wildlife shipment at a designated port where an FWS wildlife inspector is present, Customs refers the shipment to FWS for inspection and clearance. At ports that do not have FWS inspectors, Customs inspectors can clear wildlife shipments or take other appropriate action. FWS also works closely with and coordinates its activities with other federal agencies that have jurisdiction at ports of entry. 
These agencies include the Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS), which is primarily responsible for inspecting shipments of plants and animals entering or leaving the United States and preventing the introduction of pests and plant and animal diseases into the United States; the Department of Commerce’s National Marine Fisheries Service, which is responsible for protecting certain marine mammals under the Marine Mammal Protection Act and other laws regulating the importation of marine wildlife; the Department of Justice’s Immigration and Naturalization Service, which is primarily responsible for enforcing the immigration laws of the United States; and the Department of Transportation’s Coast Guard, which works with other agencies to (1) enforce the laws that pertain to the protection of living and nonliving resources and (2) suppress smuggling and illicit drug trafficking on the high seas. FWS relies on the Endangered Species Act of 1973, as amended (16 U.S.C. 1531-1544), and the Lacey Act, as amended (18 U.S.C. 42; 16 U.S.C. 3371-3378), as the primary domestic legislation to control wildlife imports and exports. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) is the major international agreement for the control of trade in wildlife and plants. In the United States, CITES is implemented through the Endangered Species Act. The United States also has treaties with four countries for the protection of migratory birds. The Migratory Bird Treaty Act, as amended (16 U.S.C. 703-712), which implemented these treaties, prohibits the importation of migratory birds captured or killed illegally in their country of origin. (See app. I for a brief description of these and related laws.) As discussed earlier in this chapter, FWS special agents work very closely with the wildlife inspectors to enforce and administer federal laws and treaties governing the importation and exportation of wildlife and wildlife parts and products. In 1991, we reported on law enforcement activities conducted in six FWS regions by special agents. Our report stated that because the Division of Law Enforcement did not have reliable information on the effectiveness of its special agents’ enforcement efforts or the magnitude of suspected crimes against wildlife that were not being investigated, it had not fared well in the allocation of substantial increases in FWS’ overall staffing and funding. As a result, the Division’s special agents, who are responsible for investigating cases involving (1) large-scale selling or commercialization of wildlife and wildlife parts, (2) crimes against threatened or endangered species, (3) illegal importation of wildlife for commercial purposes, and (4) illegal taking of migratory birds, were unable—because of a lack of sufficient resources—to perform their basic responsibilities. To be better able to periodically assess the extent of crime against wildlife and justify its funding and staffing needs and ensure that its special agents are able to perform their basic responsibilities, we recommended that FWS record (1) all instances of suspected violations coming to its attention, including those that may not be investigated; (2) the agency’s handling of suspected violations; and (3) the outcomes of the investigations.
We also recommended that FWS then use this information to (1) periodically assess the extent of suspected crimes against wildlife, (2) provide realistic estimates of staff and funds needed to adequately address the problem, and (3) include the estimates in annual budget requests. Although the agency agreed that it needed to improve its documentation of crimes against wildlife, FWS disagreed that better documentation of reported violations would provide meaningful data to justify increased funding or staffing for law enforcement. Because of the documented growth of illegal commercial trade in wildlife and wildlife parts and products and the similarities in the work locations of FWS and Customs Service inspectors, the Chairman of the House Committee on Merchant Marine and Fisheries and Representative Richard H. Lehman requested that we determine the (1) effectiveness of FWS’ wildlife inspection program, (2) potential impact of the North American Free Trade Agreement (NAFTA) on wildlife trade and the inspection of wildlife shipments, and (3) advantages and disadvantages of moving the wildlife inspection program from FWS to the Customs Service. We visited the designated ports of entry of Los Angeles, California, and Miami, Florida, and the nondesignated ports of entry of San Diego, California, and El Paso, Texas. Los Angeles and Miami are high-volume, worldwide import and export centers, and San Diego and El Paso are centers of import and export trade between the United States and Mexico. Our review focused on the field activities of FWS wildlife inspectors. We reviewed documentation on the activities of and the resources devoted to the wildlife inspection program for fiscal years 1989 through 1993, including information from the FWS Law Enforcement Management Information System (LEMIS). Although the information in LEMIS on the wildlife inspection program’s activities is the best information available, FWS officials told us that it is often inaccurate and incomplete and, therefore, may understate the total volume of imports and exports processed by FWS’ wildlife inspection program. We also obtained information on the program’s ability to deter illegal wildlife trade from FWS headquarters, its seven regional offices, and wildlife conservation organizations, including the Wildlife Management Institute, the National Audubon Society, the World Wildlife Fund, and the National Fish and Wildlife Foundation. To determine the potential impact of NAFTA on wildlife trade and the inspection of wildlife shipments, we reviewed documentation and available studies on the agreement’s requirements and possible outcomes. We also spoke with officials from FWS, APHIS, and the Centers for Disease Control’s Public Health Service as well as wildlife conservation groups about (1) the agreement’s impacts on the numbers and types of wildlife shipments that might be imported or exported and (2) their plans to deal with such impacts. To determine the advantages and disadvantages of moving the wildlife inspection program from FWS to the Customs Service, we interviewed FWS headquarters and regional officials and wildlife inspectors; Customs Service officials located at headquarters and the above designated and nondesignated ports; and APHIS and Public Health Service officials familiar with the program. We also obtained views on the advantages and disadvantages of such a transfer from wildlife conservation and trade organizations. 
As a part of our review, we administered a questionnaire to 72 of the 74 FWS wildlife inspectors and the wildlife desk officer assigned to the program at the end of fiscal year 1993 to obtain their perceptions of the program, the impact of NAFTA on their work, and the idea of moving the wildlife inspection program from FWS to Customs. One inspector left FWS prior to the mailing of our questionnaire, and we did not send a questionnaire to a wildlife inspector trainee who was hired in late fiscal year 1993. Sixty-three of the wildlife inspectors responded to our questionnaire, a copy of which—including a compilation of the wildlife inspectors’ responses—is provided in appendix II. For the same purpose, we also contacted 20 FWS special agents who were identified by senior resident agents (supervisors) as having conducted investigations resulting from violations detected by wildlife inspectors. Our review was conducted between February 1993 and November 1994 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from the Department of the Interior. These comments are summarized and evaluated in chapter 5 and reproduced in appendix III. We also discussed the contents of this report with officials in the Department of the Treasury’s Customs Service, who provided several technical clarifications, which have been incorporated into the report. At a time of complex laws and regulations controlling wildlife trade and the possibility of increased shipments of imported and exported wildlife as nations evolve into a world economy, FWS is limited in its ability to monitor trade in wildlife and to intercept illegal imports and exports of wildlife. Under the current program, inspection rates at the FWS ports of entry vary considerably; a majority of wildlife shipments receive no physical inspection. As a result, many illegal imports or exports of wildlife may evade detection. Despite recent increases in funding for the wildlife inspection program, FWS officials report that many ports are without adequate wildlife inspection coverage, and inspectors at some locations cite a need for safety equipment and other resources, such as a better information system, reference books, and computer and related equipment, to more effectively perform their jobs. Wildlife inspectors are often kept from conducting physical inspections of wildlife shipments because administrative tasks limit the time they have available to conduct inspections. In addition, LEMIS does not provide accurate, timely information about the inspection program that would aid wildlife inspectors in carrying out their responsibilities and enable FWS officials to make informed decisions about any staffing or other changes needed at specific ports of entry. Furthermore, a lack of prosecutions, coupled with a lack of significant penalties and fines imposed for violations that are detected by the wildlife inspection program, does little to encourage compliance with wildlife trade laws and treaties. Because of higher priorities and staffing constraints within the Department of the Interior’s Office of the Solicitor and the Department of Justice’s U.S. Attorney Offices—the offices responsible for prosecuting wildlife trade violations—the most frequent punitive measure involves the forfeiture of the illegal wildlife the violators were attempting to move into or out of the United States. These violators tend to view such forfeitures simply as a cost of doing business.
One of the ways FWS attempts to monitor the trade in wildlife and intercept illegal imports and exports of wildlife is by conducting physical inspections. However, most wildlife shipments are not physically inspected, and it is likely that many illegal wildlife shipments are evading detection. Although the full extent of the illegal trade in wildlife imports and exports is not known, such trade appears to be extensive as judged by various studies and other assessments. For example, in 1992, TRAFFIC USA estimated that global wildlife trade (excluding timber and fisheries products) was valued at a minimum of $5 billion to $8 billion per year and that as much as $2 billion of this trade may have been illegal. FWS law enforcement officials we contacted during our review pointed out that it is impossible to determine the extent of illegal wildlife trade that is occurring. However, they provided a rough estimate that FWS is detecting less than 10 percent of the violations associated with declared shipments (those presented to FWS for clearance) and that the percentage is much lower for undeclared shipments. Many FWS wildlife inspectors share these views. For example, 44 (about 70 percent) of the 63 inspectors who responded to our questionnaire believed that an illegal shipment would be able to escape detection over 50 percent of the time. The inspectors identified several means, including containerized shipments, passenger traffic at airports, and international mail, by which illegal shipments of wildlife and wildlife parts and products can be concealed and go undetected because of inadequate inspection coverage. Law enforcement officials from the seven FWS regional offices responsible for managing the wildlife inspection program agreed that FWS is detecting very little illegal wildlife trade. For example, one FWS supervisory special agent estimated that wildlife inspectors are detecting only about 1 to 3 percent of the illegal wildlife shipments carried by passengers and 1 to 10 percent of illegally imported or exported wildlife in declared cargo shipments. Undeclared illegal shipments of wildlife have an even higher probability of going undetected, according to FWS officials. Most wildlife shipments are not physically inspected, which is a problem that has been recognized by FWS and others for years. For example, the U.S. Fish and Wildlife Service Division of Law Enforcement Briefing Materials, 1991 Edition, stated that FWS is able to inspect only a minute percentage of the containerized shipments that enter this country annually. In February 1992, the Director, TRAFFIC USA, testified before a congressional subcommittee that fewer than 5 percent of all wildlife shipments are physically inspected, leaving most wildlife imports and exports completely unchecked. Our analysis indicates that the percentage of wildlife shipments that FWS inspected during the 5-fiscal-year period from 1989 through 1993 averaged about 23 percent. As a monitoring and enforcement tool, physical inspections are important. Their purpose is to determine if the species and quantity of wildlife contained in a shipment are the same as those specified on its declaration documents.
The wildlife inspectors base their decisions about how many shipments, and how much of any given shipment, to inspect on a variety of factors that include the amount of time they have available; number of shipments awaiting inspection; contents of the shipments—for example, live wildlife; violation histories relative to different types of shipments; violation histories of importers-exporters; and countries from or to which shipments are being made. Before fiscal year 1994, FWS’ wildlife inspection program had no established inspection goals. However, during the course of our review, the Division of Law Enforcement set what it considered to be attainable inspection goals for the program. These goals require that, beginning in fiscal year 1994, FWS physically inspect at least 25 percent of all shipments presented for import or export at the 11 designated ports of entry, and at least 50 percent of all shipments at nondesignated ports of entry where it has assigned wildlife inspectors. The difference in the goals for the two types of ports is primarily the result of the higher volume of shipments at designated versus nondesignated ports. According to the Deputy Chief of the Division of Law Enforcement, these goals are based on an average of the percentage of shipments that have been inspected nationwide over the past several years. Against this backdrop, figure 2.1 shows the total numbers of wildlife shipments processed and physically inspected by FWS over the 5-fiscal-year period from 1989 through 1993, along with the percentage of shipments inspected. The information shown in figure 2.1 and the other figures presented in this section is based on import-export data from LEMIS. As pointed out in chapter 1, the information in LEMIS on wildlife inspection activities is, according to FWS officials, often not accurate or timely and, therefore, more than likely understates the total number of imports and exports processed and physically inspected by FWS wildlife inspectors. The number of shipments processed during the 5-year period averaged about 77,000, ranging from 86,909 in fiscal year 1989 to 71,661 in fiscal year 1993. The number of shipments processed during the past 3 fiscal years has been fairly consistent, hovering close to 72,000. The overall rate of conducting physical inspections of shipments ranged from almost 20 percent in fiscal year 1991 to about 27 percent in fiscal year 1993. For the 5-year period, the inspection rate was about 23 percent. Reports on FWS’ wildlife inspection activities disclose, however, that the number of shipments that were physically inspected at each designated and nondesignated port of entry during the period varied considerably. Figure 2.2 shows the percentage of shipments that were physically inspected over the 5-fiscal-year period at each of the 11 ports of entry designated by FWS. The figure shows, for example, that 7 percent of the shipments processed in Miami, Florida, were inspected, in contrast to a 52-percent inspection rate in Honolulu, Hawaii. As the figure shows, the ports with the higher inspection rates are generally those that process the fewest shipments, and the ports with the lower inspection rates are generally those that process the most. Overall, the inspection rate for the 11 designated ports of entry was about 18 percent.
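The relationship between these port-level inspection rates and the fiscal year 1994 goals comes down to simple arithmetic: shipments physically inspected divided by shipments processed, compared with the 25-percent goal for designated ports or the 50-percent goal for staffed nondesignated ports. The short Python sketch below illustrates that calculation; the shipment counts, record layout, and the nondesignated example are hypothetical figures chosen only to echo the rates discussed above, not actual LEMIS data.

# Illustrative sketch only: compares hypothetical port-level shipment counts
# with FWS' fiscal year 1994 inspection-rate goals (25 percent at designated
# ports, 50 percent at nondesignated ports with assigned inspectors).
GOALS = {"designated": 0.25, "nondesignated": 0.50}

# (port, port type, shipments processed, shipments physically inspected)
# Counts are rounded, hypothetical figures, not LEMIS data.
PORTS = [
    ("Miami, FL", "designated", 20000, 1400),             # about the 7 percent rate cited above
    ("Honolulu, HI", "designated", 2500, 1300),           # about the 52 percent rate cited above
    ("a nondesignated port", "nondesignated", 600, 330),  # hypothetical 55 percent rate
]

def inspection_rate(processed, inspected):
    """Return the share of processed shipments that were physically inspected."""
    return inspected / processed if processed else 0.0

for name, port_type, processed, inspected in PORTS:
    rate = inspection_rate(processed, inspected)
    status = "meets" if rate >= GOALS[port_type] else "falls short of"
    print(f"{name}: {rate:.0%} inspected; {status} the "
          f"{GOALS[port_type]:.0%} goal for {port_type} ports")

A calculation of this kind describes only the volume of inspections; as discussed later in this chapter, it says nothing about how extensive the inspections were or what they found.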
Figure 2.3 shows the percentage of shipments that were physically inspected during the 5-fiscal-year period at 13 of the nondesignated ports of entry where FWS wildlife inspectors were assigned during the period. The rates of inspections at nondesignated ports varied even more over the same 5-fiscal-year period than did the rates at designated ports. For example, almost 10 percent of the shipments processed in Tampa, Florida, were physically inspected, while just over 80 percent of the shipments processed in San Juan, Puerto Rico, were inspected. As was the case with the designated ports, figure 2.3 shows that the nondesignated ports with the higher inspection rates are generally those with the fewest shipments, while the nondesignated ports with the lower inspection rates are generally the ones that are the busiest. Overall, the inspection rate for the 13 nondesignated ports with assigned inspectors approached 40 percent. Not only do the number of shipments processed and the percentage of shipments inspected vary from port to port, but the number of inspections performed per wildlife inspector varies as well. For example, the 5-fiscal-year average of the number of inspections performed per inspector at designated ports ranged from a low of 76 in Miami, Florida, to a high of 616 in Honolulu, Hawaii. For nondesignated ports, the average number of inspections performed per inspector per year ranged from a low of 30 in Denver, Colorado, to a high of 778 in El Paso, Texas. During the 5-fiscal-year period we examined, the inspection rates at over half of both the designated and nondesignated ports exceeded the recently established inspection rate goals. The variances in the inspection rates, the number of inspections performed per inspector, and the fact that some ports that processed fewer shipments exceeded the goals by significant margins suggest the possibility of an uneven allocation of resources within the program. When asked why such variances exist, the Deputy Chief of the Division of Law Enforcement could not identify the specific reasons for the variances, but he did tell us that it is sometimes simply a matter of the various ports giving inspections differing emphasis and priority. Furthermore, FWS regional officials told us that in addition to the inspection rates, they consider other factors in determining a port’s performance, such as the number and type of shipments processed, the extensiveness of the inspections that are performed, the number of seizures, and the effectiveness of relationships established with other inspection agencies. According to a number of sources, FWS’ wildlife inspection program is adversely affected by the limited number of inspectors and other resources that FWS is able to devote to the program. The problem stems from the fact that the wildlife inspection staff of 74 cannot adequately process and inspect the tens of thousands of wildlife shipments that flow through the ports annually. This situation has been discussed in a number of reports over the past several years. Reports issued by the FWS Law Enforcement Advisory Commission and the FWS Law Enforcement Functional Analysis Team in 1990 and 1991, respectively, each identified shortfalls in the staffing and funding devoted to FWS’ wildlife inspection program as adversely affecting the agency’s ability to accomplish its overall inspection mission.
The National Fish and Wildlife Foundation, in reports on FWS’ law enforcement efforts in 1988 and again in 1993, also identified inadequate program staffing as a problem. In addition, the Director, TRAFFIC USA, testified in February 1992 that the existing number of special agents and wildlife inspectors was wholly insufficient to effectively enforce U.S. wildlife laws and CITES requirements. As stated in chapter 1, user fees collected by FWS’ wildlife inspection program have remained relatively stable during fiscal years 1989 through 1993. Appropriations, on the other hand, have increased more than 235 percent during this period—rising in 1993 constant dollars from $.84 million to $2.81 million. A large portion of the increases in appropriations, however, was used to establish ports in Portland, Oregon, and Baltimore, Maryland; to reestablish a full-time inspector position in Philadelphia, Pennsylvania; and to increase the inspection force at the port of Los Angeles, California. FWS officials told us that the remaining funding increase did not keep pace with increases in the program’s salary and operating costs and that the inspection program continues to need additional staffing, funding, and other resources to be more effective. Although reports such as the ones just discussed have pointed out shortfalls in the staffing levels of the FWS inspection force over the past several years, the number of FWS inspectors has remained relatively constant. For example, in 1991 the Law Enforcement Functional Analysis Team identified a need for 30 additional inspectors and recommended a total inspection force of 100. However, despite increased funding for the program, only eight inspectors have been added to the wildlife inspection staff since 1989—increasing the number of inspectors from 66 in fiscal year 1989 to 74 in fiscal year 1993. Law enforcement officials we contacted at each of the seven FWS regions estimated that they collectively needed 43 additional inspectors to staff the ports of entry, resulting in a total inspection force of 117. Table 2.1 shows where FWS inspectors are currently located and identifies where regional officials believe they need additional inspectors. The ports of entry identified by the regional officials are generally ports (1) that FWS expects to show an increase in wildlife trade, (2) that have not met the recently established inspection goals, (3) that do not meet an FWS staffing criterion that calls for three inspectors per designated port, or (4) that currently have no assigned wildlife inspector. As pointed out in the various reports cited, the shortfalls in staffing affect FWS’ ability to conduct wildlife shipment inspections. This effect is illustrated by at least one instance in which FWS increased its inspection staff at the Los Angeles, California, port of entry from 5 in fiscal year 1990 to 12 in fiscal year 1993. As a result of this increase, the number and percentage of inspections occurring at the Los Angeles port more than tripled during this period—increasing from 1,528, or almost 11 percent of the shipments processed in fiscal year 1990, to 4,974, or about 34 percent of the shipments processed in fiscal year 1993. FWS’ limited inspection workforce and budgetary restrictions on overtime mean that FWS ports of entry are frequently without wildlife inspection coverage.
For example, FWS inspectors at the New York City, New York/Newark, New Jersey, port of entry, through which 29 percent of the nation’s known wildlife shipments pass, do not work late evenings, nights, or on weekends, unless a commercial broker reimburses FWS for the inspectors’ time. In addition, because of the limited number of staff, the majority of the nondesignated FWS ports do not have wildlife inspectors assigned. Eight of the ports identified by FWS regional officials as needing staff are nondesignated ports that currently have no inspectors. Even those nondesignated ports that do have assigned inspectors have very limited staff—usually one inspector—which leaves these ports with no wildlife inspection coverage when the inspectors are on leave, in training, or otherwise not on the job. When FWS inspectors are not present, FWS must rely on staff from Customs or another federal agency to clear any wildlife shipments received. Although FWS does receive such assistance, staff in these agencies have their own responsibilities and may often lack the expertise and/or inclination to vigorously pursue wildlife trade violations. Almost half of the wildlife inspectors responding to our questionnaire believe that without an FWS presence, illegal wildlife shipments are more likely to go undetected. Officials from the Customs Service and other federal agencies acknowledge that an FWS presence influences the degree to which they scrutinize, detain, and report suspicious wildlife shipments. This influence is illustrated by FWS’ experience at the San Diego, California, port of entry. Even though they cannot document the actual numbers of violations reported to them by other federal agencies before an FWS inspector was assigned to the San Diego port in 1986, FWS regional officials responsible for the port estimated the number to have been about 30 per year. Since an FWS inspector was assigned to that port, the number of violations reported by FWS, including those detected by other federal agencies, has jumped tenfold, to more than 300 per year. In fiscal year 1994, FWS—as a part of the federal government’s downsizing efforts—reduced by 21 the number of full-time-equivalent positions allocated to the Division of Law Enforcement. The Deputy Chief of the Division told us during our field work that because staffing decisions have generally been delegated to the FWS regional offices, he did not know the effect, if any, this reduction might have on the size of the wildlife inspection force. FWS has not, in all cases, provided its inspection staff with the basic equipment and other resources needed to effectively perform their jobs. In fiscal years 1993 and 1994, FWS funded the inspection program at a rate of $55,000 per inspector, an amount that was to pay each inspector’s salary and operating costs. According to figures provided to us by FWS regional officials, however, salary and benefit costs consume most of the funding provided and leave little to pay for such things as safety equipment (used when inspecting shipments of live wildlife), reference books, computers and related equipment, travel and transportation, office space, and uniforms. One port of entry reported to us that it is provided only $2,000 annually per inspector by its region to pay for such things as those listed above, and it estimated that double that amount was needed.
Of the 63 inspectors responding to our questionnaire, 41 (65 percent) said that they did not have adequate resources with which to effectively do their work, and they identified safety equipment, reference books, administrative staff, and an improved management information system as the resources they most needed. Because of the way that some regions account for their costs, the Division of Law Enforcement has been unable to determine the level of funding needed by the wildlife inspection program. Regional officials we contacted during our review, however, provided us with estimates of the amount needed annually per inspector for both salary and operating costs; these estimates ranged from $60,000 to $80,000, depending on the region. Too little safety equipment and too few reference books have affected the ability of some FWS inspectors to perform inspections. Safety equipment is particularly needed when inspectors are to inspect live wildlife, which receive the highest priority for physical inspections. Live wildlife are sometimes carriers of transmittable diseases and/or capable of physically harming inspectors. Therefore, inspectors are instructed to handle live wildlife with care by avoiding direct physical contact, when possible, and by using safety equipment, such as breathing filters, eye protection, and gloves, when the handling of live wildlife is necessary. However, 22 (35 percent) of the 63 inspectors who responded to our questionnaire identified safety equipment as one of the resources they need, but do not have, to effectively perform their jobs. Wildlife inspectors we spoke with told us that such equipment is often not available for them to use and that they therefore often allow live wildlife shipments to pass through ill-equipped ports of entry without inspections by FWS. For example, according to the Assistant Regional Director for Law Enforcement in FWS Region 4, wildlife inspectors at the port of Miami, Florida, did not perform physical inspections of nonhuman primates—animals known to carry pathogens such as Ebola and tuberculosis, which are dangerous to humans—because the safety equipment necessary to inspect the animals was not available. However, on the basis of the preliminary results of an evaluation of the risks associated with such inspections, the region recently acquired the necessary safety equipment and began conducting physical inspections of such shipments in November 1994. The ability to identify whether a species is endangered or threatened is often crucial to determining the legality of a given shipment. However, 24 (38 percent) of the 63 respondents cited wildlife identification reference books as a resource that they need, but do not have, to effectively perform some inspections. Thirty-eight percent of the inspectors responding to our questionnaire also identified the need for administrative and support personnel to help them with their work. Regional officials told us that because of too few administrative and support personnel at many ports of entry, inspectors must perform administrative duties that keep them from physically inspecting more shipments. In fact, of the 63 respondents to our questionnaire, only 13 (21 percent) reported that they spend more than 50 percent of their time doing physical inspections; much of their remaining time is spent performing administrative duties, such as entering shipment data into LEMIS.
These results are supported by the findings of an FWS wildlife inspector assigned to the port of Miami, who analyzed his work experience over a period of 4 months. The inspector found that 49 percent of his time was spent on administrative and telephone duties, making these duties the largest consumer of his time. Another 23 percent of his time was spent reviewing declarations and the associated paperwork, stamping FWS and Customs paperwork, and filing completed wildlife shipment entries. In contrast, the processing of seizures of illegal wildlife and physical inspections consumed only 6 percent and 4 percent of his time, respectively. In our 1991 report on the law enforcement activities of FWS special agents, we stated that the Division of Law Enforcement was at a disadvantage in the yearly competition within FWS for funding and staffing because it did not have the information it needed to develop good estimates of the magnitude of the problems faced by its special agents or the resources needed to address these problems. Our review of the wildlife inspection program—another component of the Division of Law Enforcement—shows that the lack of timely and accurate information continues to be a problem and that an improved management information system is needed that would enable (1) wildlife inspectors to more effectively perform their jobs and (2) FWS management to make more informed judgments about the program’s performance and resource needs. Of the inspectors responding to our questionnaire, 26 (41 percent) identified an improved management information system as a resource they need to effectively perform their jobs. FWS regional officials and inspectors alike told us that the existing system, LEMIS, does not provide timely and reliable information on the level of wildlife trade, violations detected, and the fines and penalties assessed for these violations. We were told that because of the inspectors’ workload, the entry of wildlife shipment and inspection data into LEMIS is often not timely or accurate, most likely resulting in an understatement of the data in the system. Inspectors sometimes do not enter information into LEMIS for as long as 6 months after they process a shipment, and FWS performs little or no quality checking of the information entered in the system to ensure its accuracy. Many regional officials told us that LEMIS does not provide them with the information necessary to manage or evaluate the wildlife inspection program in terms of (1) wildlife trade trends; (2) traders who repeatedly violate wildlife laws and treaties; or (3) importers-exporters who “shop” the different ports, using those ports that they believe will give their shipments less scrutiny. According to some of these officials, the only reports readily available from LEMIS are monthly case management reports, which summarize and track suspected violation cases under investigation. Reports containing other information must be requested from headquarters and sometimes take months to receive. FWS recognizes that the quality of the LEMIS data on the wildlife inspection program needs improvement and has plans to improve the data’s timeliness and accuracy. For example, the Division of Law Enforcement is developing a central computer data entry office that will relieve wildlife inspectors from the duties of entering shipment data into LEMIS.
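The timeliness problem described above lends itself to a simple automated check. The following minimal Python sketch flags LEMIS-style records that were entered into the system long after the shipment was processed, or never entered at all; the record layout, declaration numbers, dates, and the 6-month threshold are assumptions for illustration and do not represent FWS’ actual system.

# Illustrative sketch only: flags shipment records whose entry into the system
# lagged the date the shipment was processed by more than about 6 months, the
# lag described above, or that were never entered at all.
from datetime import date

LATE_THRESHOLD_DAYS = 180  # roughly the 6-month lag cited by FWS officials

# (declaration number, date shipment was processed, date entered into system)
# Declaration numbers and dates are hypothetical examples.
RECORDS = [
    ("93-000123", date(1993, 1, 4), date(1993, 1, 11)),
    ("93-000456", date(1993, 2, 17), date(1993, 9, 1)),  # entered about 6.5 months late
    ("93-000789", date(1993, 3, 2), None),                # never entered
]

def needs_followup(records):
    """Return declaration numbers entered late or not at all."""
    flagged = []
    for decl_no, processed, entered in records:
        if entered is None or (entered - processed).days > LATE_THRESHOLD_DAYS:
            flagged.append(decl_no)
    return flagged

print("Entries needing follow-up:", needs_followup(RECORDS))

Any such check would, of course, depend on the dates on which shipments were processed being recorded reliably in the first place.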
The Division is also instituting several quality control measures designed to ensure the accuracy of the information entered into the system, such as periodically comparing the data entered into LEMIS with the information contained on declarations. FWS has recently proposed revisions to its user-fee policies and rates that would provide additional funding for the program. According to FWS’ Final Report on Import/Export User Fees, which was issued in January 1993, these revisions, if implemented, would double the amount of fees currently generated by the wildlife inspection program and allow FWS to recover the full costs of services for all commercial import-export activities provided. The report acknowledged that more than half of the costs of the wildlife inspection program were funded from congressional appropriations. The report stated that the proposed increases in user fees would free the appropriated funds for activities such as increasing the number of special agents and wildlife inspectors and creating a central computer data entry office to enter wildlife import-export data into LEMIS. The Deputy Chief of the Division of Law Enforcement agreed that the funds from increasing user fees could be used to provide some resources needed by the wildlife inspection program, but he stated that FWS’ current staffing ceilings would prevent such funds from being used to increase the size of the inspection force. FWS officials have expressed concern about the lack of penalties and fines that are being assessed for violations detected by the wildlife inspection program. According to those we spoke with, this lack not only does little to instill in potential violators the need to voluntarily comply with the laws and treaties governing wildlife trade, but also fosters cynicism and low morale among the inspectors. They attributed the situation primarily to higher priorities and staffing constraints within the Department of the Interior’s Office of the Solicitor and the Department of Justice’s U.S. Attorney Offices, which are responsible for prosecuting wildlife trade violations. As a result, many violations detected by the program have resulted only in the abandonment or forfeiture of the wildlife and wildlife parts and products that were being illegally shipped. Some violators continue in business even after being found guilty. Even though federal statutes and implementing regulations provide that the importer-exporter licenses of those who violate wildlife laws and treaties can be suspended or, in the case of willful violations, revoked, such actions are rare. Violations detected by the wildlife inspectors are handled in several different ways. For minor violations, FWS may offer the importer or exporter the opportunity to voluntarily abandon the illegal wildlife that was being shipped. If the offer is accepted, FWS documents and closes the violation as an “abandonment case.” However, when FWS considers the violation to be more significant, it charges the individual with the violation and refers the case either to the Department of the Interior’s Office of the Solicitor for civil forfeiture and/or monetary penalty or to one of the Department of Justice’s U.S. Attorney Offices for criminal prosecution that could result in a fine and/or jail. FWS does not maintain a record of the nature and disposition of all violations of wildlife trade laws and treaties detected by its wildlife inspection program.
However, we were able to obtain information relating to Endangered Species Act violations, which we were told make up a large percentage of the total violations detected by the inspection program. Our analysis of this information shows that of 1,760 individuals and companies charged with 2,896 endangered species violations that regional solicitors or the courts handled during fiscal years 1989 through 1993, only about 25 percent of the violators received penalties and/or fines for their infractions and a much smaller percentage received probation or jail sentences. Over half of the violators simply had to forfeit the illegal wildlife they were attempting to ship into or out of the United States, while still another 20 percent had their cases dismissed by regional solicitors or the courts. Even repeat violators of the Endangered Species Act seldom received substantial fines or penalties or had other measures levied against them. For example, from fiscal year 1989 through 1993, FWS caught one importer 14 times attempting to illegally ship wildlife. However, the importer received no penalties or fines for the infractions; all the cases either resulted in the forfeiture of the illegal wildlife or were dismissed by the solicitor. Some FWS officials told us that even when penalties and fines are assessed, they are often reduced through legal maneuvers to the point where many importers-exporters view them simply as “a cost of doing business.” Although FWS officials can point to several successful detections of attempts to smuggle endangered or threatened species into or out of the United States, the officials also admit that most violators are not punished very harshly and most are allowed to remain in business despite their infractions. Although not all violations warrant such action, only one region could identify for us instances in which it had revoked or suspended import-export licenses as a result of violations. FWS officials pointed out that even if they revoke or suspend a license, oftentimes the violator will arrange to use the license of another company or individual, or will operate without one. One situation we became aware of in Florida involved a commercial importer who was convicted of illegally transporting a protected, endangered nonhuman primate. During our review, we were told that the individual was still involved in importation activities, operating under a license in his wife’s name. When we asked FWS law enforcement officials why this was allowed, they said that they were gathering the additional evidence needed to show that the individual was improperly involved in importation activities using this license. In mid-1994, FWS refused to renew the license on the basis of the additional evidence it had gathered. FWS officials told us that the degree to which solicitors and U.S. attorneys consider and prosecute the wildlife trade cases referred to them varies. Regional solicitors we contacted attributed the lack of penalties assessed for wildlife trade violations to a variety of reasons, including (1) the longer amount of time required to process a civil penalty versus a forfeiture only, (2) a lack of staff, (3) a lack of strong evidence, (4) higher priorities, and (5) the uncollectibility of any penalties that would be assessed. Those FWS officials who believed wildlife trade cases received little consideration and attention from the offices of U.S.
Attorneys attributed these situations to either wildlife crime being a low priority or the limited staff available in these offices for such cases. The North American Free Trade Agreement, better known as NAFTA, is an agreement between the governments of the United States, Mexico, and Canada that is designed, in part, to eliminate the barriers to trade in, and facilitate the cross-border movement of, goods and services among the three countries and their territories. It was approved by the Congress on December 8, 1993 (P.L. 103-182). The consensus of those with whom we spoke and the studies we reviewed on NAFTA is that the trade agreement will result in an increase in the volume of cross-border traffic in most types of trade, including wildlife. According to an assessment of NAFTA by FWS, little recognition was given to the impact of the agreement on fish and wildlife. The three governments, however, did agree to uphold the provisions of CITES, which is the major international agreement for the control of trade in wildlife and plants. (See app. I.) Many of those we spoke with believe that the limited FWS inspection staff is already taxed in terms of its ability to monitor the trade in wildlife and wildlife parts and products that is occurring along the United States’ lengthy borders with Mexico and Canada and that NAFTA will likely exacerbate this problem. FWS has included estimates in its fiscal year 1995 budget request for increased funding to address NAFTA issues; however, these estimates address only the impact of NAFTA along the U.S.-Mexico border. They do not address the impact of NAFTA on the other FWS ports of entry, including those along the U.S.-Canada border. The United States is the world’s largest wildlife trading country, and its neighbors, Mexico and Canada, are two of its partners in wildlife trade. According to TRAFFIC USA, declared wildlife trade between the United States and Mexico reached almost $19 million in 1990. Among the items the United States imported were exotic skins and leather products, furs, animal curios (for example, stuffed specimens, claws, teeth, feathers), live animals and specimens, coral and shells, and live plants. Exports from the United States to Mexico included exotic skins and leather products, furs, trophies, and tropical fish. Declared wildlife trade between the United States and Canada exceeded $133 million in 1990. Although fur and fur products dominated such trade, significant quantities of live birds, reptiles, and fish; hunting trophies; exotic leathers and leather products; and live plants were also traded. Undeclared wildlife trade between the United States and both Mexico and Canada is also occurring. Despite FWS’ efforts to educate the public on what they can and cannot bring into or take out of the United States, tourists, hunters, and others continue to be involved in numerous attempts to carry illegal wildlife and wildlife parts and products across U.S. borders. Border port statistics show that many wildlife shipments are being made that have not been cleared for entry into or exit out of the United States. Because the FWS inspection force is limited, it is difficult for the agency to adequately monitor all of the trade crossing the U.S. borders with Mexico and Canada. In fact, FWS has inspectors stationed at only 7 of the 31 border ports. Of the 46 inspectors who identified areas needing inspection coverage in their responses to our questionnaire, 30 (65 percent) named ports along the Mexican and/or Canadian borders. 
Furthermore, most of the ports that do have inspection coverage are staffed by a relatively small inspection force, as illustrated by the following examples. The San Diego, California, port of entry’s area of jurisdiction includes land border stations at San Ysidro, Otay Mesa, Tecate, Calexico, and Andrade, along the entire California-Mexico border. This is an area covered by a Customs Service workforce that consisted of 473 inspectors and 60 canine enforcement officers as of May 1993. Until recently, one FWS wildlife inspector covered the same area. In fiscal year 1992, almost 570,000 trucks along with 27.3 million private vehicles and 21.3 million pedestrians were cleared through the San Diego area. When the FWS inspector is on leave, in training, or otherwise not at the port, the port and its various border stations have no FWS inspection coverage, and FWS must rely on Customs to clear wildlife shipments and to detect and detain any that may be illegal. The El Paso, Texas, port of entry’s area of jurisdiction also includes a number of land border stations. This is an area covered by a Customs Service force that, as of June 1993, consisted of 258 inspectors and administrative personnel and 34 canine officers. Also stationed at the various El Paso border stations were 24 inspectors from APHIS. Two FWS wildlife inspectors cover this same area. In fiscal year 1992, over 575,000 commercial vehicles, including trains, were cleared through the El Paso area along with 66.5 million passenger vehicles, including buses, and 7.6 million pedestrians. As is the case in San Diego, when the two FWS inspectors at El Paso are not at the port, FWS must rely on Customs to clear wildlife shipments and to detect and detain any that may be illegal. Although it is difficult to accurately assess the impact of NAFTA, at least two studies point toward likely increases in the flow of goods, including wildlife and wildlife parts and products, between the United States and both Mexico and Canada. For example, in a 1991 report entitled A North American Free Trade Agreement: The Impacts on Wildlife Trade, TRAFFIC USA concluded, among other things, that NAFTA is likely to increase wildlife trade; NAFTA will likely increase pressure to exploit North American wildlife; NAFTA will increase wildlife trade monitoring and enforcement needs; and the U.S. government currently has no specific plans to increase its wildlife enforcement capability along the U.S.-Mexico border, despite the potential increase in wildlife trade under NAFTA. In a 1993 assessment of NAFTA entitled U.S. Fish and Wildlife Service Implementation of the North American Free Trade Agreement with Mexico, FWS concluded that the flow of goods between the United States and Mexico will increase under NAFTA and that the active illegal wildlife trade that already exists between the two countries will only increase as NAFTA is implemented. FWS also concluded that there are many areas of concern for the management of fish and wildlife resources and their habitats between the United States and Mexico and that, regardless of the effects of NAFTA, “there is a clear indication of the need to place greater emphasis and commitment of resources to address present responsibilities.” Many of the conservation groups we contacted and 68 percent of the wildlife inspectors responding to our questionnaire echoed these thoughts. The consensus was that NAFTA will increase the volume of wildlife trade and make the detection of illegal trade even more difficult.
Even though the value of the wildlife traded between the United States and Canada is seven times greater than it is between the United States and Mexico, FWS officials believe that NAFTA’s impact on wildlife trade will be most pronounced along the U.S.-Mexico border. The officials believe that the opportunity for growth in wildlife trade is greater between the United States and Mexico than between the United States and Canada. They also recognize that while U.S.-Canada trade consists of primarily well-regulated commercial trade in furs and fur products, U.S.-Mexico trade is more problematic in terms of the wildlife species that are traded along the highly permeable U.S.-Mexico border. An FWS official told us that in mid-1993 FWS asked its regions, other than Region 2, to provide assessments of the impact of NAFTA on their operations. FWS did not ask Region 2 because it had already developed an assessment of NAFTA that FWS used as a basis for its 1993 assessment. According to the FWS official, none of the regions’ assessments had changed FWS’ thinking that the most significant impact of NAFTA on wildlife trade would occur along the U.S.-Mexico border. On the basis of its 1993 assessment of NAFTA, FWS included a request for a little over $10.8 million in funding in its fiscal year 1995 budget to address NAFTA’s impact on wildlife and its associated habitat along the U.S.-Mexico border. Of that amount, approximately $1.9 million would be designated for increased law enforcement, including the wildlife inspection program. Projected increases in staffing of wildlife inspectors at U.S.-Mexico border ports are shown in table 3.1. Although FWS expects NAFTA’s greatest impact to be along the U.S.-Mexico border and has requested funds to address that impact, FWS regional officials and inspectors we contacted during our review told us that NAFTA will also affect other ports, including those along the U.S.-Canada border and/or those involved in flights between the United States and Canada or Mexico. For example, Region 5 officials estimated that they need three additional wildlife inspectors in New York—one at Champlain and two at Buffalo—to handle normal wildlife traffic and the increase in trade expected from NAFTA. Law enforcement officials in Chicago, Illinois; Miami, Florida; New Orleans, Louisiana; and Atlanta, Georgia, also believe that NAFTA will increase the number of imports and exports processed at those ports. Despite these anticipated needs, FWS had no plans at the time of our review to increase either funding or staffing at any ports of entry other than those shown above—all of which, with the exception of Houston, are along the U.S.-Mexico border. We were requested to obtain the views of various officials on the possible transfer of the wildlife inspection program from FWS to the Customs Service. The basis for considering such a move involves either the perception or recognition that (1) the program is currently not working as well as it should and (2) the work of the FWS and Customs inspectors often involves the same ports of entry. We spoke with officials of FWS, the Customs Service, APHIS, the Public Health Service, and various wildlife conservation and trade organizations to obtain their views about the advantages and disadvantages of moving the wildlife inspection program from FWS to Customs. 
Several officials told us that the advantages of such a transfer would accrue primarily from Customs’ greater inspection resources—about 6,600 Customs inspectors nationwide compared with 74 FWS wildlife inspectors in a relatively few locations. However, others were concerned that the importance of the wildlife inspection program would be lost in such a transfer. In addition, the FWS wildlife inspectors, whose jobs and lives would be most affected by a transfer, generally were not in favor of it. If the wildlife inspection program were to be moved from FWS, the Customs Service—as the country’s first line of defense against illegal wildlife shipments—would be the logical agency in which to place it. FWS, because of its small inspection force, already relies upon cooperation with Customs in its efforts to accomplish its inspection mission. A number of officials from FWS, Customs, APHIS, and wildlife conservation and trade organizations that we spoke with cited two principal advantages that would result from a transfer of the wildlife inspection program from FWS to Customs. They believed that Customs’ larger, more dispersed inspection force and the automated system it has for assessing shipments and determining which ones to inspect would enable Customs to provide greater wildlife inspection coverage than does FWS. The 6,600 Customs inspectors are located at 300 ports of entry scattered throughout the United States and Puerto Rico. Several of the FWS and Customs officials we spoke with believe that Customs, with this larger, more widely dispersed inspection force, could provide greater wildlife inspection coverage than FWS with its 74 inspectors located at only 25 ports of entry. In conjunction with this much larger inspection force, according to a number of those we spoke with, Customs’ automated system to assess various shipments and determine which shipments to inspect and which ones to clear without inspections is better than FWS’ system, which relies on its inspectors to decide which shipments to physically inspect. Several of these officials noted that it is well known within the commercial trade sector that the inspection of various commodity shipments is more stringent at some ports of entry than at others. According to FWS officials, this knowledge prompts some shippers to engage in the practice of “port shopping,” whereby they route their shipments through those ports that have a reputation for allowing certain commodities to flow through them more freely. This practice is a particularly useful tool for shippers who have been detected violating wildlife laws and treaties. Because these shippers fear scrutiny at those ports where they have been identified as violators, they route their shipments elsewhere in the hope that less familiar FWS inspectors will allow their shipments to more easily pass through. FWS currently has no formal or efficient means to check for such occurrences. Although FWS had planned at one time to upgrade its management information system, thus enabling its inspection force to more clearly and readily identify the licenses of shippers with previous violations or restrictions, it has not yet done so. If the wildlife inspection program were to be transferred to Customs, these officials expect that Customs’ automated system for assessing shipments for inspections and the much larger presence of Customs inspectors at each port of entry would help reduce the practice of port shopping. 
In terms of the disadvantages that would come from a transfer of the inspection program, some officials from FWS, Customs, APHIS, the Public Health Service, and wildlife conservation and trade organizations, as well as some wildlife inspectors, were concerned that (1) Customs would not emphasize wildlife protection; (2) Customs inspectors lack wildlife identification expertise; (3) difficulties might arise in coordinating Customs’ inspection efforts with FWS’ efforts to protect wildlife, including the implementation of CITES—functions that would most likely remain at FWS; and (4) some costs would be incurred. The results from the discussions we held and the views of the inspectors responding to our questionnaire are included in the sections that follow. Although it is recognized that Customs has a larger inspection force than does FWS, among the concerns cited by those we contacted about transferring the wildlife inspection program to Customs was that Customs was not inclined to, nor would it ever, place much emphasis on wildlife trade or protection. Customs is responsible for protecting U.S. borders from imports that do not comply with trade laws and policies and from illegal smuggling activities, such as drug trafficking and money laundering. Its enforcement mission has grown increasingly challenging over the years as the volume and value of imports have increased significantly, translating into a substantial increase in Customs’ workload. Two of our reports have dealt with the complex, challenging nature of Customs’ enforcement responsibilities and the problems Customs was experiencing in carrying out these responsibilities. For example, in our September 1992 report, we stated that the Customs Service could not adequately ensure that it was meeting its responsibilities to combat unfair foreign trade practices or protect the public from unsafe goods and that Customs was finding only a small percentage of the estimated violations in imported cargo. Our June 1994 report stated that Customs was operating in an extremely challenging environment, with a diverse mission that includes collecting duties, taxes, and fees on imports; enforcing laws intended to prevent unfair trade practices; and protecting public health by interdicting narcotics and other hazardous goods before they enter the country. Wildlife trade has not been a priority at Customs, according to Customs officials. Furthermore, results from the questionnaire we sent to all FWS inspectors disclosed that one of the primary reasons that shipments containing wildlife violations are currently slipping into and out of the United States is that, in the absence of FWS inspectors at given ports of entry at given times, inspectors from other agencies rank the detection of wildlife violations very low in comparison with their own inspection responsibilities. Even if the wildlife inspection function were transferred from FWS to Customs, some of the officials we spoke with believed that, because of Customs’ existing responsibilities and its heavy workload, the attention and emphasis Customs would be able to give to wildlife trade would never be very high. Officials at Customs headquarters were among those who held this belief. They did not look favorably on the idea of shifting the wildlife inspection program to their agency.
The officials told us that Customs already has more work than it can accomplish and that a transfer of the wildlife inspection program, without some kind of an increase in Customs’ funding and staffing, would add to its burden. Customs officials acknowledged that wildlife trade would not receive the emphasis in Customs that it is given in FWS. Some of those we spoke with cited the Customs Service inspectors’ lack of wildlife “expertise” as a major drawback to transferring the wildlife inspection function from FWS to Customs. Although Customs inspectors can perform basic wildlife identification, most of them lack the expertise that is necessary to make final species identification. Customs inspectors currently receive very limited training in the identification of illegal shipments of wildlife and wildlife parts and products. For the most part, the Customs inspectors’ orientation training devotes only about 2 hours to wildlife identification. Additional training is provided by FWS inspectors to Customs at certain locations, but only on a limited basis. Currently, while many Customs Service inspectors do possess college degrees, the degrees are not, according to Customs officials, in biology and related disciplines. FWS inspectors, on the other hand, do possess such degrees. For example, of the 63 FWS inspectors who responded to our questionnaire, 45 (71 percent) held bachelor’s or master’s degrees, and many of these degrees were in the fields of biology, wildlife sciences, and conservation. FWS wildlife inspectors and special agents currently work together to enforce wildlife laws and treaties. For example, once an inspector detects an illegal wildlife shipment, the responsibility for investigating the violation is turned over to a special agent. Many officials believe that this closeness between the inspectors and special agents would be adversely affected if the wildlife inspection function, without the special agents, were transferred to Customs. We contacted 20 FWS special agents to get their reactions to a possible shift of the inspection function to Customs. A number of these agents told us that Customs’ mission is very different from FWS’ mission and that wildlife trade would not be a priority within Customs. Furthermore, some of the agents told us that a transfer would likely lengthen the time it takes for them to perform an investigation and that greater coordination would be required between FWS and Customs. Others stated that they would probably be asked to conduct fewer investigations because Customs would likely want to use its own investigators, who, as some of the FWS agents pointed out, are not as versed in wildlife laws and treaties as they are. The FWS wildlife inspectors also work closely with the FWS Office of Management Authority, which is responsible for implementing CITES. This Office generally considers more than 4,700 applications each year for permits to engage in otherwise prohibited activities, such as the killing, taking, transporting, or trading of CITES-protected wildlife species. FWS inspectors ensure that all wildlife shipments entering the United States are accompanied by appropriate permits and are not in violation of CITES or various wildlife laws. FWS and Customs officials we spoke with believed that a transfer of the wildlife inspection function could complicate the coordination of activities between the wildlife inspectors and this Office, simply because two separate, distinct agencies could be involved rather than one.
Although we are unaware of any cost-benefit analysis that has been done for a transfer of the wildlife inspection program, Customs headquarters officials we spoke with mentioned that some costs would be incurred from such a move. According to these officials, obvious costs would be incurred in preparing new work space for the inspectors and moving them and their associated equipment, furniture, and other items from their current locations to new ones. Costs would likely be incurred for various administrative activities associated with or resulting from the move, including those involved with planning, integrating accounting and management information systems, processing personnel matters, and printing. Costs associated with the cross-training of FWS and Customs inspectors would also be incurred. Additionally, Customs headquarters officials told us that if such a transfer took place, other less quantifiable outcomes would likely ensue, including (1) the disruption, instability, and loss of continuity in the wildlife inspection program during, and for a while after, the transfer; (2) a need to replace FWS inspectors who chose not to transfer; (3) uncertainty within the regulated import-export community; (4) delays in issuing regulations and operational guidance; and (5) a need to establish proper communication channels within Customs and between Customs and other federal inspection agencies for Customs’ new wildlife inspection responsibility. The questionnaire we sent to FWS wildlife inspectors and the wildlife desk officer included several questions on a possible move of the wildlife inspection program from FWS to Customs. Those responding to the questionnaire generally were not in favor of having the program and, in all likelihood, their jobs moved to Customs. We asked the inspectors if they thought the protection of wildlife would be enhanced if the inspection function were moved to Customs. Under a scenario in which all the FWS inspectors would be moved to Customs as a specialized, segregated unit, 16 of the 63 respondents (25 percent) strongly or somewhat agreed that wildlife protection would be enhanced, 35 (56 percent) somewhat or strongly disagreed, 8 (13 percent) neither agreed nor disagreed, and 4 (6 percent) indicated that they had no basis to judge such a question. Under a scenario in which all the FWS inspectors would be moved to and absorbed into Customs without any emphasis on wildlife protection, only 2 respondents (3 percent) strongly or somewhat agreed that wildlife protection would be enhanced, 55 (87 percent) somewhat or strongly disagreed, 4 (6 percent) neither agreed nor disagreed, and 2 (3 percent) indicated that they had no basis to judge such a question. The results of the questionnaire indicated that a transfer of the inspection function would negatively affect the FWS inspectors’ morale. Also, the majority of the inspectors believed that their education and work experience are valued more at FWS than they would be at Customs. FWS’ wildlife inspection program was established almost 20 years ago to accomplish the dual mission of monitoring trade in wildlife and intercepting illegal imports and exports of wildlife. On the basis of our review, we believe that FWS has had difficulty in accomplishing either aspect of this mission. Under the current program, the number of shipments processed and the rate of inspections performed at FWS ports of entry vary considerably, and most wildlife shipments receive no physical inspection.
As a result, many undetected illegal shipments of wildlife are thought to be occurring. Although it is impossible to precisely determine how much illegal trade in wildlife and wildlife parts and products is occurring or FWS’ impact on it, estimates are that FWS is detecting less than 10 percent of the violations associated with declared wildlife shipments (those presented to it for clearance) and a much lower percentage of the violations associated with undeclared shipments. The approval of NAFTA within the past year is likely to increase the volume of cross-border traffic among the United States, Mexico, and Canada, thus decreasing even further the chance of violations being detected. Moreover, many violations currently detected by the program result only in the abandonment or forfeiture of the wildlife or wildlife parts or products being illegally shipped, which does little to encourage compliance with wildlife trade laws and treaties. Despite recent increases in the wildlife inspection program’s appropriations, FWS and others largely attribute the program’s limited ability to accomplish its inspection mission to a need to hire more inspectors, as well as more administrative and support personnel, and to provide the inspectors with more resources, such as a better information system, safety equipment, reference books, computer and related equipment, travel and transportation, office space, and uniforms. However, given current budgetary constraints and downsizing efforts within the federal government, increased funding for the wildlife inspection program at any level of significance, in all likelihood, will not occur. Although recently proposed revisions in FWS’ user fees would provide additional funding for the program, current FWS staffing ceilings would prevent any of this increased funding from being used to increase the size of the inspection force. Furthermore, program data that reflect significant variances in inspection rates at the ports of entry and in the number of inspections performed by individual inspectors raise questions about the allocation of resources within the program. Before fiscal year 1994, FWS had not established any goals for its wildlife inspection program. The ones that were established in fiscal year 1994, while perhaps representing a start, do little to measure program performance or define what an effective inspection program should look like. Rather than establishing goals that are outcome-oriented and performance-related, FWS established inspection goals that it believed it could attain—physically inspecting at least 25 percent of all shipments presented for import or export at its 11 designated ports of entry and at least 50 percent of all such shipments at nondesignated ports of entry where it has assigned wildlife inspectors. These goals, however, do not take into account such things as the (1) types of shipments that are being processed, (2) extensiveness of the inspections performed, and (3) number of illegal shipments that are intercepted. Moreover, it must also be recognized that, in its efforts to achieve its inspection mission, FWS relies on the cooperation of other federal agencies, including Customs and APHIS, which assist in the detection of illegal wildlife shipments, and the Department of the Interior’s Office of the Solicitor and the Department of Justice, which handle the prosecution of the cases resulting from violations detected by the wildlife inspection program. 
As such, the achievement of any of FWS’ goals hinges, in part, on the degree of cooperation that FWS receives from these other agencies. Furthermore, any determinations as to whether the goals established are being achieved will have to be made using LEMIS data that are known to be inaccurate and incomplete—shortcomings that FWS recognizes and has plans to address. Without outcome-oriented, performance-related goals and an accurate management information system to report progress toward achieving them, FWS management and the Congress are hindered in making informed decisions about how well the inspection program is accomplishing its mission and about the level of staff and other resources needed by the program. We believe that it may take some time for FWS to develop outcome-oriented, performance-related goals and an accurate information system with which it can measure progress toward achieving such goals. In the meantime, variances that currently exist in such things as the number of shipments processed, number of inspectors assigned to the specific designated and nondesignated ports of entry, number of shipments inspected, and number of inspections performed per inspector at the various ports of entry suggest that FWS is not making the most efficient and effective use of its limited resources. A comprehensive examination by FWS of the size of the inspection staff and the level of accompanying resources that should be devoted to each port would help the agency to more clearly define an effective wildlife inspection program. FWS’ difficulties in accomplishing its inspection mission have caused some to suggest that other alternatives be explored, such as transferring the program to Customs, an agency with a larger, more widely dispersed inspection force. Those whom we contacted to obtain their views on this suggestion identified both advantages and disadvantages to such a move. For example, although the primary advantage of such a transfer is Customs’ larger inspection force, many of those we spoke with believed that because of Customs’ already heavy workload, a transfer could, in fact, diminish the attention now afforded wildlife shipments. If such a transfer is ever formally proposed, each of the advantages and disadvantages of moving the wildlife inspection program would have to be carefully considered by policymakers. To ensure that the wildlife inspection program is better able to accomplish its mission and that its current resources are more efficiently and effectively used, we recommend that the Secretary of the Interior direct the Director of FWS to: Develop outcome-oriented, performance-related goals that are indicative of an effective inspection program and take into account not only the number of shipments processed and inspected, but also such things as the extensiveness of the inspections performed and the results of those inspections. Give priority to the completion of FWS’ current plans to improve the timeliness, accuracy, and completeness of the information contained in LEMIS, including the information relating to (1) the levels and trends in wildlife trade; (2) port of entry inspection rates and inspector productivity; (3) results of inspections, including fines and penalties assessed; and (4) repeat wildlife trade violators.
Conduct a comprehensive examination of the operations of each of the designated and the nondesignated ports of entry and the size and level of accompanying resources currently allocated to each of these ports, looking for ways in which the allocation of resources might be adjusted to respond to current needs at the specific ports and to improve the program’s overall efficiency and effectiveness. Identify the principal reasons for the lack of more frequent and effective pursuit of wildlife inspection program violations and, in conjunction with the Department of the Interior’s Office of the Solicitor and the Department of Justice, determine what measures can be taken, within existing resources and funding constraints, to make law enforcement efforts more efficient and effective. Proceed with plans to increase the user fees charged by the wildlife inspection program and apply the increased funding to those areas where resource needs have been identified. The Department of the Interior generally agreed with our recommendations to improve LEMIS, to proceed with plans to increase the fees charged for inspection services, and to develop outcome-oriented, performance-related goals that are indicative of an effective inspection program. Interior disagreed with our recommendation to examine the operations of the various ports of entry and look for ways in which the allocation of resources might be adjusted to respond to the current needs at the ports and to improve the program’s overall efficiency and effectiveness. It did indicate, however, that workload factors have been used to justify increased resources at several ports of entry (namely, Los Angeles, Chicago, and U.S.-Mexico border ports) and that they would continue to be considered. Interior pointed out that any decision to reallocate existing resources must also factor in political and economic considerations and the fact that FWS has determined that it must maintain a minimum staffing presence at its designated ports to provide uninterrupted service. We agree that such factors should be a part of any reallocation considerations. Our analysis indicates, however, that the variances among the different ports of entry in terms of such things as the number of (1) inspectors assigned, (2) shipments processed, and (3) shipments inspected are significant enough to warrant additional scrutiny by FWS of its allocation of resources to the wildlife inspection program. Interior also disagreed with a recommendation in our draft report to conduct a study of the penalties and fines assessed as a result of violations detected by the wildlife inspection program, and then meet with the Office of the Solicitor and the Department of Justice to identify ways in which the parties might work better together in catching and prosecuting those who violate wildlife trade laws and treaties. Interior stated that it was not apparent how a study of the penalties and fines is relevant to identifying ways in which the enforcement and prosecution parties can work better together. In our opinion, the number and dollar amounts of the penalties and fines assessed as a result of FWS’ wildlife inspection program is an important gauge of the effectiveness of the program. 
While it is unrealistic to expect every violation to result in a penalty or fine, we believe that FWS management should be concerned by the fact that few penalties and fines are currently being assessed for violations detected by the wildlife inspection program and that this lack of penalties and fines provides little or no deterrent to those who would otherwise be inclined to violate wildlife laws and treaties. Given the limited resources that FWS devotes to the wildlife inspection program, it is our opinion that FWS should attempt to get the most out of those resources. One such way would be for FWS—armed with basic data on the numbers of inspections conducted, violations detected, and penalties and fines assessed for these violations—to initiate discussions with the Office of the Solicitor and the Department of Justice that would seek to ensure a more efficient and effective wildlife inspection program. We have revised our recommendation to more specifically state the action that we believe is needed. Interior provided several technical clarifications, which we incorporated into the report as appropriate. Interior’s comments in their entirety and our responses are presented in appendix III. In addition to the written comments received from Interior, we discussed the contents of this report with officials in the Department of the Treasury’s Customs Service. We were told that the report clearly states the issues surrounding the wildlife inspection program, particularly in connection with the possible transfer of the inspection program, and that the report accurately states what Customs officials consider to be the advantages and disadvantages of such a transfer. We were told that although wildlife is not emphasized at Customs (because of other priorities), Customs, in a limited manner, does look at wildlife shipments and includes some of them in its automated cargo system, which is accessible to FWS inspectors. Additionally, we were provided with several technical clarifications, which we incorporated into the report. | Pursuant to a congressional request, GAO reviewed the Fish and Wildlife Service's (FWS) wildlife inspection program, focusing on the: (1) effectiveness of the wildlife inspection program; (2) potential impact of the North American Free Trade Agreement (NAFTA) on wildlife trade and wildlife shipment inspections; and (3) advantages and disadvantages that might accrue from transferring the inspection program to the Customs Service. 
GAO found that: (1) FWS has not fully met its wildlife inspection program mission of monitoring and intercepting illegal wildlife shipments despite recent budget increases; (2) the FWS inspection program needs more wildlife inspectors, safety equipment, and administrative support; (3) FWS does not have complete, accurate, and timely data on the inspection program; (4) FWS has proposed increasing user fees to produce additional program funding; (5) the government's failure to assess penalties, fines, and other punitive actions against violators does little to deter new offenses and lowers inspectors' morale; (6) budget cuts and downsizing efforts further jeopardize the program's inspection mission; (7) NAFTA is likely to increase wildlife trade among the treaty parties, which will increase the wildlife inspectors' workload; (8) FWS believes NAFTA will have the greatest impact at the Mexican border; (9) transferring the wildlife inspection program to Customs would provide greater inspection coverage due to Customs' larger, more dispersed inspection force and automated inspection system; and (10) the disadvantages of transferring the inspection program to Customs include Customs inspectors' lack of wildlife identification expertise, Customs' likely failure to emphasize wildlife inspection, difficulties in coordinating with FWS to protect endangered species, and potential increased costs. |
DOD’s organizational structure includes the Office of the Secretary of Defense, the Joint Chiefs of Staff, the military departments, numerous defense agencies and field activities, and various unified combatant commands that contribute to the oversight of DOD’s acquisition programs. Figure 1 provides a simplified depiction of DOD’s organizational structure. The Under Secretary of Defense for AT&L serves as the Defense Acquisition Executive and has responsibility for oversight of MAIS acquisition programs. AT&L has policy and procedural authority for the defense acquisition system, which establishes the steps that DOD programs generally take as DOD plans, acquires, deploys, operates, and maintains its IT systems (discussed in more detail following this section). Additionally, AT&L is the principal acquisition official of the department and is the acquisition advisor to the Secretary of Defense. AT&L’s authority includes directing the military services and defense agencies on acquisition matters and making milestone decisions for MAIS programs. AT&L can delegate decision authority for MAIS programs to a component head, who may further delegate the authority to the component acquisition executive. DOD’s CIO is the Principal Staff Assistant and senior IT advisor to the Secretary of Defense. This role includes overseeing many national security and defense business systems and managing information resources. The CIO coordinates with AT&L to develop and maintain a process for assessing and managing the risks related to the department’s IT acquisitions, including MAIS programs. Department of Defense Instruction 5000.02 establishes policy for the management of all DOD acquisition programs. In January 2015, DOD updated these guidelines, which outline the framework for MAIS programs. This framework consists of six models for acquiring and deploying a program, including two hybrid models that each describe how a program may be structured based on the type of product being acquired (e.g., software-intensive programs and hardware-intensive programs). A generic acquisition model that shows all of the program life-cycle phases and key decision points is shown in figure 2 and described following the figure. Materiel solution analysis: Refine the initial system solution (concept) and create a strategy for acquiring the solution. A decision—referred to as milestone A—is made at the end of this phase to authorize entry into the technology maturation and risk reduction phase. Technology maturation and risk reduction: Determine the preferred technology solution and validate that it is affordable, satisfies program requirements, and has acceptable technical risk. A decision—referred to as milestone B—is made at the end of this phase to authorize entry of the program into the engineering and manufacturing development phase and award development contracts. An acquisition program baseline is first established at the milestone B decision point or at program initiation, whichever occurs later. A program’s first acquisition program baseline contains the original life-cycle cost estimate (which includes acquisition and operations and maintenance costs), the schedule estimate (which consists of major milestones and decision points), and performance parameters that were approved for that program by the milestone decision authority.
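To make the three elements of a first acquisition program baseline concrete, the following minimal sketch shows one way such a record could be represented; the field names and example values are illustrative assumptions, not drawn from any DOD system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AcquisitionProgramBaseline:
    """Hypothetical record of the three elements a first baseline contains."""
    life_cycle_cost_estimate: dict   # acquisition vs. operations and maintenance costs, in dollars
    schedule_milestones: dict        # major milestones and decision points
    performance_parameters: list    # key performance parameters approved by the milestone decision authority

# Example values are invented for illustration only.
baseline = AcquisitionProgramBaseline(
    life_cycle_cost_estimate={"acquisition": 1.2e9, "operations_and_maintenance": 0.8e9},
    schedule_milestones={"milestone_b": date(2012, 3, 1), "full_deployment_decision": date(2016, 9, 30)},
    performance_parameters=["net-ready", "system availability"],
)
print(baseline.schedule_milestones["full_deployment_decision"])
```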
The first acquisition program baseline is established after the program has refined user requirements and identified the most appropriate technology solution that demonstrates that it can meet users’ needs. Engineering and manufacturing development: Develop a system and demonstrate through testing that the system meets all program requirements. A decision—referred to as milestone C—is made during this phase to authorize entry of the system into the production and deployment phase or into limited deployment in support of operational testing. Production and deployment: Achieve an operational capability that meets program requirements, as verified through independent operational tests and evaluation, and implement the system at all applicable locations. Operations and support: Operationally sustain the system in the most cost-effective manner over its life cycle. MAIS programs enable DOD to organize, plan, direct and monitor important mission operations. As previously mentioned, MAIS programs must comply with certain annual and quarterly reporting requirements identified in statute. Each calendar year, DOD must submit to Congress budget justification documents on each MAIS program, including information on cost, schedule, and performance. Specifically, these programs must report, among other things, on the development and implementation schedules and total acquisition and full life-cycle cost estimates and provide a summary of the key performance parameters for each program. DOD must also provide a summary of any significant changes to information previously provided for each program. Moreover, on a quarterly basis, the program manager for each MAIS program is required to provide the senior DOD official responsible for the program a written report that identifies any variance in the program’s cost, schedule, or performance. Depending on the determination after reviewing the variances identified in the quarterly report, the senior DOD official responsible for the program must notify the congressional defense committees of any programs that have experienced either a significant or critical change. During our review, MAIS programs were required to comply with the following reporting requirements: Significant change. A significant change must be declared if a program experienced a schedule delay of more than 6 months but less than a year; estimated total acquisition or full life-cycle cost for the program has increased by at least 15 percent but less than 25 percent; or there has been a significant adverse change in the expected performance of the system. If such an event occurs, the senior DOD official responsible for the program must notify the congressional defense committees in writing no later than 45 days after receiving the quarterly report from the program manager. Critical change. A critical change must be declared if a program failed to achieve a full deployment decision within 5 years after the milestone A decision or, if there was no milestone A decision, the date when the preferred alternative was selected for the program; experienced a schedule delay of 1 year or more; experienced an estimated total acquisition or full life-cycle cost increase of 25 percent or more over the original estimate; or experienced a change in the expected performance of the system that will undermine the ability of the system to perform as intended. 
If such an event occurs, the senior DOD official responsible for the program must carry out an evaluation and submit a critical change report to the congressional defense committees no later than 60 days after receiving the quarterly report. Since the December 19, 2014, enactment of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015, MAIS programs are now required to declare a significant change— instead of a critical change—if they fail to achieve a full deployment decision within 5 years after the milestone A decision, the date when the preferred alternative was selected for the program (excluding any time during which program activity is delayed as a result of a bid protest). More recently, the National Defense Authorization Act for Fiscal Year 2016 directed the Secretary of Defense to issue guidance for MAIS programs to establish an acquisition baseline within 2 years after program initiation. This statute provides a response to a recommendation we made in our last annual report on MAIS programs. In particular, we found that these programs spent, on average, more than 5 years and $450 million prior to establishing baselines. We noted that programs that have not established baselines were subject to less oversight and could not be measured against cost, schedule, and performance targets. Also, the propensity to carry out MAIS programs for multiple years prior to committing to baselines is inconsistent with incremental and rapid development as called for in federal law and GAO’s IT management best practices. Accordingly, we recommended that these programs be baselined within 2 years; for which DOD partially concurred. We maintained that establishing baselines within 2 years would improve outcomes and increase accountability. DOD’s CIO, along with other agencies, must report on the progress of its IT investments, including MAIS programs, on a public website known as the IT Dashboard. OMB established this website in June 2009 to improve the transparency and oversight of agencies’ investments. The Dashboard visually displays federal agencies’ cost, schedule, and performance data for over 700 major federal investments at 26 federal agencies. It also includes a risk rating that is to be performed by agency CIOs. According to OMB, these data are intended to provide a near-real- time perspective on the performance of these investments. The public display of agency data is intended to allow OMB; other oversight bodies, including Congress; and the general public to hold federal agencies accountable for their progress and results. In August 2011, OMB issued guidance that stated, among other things, that agency CIO’s shall be held accountable for the performance of IT investments. The Dashboard presents performance ratings for individual investments using metrics that OMB has defined—cost, schedule, and CIO evaluation. If OMB or the agency CIO determine the reported data is not timely or reliable, the CIO must notify OMB and establish within 30 days of this determination an improvement program and the progress the agency is making. According to OMB, the addition of CIO names and photos on the website is intended to highlight this accountability and link the Dashboard’s reporting on investment performance. 
In order to enhance transparency and improve risk management of federal IT acquisitions, Congress codified the Dashboard reporting process through key provisions, known as the Federal Information Technology Acquisition Reform provisions in the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015. Entities such as the Project Management Institute, the Software Engineering Institute at Carnegie Mellon University, and GAO have developed and identified best practices to help guide organizations to effectively plan and manage their acquisitions of major IT systems, such as MAIS programs. Our prior reviews have shown that properly applying such practices can significantly increase the likelihood of delivering promised system capabilities on time and within budget. These practices include, but are not limited to: Requirements management: Requirements establish what the system is to do, how well it is to do it, and how it is to interact with other systems. Appropriate requirements development involves eliciting and developing customer and stakeholder requirements, and analyzing them to ensure that they will meet users’ needs and expectations. It also consists of validating requirements as the system is being developed to ensure that the final systems to be deployed will perform as intended in an operational environment. Risk management: A process for anticipating problems and taking appropriate steps to mitigate risks and minimize their impact on program commitments. It involves identifying and documenting risks, categorizing them based on their estimated impact, prioritizing them, developing risk mitigation strategies, and tracking progress in executing the strategies. According to statute, for programs that declare a critical change, the report that is submitted to Congress must include a written certification stating that: the automated information system or IT investment to be acquired is essential to the national security or to the efficient management of the DOD; there is no alternative to the system or IT investment which will provide equal or greater capability at less cost; the new estimates of the costs, schedule, and performance parameters have been determined, with the concurrence of the Director of Cost Assessment and Program Evaluation, to be reasonable; and the management structure for the program is adequate to manage and control program costs. All 18 MAIS critical change reports in our review contained the required elements. In addition, this report must be prepared and submitted to Congress no later than 60 days after the senior DOD official responsible for the program receives the quarterly report from the program manager that leads to the determination that a critical change event has occurred. Programs that do not submit the report to Congress within the 60-day period are statutorily prohibited from obligating appropriated funds for any major contract until the date that Congress receives the report. Further, if a MAIS program violates the statutory prohibition against obligations, it will also violate the Antideficiency Act. This act prohibits an officer or employee of the United States government from making or authorizing an expenditure or obligation in excess of or in advance of available appropriations. The Antideficiency Act also requires that an appropriation must be available for an agency to incur an obligation. 
Thus, if DOD incurs an obligation against an appropriation that is not legally available, the department has violated the act. Violating the Antideficiency Act would require the Secretary of Defense to immediately report to the President and Congress all relevant facts and a statement of actions taken. Of the 18 MAIS programs experiencing a critical change, most exceeded the 60-day reporting requirement, several by a substantial amount. Specifically, 16 exceeded the 60-day reporting requirement, and 10 of those programs took over 100 days to report. Of the 10 programs, 5 took over 200 days to report. Two programs—Teleport Gen I/II and General Fund Enterprise Business Systems—delivered their reports to Congress within the 60-day requirement. Figure 3 shows the extent to which programs met or exceeded the 60-day reporting requirement. Officials from several programs provided various reasons for why they delayed submitting their critical change reports. For example: Mission Planning Systems #2 submitted a report to the Office of the Secretary of Defense on time, but it took 29 days to transmit the package to Congress, making the report 3 days late. The CAC2S program was late because of the need to conduct an independent assessment that was directed by DOD. The Expeditionary Combat Support System program was delayed by changes to the size and complexity of its originally scoped effort and contracting process, which, in turn, required additional updates. The update triggered a process to re-evaluate the revised strategy. According to a DOD AT&L official, 60 days is too short a period to perform a program evaluation and achieve all the coordination necessary for an important communication with Congress. The official also said the addition of the Office of Cost Assessment and Program Evaluation requirement to review and approve the reports, as mandated by the Weapon Systems Acquisition Reform Act of 2009, consumes much of the 60-day allotment. A Cost Assessment and Program Evaluation official noted that reviews often exceed the 60-day period because of, most notably, the significant amount of time needed to collect and develop comprehensive information used to determine a program’s cost. The official added that DOD is working to strengthen its data collection efforts to improve the ability of the Office of Cost Assessment and Program Evaluation to complete its evaluation, such as reviewing the basis for the revised cost and schedule estimates. However, the official noted that there has been no overall evaluation or study of the cause of the delays. While it may be possible that 60 days is too short a time frame for submitting reports, without understanding the cause of the delays, DOD is not in a position to state what time frame would be feasible. Further, the fact that two programs were able to submit reports in a timely manner suggests that 60 days is achievable. Until DOD ascertains the cause of the delays and implements corrective actions, the reports may continue to be delivered late, which may impact the timeliness of the information considered by Congress in making oversight and funding decisions for MAIS programs. This may also affect budget and other strategic decisions about which programs to prioritize. Further, DOD does not have a mechanism to monitor and ensure that MAIS programs with late reports were restricted, as required by law, from obligating funds on major contracts prior to Congress receiving the report.
This mechanism is especially important because of the potential for violating the Antideficiency Act. Although DOD states in its guidance that program managers should not obligate any funds during the entire period in which the report is being prepared, DOD does not currently have a way to monitor this. According to a DOD AT&L official, DOD is not required by statute, regulation, or guidance to collect the information for monitoring purposes. However, our guidance on internal controls for federal agencies states that agency management should establish a baseline to monitor the current state of a control system. Once established, management should monitor the agency’s internal control system through ongoing monitoring and separate evaluations. DOD does not have an internal management control to monitor the system and evaluate whether programs are complying with the DOD guidance. Instead, DOD relies on the programs to act in accordance with the law. The AT&L official said the need to obligate funds on major contracts should be a driver for programs to submit their reports to Congress as expeditiously as possible. However, with so many programs submitting critical change reports well after the 60-day period, there is a risk that programs could violate the prohibition on obligations and, in turn, the Antideficiency Act. The extent to which the three selected MAIS programs in our study experienced changes in their cost and schedule estimates and met performance targets varied. Specifically, the Army and Air Force programs experienced slight changes in their cost and schedule estimates, while the Navy program experienced more significant changes. In addition, only one program, the Air Force’s DEAMS, did not fully meet its technical performance targets. Table 1 provides the status of cost and schedule changes and the results against technical performance targets for the programs. See appendix II for the detailed profiles of each program. As of January 2016, the latest life-cycle cost estimate for the Army’s TMC program had increased about 19 percent from the program’s February 2008 acquisition program baseline estimate (from approximately $1.97 billion up to $2.34 billion). Program officials attributed the cost increase to a breach in the research, development, test, and evaluation cost estimate that was reported to Congress. The program’s estimated development cost increased by 45 percent over the original acquisition program baseline due to program scope changes derived from the realignment of certain missions, such as the endorsement of the Command Post of the Future as a foundation for mission command. As of January 2016, the TMC program had experienced a 3-month slippage in its full deployment date, currently scheduled for December 2018. The slippage was within the program’s pre-established threshold allowance to account for minor changes in schedule, and program officials stated that this slippage was considered to be a low risk that the program has accepted. Program officials stated that, although the Command Post of the Future product is 95 percent fielded and is on schedule to reach full deployment by December 2018, continued support of the Command Post Computing Environment is needed beyond fiscal year 2019.
As of January 2016, the TMC program met all three of its key performance targets, which include supporting net-centric military operations, disseminating orders with future Army and Joint Command and Control Systems, and displaying unified information on subject matters. As of October 2015, the latest life-cycle cost estimate for the Navy’s CAC2S Increment 1 program had increased about 477 percent from its first acquisition program baseline estimate (from approximately $347 million up to $2 billion). As previously reported, the program’s early developmental challenges, including program scope growth and restructuring, contributed to the increase in its cost estimate. According to program documentation, despite the program’s initial challenges with its cost estimates due to an increase in its operations and support expenditures, the CAC2S program has demonstrated gradual improvement, reporting a cost avoidance of $54.4 million from implementing the DOD Better Buying Power initiatives, which benefited from competitive market forces that drove down costs. As of October 2015, the program’s latest life-cycle estimate relative to its November 2010 production acquisition program baseline cost estimate had decreased by about 19 percent (from approximately $2.46 billion down to $2 billion). As of October 2015, compared to its first acquisition program baseline schedule, the program had experienced a 13-year and 9-month slippage in its full deployment date—currently scheduled for March 2022. As previously reported, factors that contributed to the prior schedule slippage included the addition of new requirements and program restructuring. However, program officials stated that the program has been executing in accordance with its approved schedule. As of October 2015, the estimated milestone C phase 2 had been delayed by 6 months from the program’s production acquisition program baseline, but the milestone was approved within the program’s pre-established schedule threshold. Program officials attributed this delay, in part, to administrative factors, which included the review and approval process. CAC2S achieved milestone C phase 2 approval in February 2015, but the acquisition decision memorandum was not signed until March 2015. As of October 2015, program officials reported that, during performance testing, the program was meeting both of its key performance targets related to net-ready and data fusion. As of October 2015, the latest life-cycle cost estimate for the Air Force’s DEAMS program had increased about 9 percent from its first acquisition program baseline estimate of February 2012 (from approximately $1.43 billion up to $1.56 billion). Program officials attributed the cost increase, in part, to program scope growth and the addition of software upgrade enhancements. Specifically, as of October 2015, the program’s life-cycle cost estimate incorporated additional infrastructure maintenance costs throughout the life cycle for added performance monitoring and additional deployment support. Also, according to program officials, the program brought forward increment 2 requirements and a second Oracle software upgrade in 2021. DEAMS experienced a 6-month slip in its milestone C but remained within its threshold, a predefined point beyond which programs are considered at increased risk. However, it did experience a 1-year slip in its full deployment decision date—currently scheduled for February 2016.
Program officials attributed this slippage to findings identified in DEAMS’s initial operational test and evaluation report. Although the program was established in August 2003, a full deployment date had not been determined. As of September 2015, program officials expected full deployment to be reached by October 2016. As of October 2015, DEAMS program officials reported that the program did not meet all of its nine key performance targets. Specifically, DEAMS did not meet five performance targets: Balance with Treasury, Accurate Balance of Available Funds, Timely Reporting, Period-End Processing, and Net-Ready. For example, the two operational assessments that were conducted from 2012 to 2014 identified significant weaknesses in three measures of effectiveness and suitability. Further, the Initial Operational Test and Evaluation report identified system performance issues within the DEAMS program, which included change management issues, transaction backlogs, and ineffective reporting tools. Subsequently, the Air Force Operational Test and Evaluation Center provided 29 recommendations for the Air Force to implement to support the successful fielding of DEAMS Increment 1, 17 of which were documented as completed, while corrective actions for the remaining 12 were still underway. However, according to program documentation, DEAMS must demonstrate measurable improvement by the full deployment decision date of February 2016 in order to avert further delays in its fielding schedule. According to the Software Engineering Institute’s Capability Maturity Model Integration for Acquisition, appropriate requirements management involves establishing an agreed-upon set of requirements, ensuring traceability between requirements and work products, and managing any changes to the requirements in collaboration with stakeholders. Likewise, an effective risk management process identifies potential problems before they occur, so that risk-handling activities may be planned and invoked, as needed, across the life of the project in order to mitigate the potential for adverse impacts. Table 2 provides key practices used to comprehensively manage requirements and risk. All three selected programs implemented IT acquisition best practices for risk management, but requirements management best practices were not consistently implemented by the programs. Table 3 provides a summary of the extent to which requirements and risk management best practices were implemented by each program. The Army implemented all risk management best practices for the TMC program. For example, the risk management plan, dated May 2014, identified risk sources including, among other things, unclear system requirements, immature technology, and an unstable organizational environment. In addition, program officials analyzed, categorized, and controlled risks using a probability and impact model that considered the likelihood of each risk (from very low to very high) and its potential consequences. The program also used a risk radar tool to track and monitor risks, which are reviewed weekly during staff meetings and updated monthly. Further, TMC’s risk management plan indicated that contingency plans are invoked whenever adjustments to cost, schedule, or performance are required. In taking these and other actions, the TMC program had established and utilized the key risk management practices. Doing so should better position the program to mitigate adverse impacts from potential problems before they occur.
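As a rough illustration of the probability-and-impact approach described for TMC, the sketch below scores hypothetical risks; the scales, register entries, and scoring rule are invented for illustration and are not TMC’s actual model.

```python
# Hypothetical probability-and-impact scoring; not TMC's actual model.
LEVELS = ["very low", "low", "moderate", "high", "very high"]

def risk_score(probability: str, impact: str) -> int:
    """Combine likelihood and consequence into a simple ordinal score."""
    return (LEVELS.index(probability) + 1) * (LEVELS.index(impact) + 1)

risk_register = [
    {"risk": "unclear system requirements", "probability": "moderate", "impact": "high"},
    {"risk": "immature technology", "probability": "low", "impact": "very high"},
    {"risk": "unstable organizational environment", "probability": "low", "impact": "moderate"},
]

# Rank risks so that mitigation effort goes to the highest-scoring items first.
for entry in sorted(risk_register,
                    key=lambda r: risk_score(r["probability"], r["impact"]),
                    reverse=True):
    print(f'{entry["risk"]}: score {risk_score(entry["probability"], entry["impact"])}')
```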
The Army had implemented three requirements management best practices for the TMC program, but did not fully implement two: managing requirements changes and ensuring that work products remain aligned with requirements. For example, one key practice that the program implemented was maintaining bidirectional traceability among requirements and work products. Specifically, program officials used a traceability tool to generate a requirements matrix that tracks all of the program’s elements to the requirements. Regarding managing requirements changes, while program officials tracked requirements changes in a database, requirements changes were not always available at the stakeholder level to evaluate the impact and determine the status of requirements changes for all elements of the program. Even though TMC was in the production phase, it relied solely on the functions within its requirements database rather than a requirements management document. Without such a document, the program does not have a formal mechanism to track and ensure that project plans, activities, and work products are consistent with defined requirements. The Navy implemented all risk management best practices for the CAC2S program. For example, the risk management plan assessed risks in terms of their probability and consequence of occurrence. The program also identified and documented risks and maintained supporting documentation that included risk and issue register logs, detailed reports, the integrated master schedules, and risk assessments. In addition, the risk management plan established the strategy, including the processes, to guide risk mitigation efforts at the lowest appropriate level. The program’s risk registers, which included risk mitigation steps, were provided by program officials to demonstrate that risk mitigation plans had been implemented for each risk. In taking these and other actions, the program had established and utilized effective risk management practices. The Navy implemented three requirements management best practices for the program but did not implement two. For example, one key practice implemented was managing requirements changes. Specifically, the program office demonstrated the ability to effectively manage changes to requirements as they evolved during the project. Program officials did this by documenting the alignment of requirements to their respective requirements changes, maintaining a history of requirements changes with rationale explaining each change request within its configuration documentation, and publishing requirements data using a database tool. However, the program did not fully implement the practice of maintaining traceability among requirements and work products to ensure that work products were in alignment. Specifically, according to the program’s July 2014 bidirectional traceability tool, 25 specifications and 13 capabilities did not map to their respective work products. According to program officials, as of November 2015, 11 of the 25 specifications had been mapped, but, due to an oversight, these mappings had not been recorded, while the remaining 14 specifications did not map to their respective capability production documentation to demonstrate completeness.
According to program officials, 4 capabilities had recently been mapped, but the remaining 9 capabilities listed in the capability production documentation were not traced to their respective requirements. According to program officials, the mapping discrepancy for 7 of these capabilities was attributed to unfunded, obsolete, and programmatic requirements. Furthermore, after we notified the program of the gaps we identified, officials stated that they would take action to map the 14 specifications and 2 capabilities to their respective requirements work products. Regarding the alignment of requirements, the program’s requirements management plan had not been updated since May 2009, and software specifications and capabilities were not consistently maintained. According to program officials, the current requirements management plan had been previously reviewed and was determined to be suitable for the purpose of implementing requirements management best practices. According to the Capability Maturity Model® Integration for Acquisition, without a clear linkage between requirements and all the lower-level requirements and capabilities, the program may not be effectively managing development efforts in accordance with the most recent requirements. Until work products are updated, the program cannot provide assurance that its requirements are aligned with the most up-to-date work products and is at risk of cost and schedule consequences. The Air Force implemented all seven risk management best practices for the DEAMS program. To the program’s credit, DEAMS has made great strides in improving its risk management practices. Since our prior MAIS review, the program has made improvements such as periodically monitoring the status of each risk and ensuring that risk reports were up to date, including the status of actions to mitigate risks. Other key practices implemented include defining parameters to analyze and categorize risks, documenting risks, and developing risk mitigation plans in accordance with the risk management strategy, among other areas. In taking these and other actions, the DEAMS program had established and utilized effective risk management practices. The Air Force had implemented four requirements management best practices, but did not fully implement one: the practice of developing an understanding with providers on the meaning of requirements. For example, the Air Force ensured that project plans and work products were aligned with the most recent requirements. Specifically, the program maintained consistent documentation and oversight of work products, which included an up-to-date requirements management plan, system specifications, and capabilities documentation. However, while the program had established an adjudication process by which requirements were reviewed and approved, and implemented a test methodology to validate requirements prior to production installation, the function to ensure accountability was not working properly. As a result, program officials did not determine whether key requirements were validated during system integration testing prior to deploying software into production, and the software was released with unresolved issues. Program officials subsequently resolved the issues without any negative impacts.
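To illustrate the kind of bidirectional traceability check at issue in the CAC2S findings above, the following sketch flags requirements that map to no work product and work products that map to no requirement; all identifiers are hypothetical.

```python
# Hypothetical traceability data: requirement or capability -> mapped work products.
requirements_to_products = {
    "SPEC-001": ["WP-10", "WP-11"],
    "SPEC-002": [],              # unmapped specification: a traceability gap
    "CAP-001": ["WP-12"],
}
work_products = {"WP-10", "WP-11", "WP-12", "WP-13"}

# Forward check: every requirement should trace to at least one work product.
unmapped_requirements = [req for req, products in requirements_to_products.items() if not products]

# Backward check: every work product should trace back to at least one requirement.
traced_products = {wp for products in requirements_to_products.values() for wp in products}
orphan_products = sorted(work_products - traced_products)

print("Requirements with no work product:", unmapped_requirements)
print("Work products with no requirement:", orphan_products)
```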
The DEAMS program office attributed the issue of not fully validating requirements to environmental issues that were considered to be acceptable risks and, subsequently, scheduled production installation for November 2014. Nevertheless, according to the Software Engineering Institute’s Capability Maturity Model® Integration for Acquisition, requirements should be analyzed to ensure that established criteria are met so that proper control functions are in place. Pursuant to its statutory responsibility to analyze, track, and evaluate risks, OMB requires agency CIOs to provide cost, schedule, and risk information for all major IT investments on the IT Dashboard. In addition, the IT Dashboard shows the names and photographs of the CIOs who are responsible for investments in order to increase accountability for IT acquisitions. As of October 2015, 27 of the 39 DOD MAIS programs were listed on the IT Dashboard. According to DOD officials, and in accordance with OMB policy, 8 MAIS programs that have not been funded in the President’s budget submission are not reported to the Dashboard, as appropriate. In addition, 4 MAIS programs have been designated by DOD as containing national security-sensitive information and were therefore classified and not subject to being reported on the Dashboard. According to DOD CIO and AT&L officials, the 8 unfunded MAIS programs will be reported to the Dashboard after the President’s 2017 budget submission has been finalized. However, the organization responsible for supervising MAIS acquisition programs—AT&L—is not represented on the Dashboard. Instead, the Dashboard publicly shows DOD’s CIO as the responsible party, pursuant to OMB’s direction, but this representation is not accurate. AT&L has oversight responsibility for the acquisition performance of MAIS programs. In this regard, not only does AT&L supervise department acquisitions and establish its acquisition policies, but, as the milestone decision authority for MAIS programs, the Under Secretary, or his designee, has overall responsibility for each program. By contrast, the CIO is not involved in managing the performance of the MAIS programs but is responsible for submitting the rating to the Dashboard. Officials from DOD’s Office of the CIO and OMB’s Office of E-Government and Information Technology told us that they were aware of this inconsistency on the Dashboard but did not think it was a significant issue. Further, the DOD officials stated that, since the CIO is involved in the rating process, the representation of their office on the Dashboard is sufficient. Nonetheless, since only the DOD CIO is represented on the Dashboard, the public and other users may be unaware that AT&L has overall oversight responsibility for the acquisition performance of MAIS programs, minimizing the intended accountability the Dashboard is to provide. Since MAIS programs account for billions of dollars of DOD’s IT budget, it is important that the required critical change reports are timely so that Congress has the necessary information to make budgetary and oversight decisions. While the reports contained the required elements, many were not submitted in a timely manner, potentially hampering Congress’ ability to make informed decisions. Further, all three selected programs implemented IT acquisition best practices for risk management and implemented most practices for requirements management. While this is a significant achievement, improvements can be made in managing requirements.
Among other things, programs were operating without current requirements management plans or accepted risks without fully validating requirements before deployment. Managing requirements effectively is especially necessary since MAIS programs are intended to help the department sustain its key operations. Finally, there is a lack of accountability for AT&L on the IT Dashboard. While OMB intended to promote accountability by requiring that major investments show agency CIOs as responsible, it did not consider that DOD’s AT&L is the responsible party for oversight of the acquisition performance of MAIS programs. Since the Dashboard does not reflect that AT&L has such responsibility, there is decreased public accountability. To help improve the management of MAIS programs, we are making six recommendations that: The Secretary of Defense examine the MAIS critical change reporting process to identify root causes for delays and implement corrective actions for the timely delivery of critical change reports. The Secretary of Defense develop a mechanism for monitoring whether MAIS programs with late reports are restricted from obligating funds, in turn ensuring compliance with the Antideficiency Act. The Secretary of the Army direct the TMC program manager to develop a requirements management plan to document and manage its requirements process. The Secretary of the Navy direct the CAC2S program manager to identify weaknesses in the requirements traceability process and take corrective actions to manage the traceability of requirements to the respective lower-level requirements, and to periodically evaluate work products, including the requirements management plan, and update them in accordance with the requirements guidance. The Secretary of the Air Force direct the DEAMS program manager to address weaknesses in its controls for ensuring that all software requirements are tested and validated before deployment of new software releases. The Director of OMB instruct the Federal CIO to add the Under Secretary of Defense for AT&L as a responsible party to DOD’s MAIS entries on the Federal IT Dashboard website, alongside the CIO, to publicly disclose the responsible party for the acquisition performance management of MAIS programs. We provided a draft of this report to DOD and OMB. We received written comments from DOD’s Acting Principal Deputy Assistant Secretary of Defense for Acquisition, which are reprinted in appendix III. In its comments, the department concurred with all five recommendations directed to it to improve oversight, IT acquisition practices, and the tools used to manage MAIS programs. In e-mail comments, an official from OMB’s audit liaison group stated that OMB’s Office of E-Government and Information Technology does not agree with the recommendation to add AT&L to the IT Dashboard as a responsible party for MAIS programs but would work with DOD to address it. The official did not provide a rationale for this position or explain how OMB would work with DOD. Nonetheless, we continue to believe there is a lack of transparency and accountability for AT&L on the IT Dashboard. The IT Dashboard publicly shows DOD’s CIO as the responsible party, pursuant to OMB’s direction. However, AT&L has oversight responsibility for the acquisition performance of MAIS programs. In this regard, not only does AT&L supervise department acquisitions and establish its acquisition policies, but, as the milestone decision authority for MAIS programs, the Under Secretary, or his designee, has overall responsibility for each program.
The DOD CIO is not involved in managing the performance of the programs and is only responsible for submitting the rating to the Dashboard. We believe that adding AT&L to the IT Dashboard would increase public accountability and leadership transparency for the acquisition management of MAIS programs. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Air Force, the Secretary of the Army, the Secretary of the Navy, the Office of Management and Budget, and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. Should you or your staffs have any questions on information discussed in this report, please contact me at (202) 512-4456 or ChaC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The National Defense Authorization Act for Fiscal Year 2012 includes a provision that we select, assess, and report on selected Department of Defense (DOD) major automated information system (MAIS) programs annually through March 2018. In addition, Senate Report 113-176 accompanying S. 2410 includes a provision that we evaluate DOD’s implementation of statutory reporting requirements for MAIS programs experiencing a critical change. Our objectives for this report were to (1) evaluate DOD’s implementation of statutory reporting requirements for MAIS programs experiencing a critical change; (2) describe the extent to which selected MAIS programs have changed their planned cost and schedule estimates, and met performance targets; (3) assess the extent to which selected MAIS programs have used key IT acquisition best practices, including risk management; and (4) determine the extent to which MAIS programs are accurately represented on the Federal IT Dashboard. To evaluate DOD’s implementation of statutory reporting requirements for MAIS programs experiencing a critical change—a schedule delay of 1 year or more, a full life-cycle cost increase of 25 percent or more over the original estimate, or a change that will undermine the system’s ability to perform as intended—we collected and analyzed information about critical changes from December 2008 to June 2014 and their corresponding reports. We assessed whether DOD had met reporting requirements by reviewing MAIS critical change reports and supporting documentation to determine if they included required written certifications stating that: the automated information system or IT investment to be acquired is essential to the national security or to the efficient management of the DOD; there is no alternative to the system or IT investment which will provide equal or greater capability at less cost; the new estimates of the costs, schedule, and performance parameters have been determined, with the concurrence of the Director of Cost Assessment and Program Evaluation, to be reasonable; and the management structure for the program is adequate to manage and control program costs. We did not look at the quality of the assessments and estimate done. We reviewed the reports to determine if the particular elements were included. We then summarized the number of elements addressed by program. In order to determine if the critical change reports met or exceeded the 60-day requirement, we collected all MAIS reports and compared the dates within them. 
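A minimal sketch of that date comparison follows; the program names and dates are hypothetical, and only the 60-day statutory limit is taken from this report.

```python
from datetime import date

# Hypothetical inputs: (date the quarterly report was received, date the critical
# change report was delivered to Congress).
reports = {
    "Program A": (date(2013, 1, 10), date(2013, 5, 2)),
    "Program B": (date(2012, 6, 1), date(2012, 7, 20)),
}

LIMIT_DAYS = 60  # statutory deadline for submitting a critical change report

# Rank programs from greatest to least number of days elapsed.
for program, (received, delivered) in sorted(
        reports.items(), key=lambda item: (item[1][1] - item[1][0]).days, reverse=True):
    elapsed = (delivered - received).days
    status = "late" if elapsed > LIMIT_DAYS else "on time"
    print(f"{program}: {elapsed} days ({status})")

print("Programs over 100 days:", sum((d - r).days > 100 for r, d in reports.values()))
```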
We subtracted the date on which the program manager provided the quarterly report to the senior DOD official responsible for the program from the date on which the report was delivered to Congress via letter in order to determine the number of days that had elapsed. Additionally, we ranked the programs by the number of days (from greatest to least) it took to deliver the critical change reports. We noted how many programs took more than 100 days and more than 200 days to deliver the reports. We also interviewed DOD officials to obtain their views on the length of time it takes to deliver a critical change report to Congress. We interviewed DOD officials responsible for the quality of the data and assessed their procedures for maintaining its accuracy and completeness. In addition, we examined the data for outliers or other extraordinary items. Based on these procedures, we concluded that these data are sufficiently reliable for our purposes. Additionally, in order to determine whether DOD is tracking the obligation of funds during a program’s critical change, we interviewed DOD officials to determine what processes and tools they had in place to ensure that funds were not being obligated. To address the second and third objectives, we used DOD’s official list of 39 MAIS programs, as of February 25, 2015, to establish the basis for selecting the MAIS programs that were used to assess those objectives. We used the criteria below to select three MAIS programs. Any programs that were used in the prior two reviews should be excluded. Any programs that are fully deployed or cancelled should be eliminated from consideration. The program should not be new to the MAIS list; otherwise, there may not be sufficient acquisition activity and documentation to evaluate. The program must have a baseline in order to have a reference point for evaluating cost and schedule performance characteristics. The program cannot be a National Security Agency program. We included one program from each military department—Army, Air Force, and Navy—in order to diversify the portfolio. Thus, we excluded any DOD-wide programs. We selected programs that had the lowest relative ratings on the Federal IT Dashboard as of April 2015. We preferred that the program be complex (e.g., integration across domains, global, critical to battle operations), rather than an upgrade. We preferred to select programs with funding profiles that are significant when compared to the rest of the portfolio. We considered issues identified from credible sources of information, such as Defense Acquisition Management Information Retrieval online resources and IT Dashboard ratings. We filtered the original list of MAIS programs using the criteria above. Based on this filtering, we chose the following systems: the Air Force’s Defense Enterprise Accounting and Management System-Increment 1 (DEAMS Increment 1), the Army’s Tactical Mission Command (TMC), and the Navy’s Common Aviation Command and Control System Increment 1 (CAC2S Increment 1). To address the second objective, we analyzed and compared each selected program’s first acquisition program baseline cost estimate to the latest life-cycle estimate to determine the extent to which planned program costs had changed. Similarly, to determine the extent to which these programs changed their planned schedule estimates, we compared each program’s first acquisition program baseline schedule to the latest schedule.
We relied on the thresholds established by statute to describe the amount of any deviation (i.e., significant or critical) that each program’s latest life-cycle cost and schedule estimates experienced from the first acquisition program baseline. To determine whether the selected programs met their performance targets, we compared program and system performance targets against actual performance data in test reports and program management briefings. We reviewed the results of operational assessments and program evaluations conducted on the systems. We also reviewed additional information on each program’s cost, schedule, and performance, including program documentation, such as DOD’s MAIS annual and quarterly reports; information from the Office of Management and Budget’s (OMB) IT Dashboard; acquisition program baselines; monthly status briefings; system test reports; and our prior reports. We also interviewed program officials from each of the selected MAIS programs to obtain additional information on cost, schedule, and performance. We provided our assessments to the program management offices of each selected program for comment. We aggregated and summarized the results of these analyses across the programs, as well as developed individual profiles for each program (see appendix II). To address the third objective, we analyzed each selected program’s IT acquisition documentation and compared it to key requirements management and risk management best practices—including the Software Engineering Institute’s Capability Maturity Model® Integration for Acquisition (CMMI-ACQ) practices—to determine the extent to which the programs were implementing these practices. In particular, the key requirements management best practices we reviewed were: develop an understanding with the requirements providers on the meaning of the requirements, obtain commitment to requirements from project participants, manage changes to requirements as they evolve during the project, maintain bidirectional traceability among requirements and work products, and ensure that project plans and work products remain aligned with requirements. Specifically, we analyzed program requirements documentation, including requirements management plans, requirements traceability matrices, requirements change forms, technical performance assessments, and requirements board meeting minutes. Additionally, we interviewed program officials to obtain additional information about their requirements management practices. We also reviewed the following key risk management best practices: determine risk sources and categories; define parameters used to analyze and categorize risks and to control the risk management effort; establish and maintain the strategy to be used for risk management; identify and document risks; evaluate and categorize each identified risk using defined risk categories and parameters, and determine its relative priority; develop a risk mitigation plan in accordance with the risk management strategy; and monitor the status of each risk periodically and implement the risk mitigation plan as appropriate. Specifically, we analyzed program risk documentation, including monthly risk logs and reports, risk-level assignments, risk management plans, risk mitigation plans, and risk board meeting minutes. Additionally, we interviewed program officials to obtain additional information about their risks and risk management practices. To address the fourth objective, we used DOD’s official list of MAIS programs, as of February 25, 2015.
These programs were used as the basis for determining whether programs were reported to the Federal IT Dashboard. To do so, we exported the IT portfolio program data reported to the Federal IT Dashboard by DOD in fiscal year 2015 and compared it to the 39 programs on DOD's official list of MAIS programs. For those programs that were found not to be reported on the Dashboard, we met with agency officials from the Office of the Chief Information Officer (OCIO) and the Under Secretary of Defense (Acquisition, Technology, and Logistics) (AT&L) to determine the reasons for not reporting. We also interviewed OCIO officials to obtain information on the processes used by the OCIO when reviewing programs for IT Dashboard updates. Further, we interviewed officials at OMB to obtain their views on whether representing the CIO as the sole party responsible for programs reported to the IT Dashboard was accurate. Specifically, we discussed DOD's unique structure, in which AT&L is responsible for the acquisition performance of MAIS programs but is a separate organization from the OCIO.

We conducted this performance audit from April 2015 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This section contains profiles of the three selected major automated information system (MAIS) programs for which we determined whether they had changed their planned cost and schedule estimates and met performance measures. Each profile presents data on the program's purpose and status, its latest cost and schedule estimates compared to the first acquisition program baseline (where established), as well as system performance data. The first page of each two-page profile contains a description of the program's purpose and a figure that provides a comparison of the program's first acquisition program baseline to the program's latest schedule. The years depicted on the figure represent calendar years, and the milestones represent the program's best estimates of dates for those milestones. The program's start represents the date that program officials reported that they first started work on the program. The first page also provides (1) essential program details, such as the name of the prime contractor, the total number of active contractors—which includes the prime contractor—and any other contractors (and in some cases subcontractors) supporting the program; (2) program costs (in then-year dollars), comparing the program's latest life-cycle cost estimate (separated into acquisition and operations and maintenance costs) to its first acquisition program baseline (subsequent acquisition program baselines that may have been established are not identified); (3) locations to which the system will be deployed; and (4) a summary of the cost, schedule, and performance of each program, which is further discussed on the second page of the profile.
The symbols, denoted by arrows or filled circles, included in the summary box on the first page of each profile and in the headings on the second page represent whether a program's cost estimate had increased (^), decreased (V), or stayed within (i.e., did not exceed the threshold of) the planned cost estimate (●), and whether the program's schedule estimate had slipped (>), been accelerated to meet milestones earlier than planned (<), or stayed within (i.e., did not exceed the threshold of) the planned schedule estimate (●) for meeting milestones. The second page of each profile provides detailed information on each program's status, costs, schedule, and performance. In the status section, we discuss recent and upcoming milestones and events for each program. In the cost section, we identify the extent to which the program's life-cycle cost estimate has changed from its first acquisition program baseline, as well as the causes for any changes identified. In the schedule section, we discuss the extent to which the program's schedule has changed from its first acquisition program baseline, and the causes for any schedule changes identified. In the performance section, we identify the extent to which each program has met its established measures, as well as discuss the results of system performance tests. These performance ratings represent a point-in-time assessment as reported by the program. System performance targets were rated as "met" when (1) system tests were passed with no deficiencies or limitations, (2) the program fully met all of its key performance parameters, or (3) a program had addressed all deficiencies or limitations that were identified during system tests. System performance was rated as "not fully met" when a program either (1) did not fully pass system testing and was still in the process of addressing the deficiencies or limitations identified during system testing or (2) did not pass system testing and subsequently removed the problematic functionality from the system in order to pass subsequent system tests, instead of fixing the problematic functionality and keeping it in the planned release of the system.

Army Tactical Mission Command (TMC)

TMC is a suite of products—comprising hardware and software equipment and elements—that is intended to provide Army commanders and their staff with mission command capabilities, such as real-time situational awareness and a user-defined common operational picture. TMC products are fielded worldwide and are intended to support decision-making, planning, rehearsal, and execution management. TMC is now engaged in transitioning to the web-based Command Post Computing Environment. One key element of this new environment—known as Tactical Applications—is aimed at minimizing administrative burdens on the user and simplifying the overall Mission Command collaborative experience. All of the products included in TMC are post-development and in production. The program continues to field equipment and perform a technical refresh of the hardware and software in the field. The program is also working to resolve sustainment metric issues and to update its life-cycle sustainment plan.

Exceeded Planned Cost Estimate (^)

TMC's planned total life-cycle cost estimate has increased by 19 percent from the program's first acquisition program baseline estimate of approximately $1.97 billion. Specifically, as of January 2016, the life-cycle cost estimate was approximately $2.34 billion.
Program officials reported that the increased costs were attributed to a research, development, test, and evaluation cost estimate breach that was reported to Congress. The TMC program's estimated development cost increased by 45 percent over the original estimate due to program scope changes derived from the realignment of Command Post of the Future as a foundation for Mission Command Collapse, the integration of Personalized Assistant that Learns, and the incorporation of future force requirements.

Stayed within Planned Schedule Estimate (●)

As of January 2016, the program had experienced a 3-month slippage in its full deployment date compared to its first acquisition program baseline of September 2018. The slippage was within the pre-established threshold allowance to account for minor shifts in program schedule. Program officials stated that, although the Command Post of the Future product is 95 percent fielded and is on schedule to reach full deployment by December 2018, continued support of the Command Post Computing Environment is needed beyond fiscal year 2019. Program officials considered the slippage to be low risk and expected the program to continue to operate within the planned schedule estimate. As of January 2016, the TMC program had met all three of its key performance parameters, which related to net-centric military operations, disseminating orders with future Army and Joint C2 systems, and displaying unified information on subjects such as friendly and enemy forces.

Navy Common Aviation Command and Control System (CAC2S) Increment 1

CAC2S is an integrated and coordinated modernization effort for the equipment of the Marine Air Command and Control System and is intended to provide enhanced capability for three defense centers to support aviation employment in joint, combined, and coalition operations. CAC2S provides the tactical situational display, information management, sensor and data link interface, and operational facilities for planning and execution of Marine Aviation missions within the Marine Air Ground Task Force. It is intended to replace existing aviation command and control equipment from 12 legacy systems. CAC2S Increment 1 will eliminate the Air Command and Control systems and will provide capability for aviation combat direction and air defense functions through a single networked system. CAC2S Increment 1, which comprises two phases, is currently post-milestone C, and the phase 2 system is now in development. CAC2S's current work consists of developmental testing and the production of the limited deployment units in preparation for the March 2016 Initial Operational Test and Evaluation. To help achieve the goal of a successful Initial Operational Test and Evaluation, a phase 2 milestone C decision was authorized in February 2015 for CAC2S to procure four limited deployment units. As of October 2015, CAC2S had delivered all four limited deployment units to support current developmental testing. Production of the limited deployment units is on schedule, and planning activities are underway. Regarding the developmental testing, program officials anticipated favorable test results after subsequent software enhancements were made to address software concerns identified in prior developmental testing.

Exceeded Planned Cost Estimate (^)

As of October 2015, CAC2S's life-cycle cost estimate was $2 billion, which was about a 477 percent increase from its first acquisition program baseline estimate of $347 million established in August 2000.
As previously reported, factors that contributed to the cost increase were early challenges in estimating costs due to program scope growth and restructuring. According to program documentation, operations and support expenditures of approximately $1.6 billion for the production of milestone C had been carried over into the program's total life-cycle cost estimate. However, since our previous report, program documentation indicated that improvements to the cost position were being made. Specifically, the milestone C service cost position, dated February 2015, produced a cost avoidance of $54.4 million compared to its 2010 cost assessment. As of October 2015, the program's latest life-cycle estimate relative to its November 2010 production acquisition program baseline cost estimate had decreased about 19 percent. Program officials attributed this decrease to the program's embrace of DOD's Better Buying Power initiatives, which benefitted from competitive market forces that drove down costs.

Exceeded Planned Schedule Estimate (>)

As of October 2015, CAC2S Increment 1's estimated full deployment date was March 2022, which represented a 13-year and 9-month schedule slip from the program's first acquisition program baseline schedule estimate. As previously reported, factors that contributed to the schedule delay included the addition of new requirements and program restructuring. Program officials stated that, subsequent to our prior report, the program has been executing in accordance with its approved schedule. As of October 2015, the milestone C phase 2 decision had been delayed by 6 months from the program's production acquisition program baseline schedule but, as stated above, the program achieved the milestone. Program officials attributed this delay, in part, to administrative factors, which included the review and signature approval process. CAC2S successfully achieved milestone C phase 2 approval in February 2015, but the acquisition decision memorandum was not signed until March 2015. As of October 2015, CAC2S program documentation reported that it was meeting both of its key performance parameters related to net-ready and data fusion. Program officials stated that, during testing of the program's key performance parameters, its net-ready and data fusion performance targets were both met, while many attributes for the data fusion key performance parameter were consistently above the threshold for being met.

Air Force Defense Enterprise Accounting and Management System (DEAMS) Increment 1

The DEAMS Increment 1 program is intended to provide the Air Force with the entire spectrum of financial management capabilities, including collections; commitments and obligations; cost accounting; general ledger; funds control; receipts and acceptance; accounts payable and disbursement; billing; and financial reporting. DEAMS is also intended to be a key component of DOD's solution for achieving fully auditable financial statements by September 30, 2017, as required by the National Defense Authorization Act for Fiscal Year 2010. As of November 2015, the DEAMS program was working to achieve a full deployment decision by February 2016. In August 2015, the initial operational test and evaluation report, prepared by the Air Force Operational Test and Evaluation Center, indicated a number of findings requiring remediation prior to the February 2016 full deployment decision.
Nevertheless, DEAMS was granted a limited deployment decision, but the program experienced a significant change as a result of breaching the full deployment decision threshold for a timing issue only. DEAMS's current work efforts consist of deployment to new users and the remaining 35 sites, capability development for deployment, training new users, and resolving initial operational test and evaluation findings.

Exceeded Planned Cost Estimate (^)

As of October 2015, DEAMS's latest life-cycle cost estimate was about $1.56 billion, which was about a 9 percent increase from its first acquisition program baseline estimate of approximately $1.43 billion, established in February 2012. Program officials attributed this increase, in part, to program scope growth due to the addition of requirements from increment 2 and the addition of a second Oracle software upgrade projected for 2021.

Exceeded Planned Schedule Estimate (>)

DEAMS experienced a 6-month slippage in its milestone C but successfully attained milestone C approval within the established threshold. Program officials did not provide a rationale for the factors that contributed to this delay but maintained that the program operated within the threshold requirements. DEAMS also experienced a 1-year slippage in its full deployment decision date—currently scheduled for February 2016. Program officials attributed this slippage to findings identified in DEAMS's initial operational test and evaluation report.

Did Not Fully Meet System Performance Targets

As of October 2015, DEAMS program officials reported that the program did not meet all of its nine key performance parameters. Specifically, DEAMS did not meet five key performance parameters: Balance with Treasury, Accurate Balance of Available Funds, Timely Report, Period-End Processing, and Net-Ready. For example, the initial operational test and evaluation report, discussed above, identified system performance issues, which included unstable change management, transaction backlogs, and ineffective reporting tools. Subsequently, the Air Force Operational Test and Evaluation Center provided 29 recommendations for the Air Force to implement to support the successful fielding of DEAMS Increment 1, 17 of which were documented as completed, while corrective actions for the remaining 12 were still underway. The program is expected to demonstrate improvement before it is authorized for deployment to all users.

In addition to the contact named above, the following staff also made key contributions to this report: Eric Winter, Assistant Director; Ronalynn (Lynn) Espedido; Corey Evans; Rebecca Eyler; Franklin Jackson; Kate Nielsen; John Ortiz; Kathleen Sharkey; and Jeanne Sung.

The National Defense Authorization Act for Fiscal Year 2012 includes a provision for GAO to select, assess, and report on DOD MAIS programs annually through March 2018. MAIS programs are intended to help the department sustain its key operations. This report: (1) evaluates DOD's implementation of statutory reporting requirements for MAIS programs experiencing a critical change; (2) describes the extent to which selected MAIS programs have changed their planned cost and schedule estimates, and met performance targets; (3) assesses the extent to which selected MAIS programs have used key IT acquisition best practices, including requirements and risk management; and (4) determines the extent to which MAIS programs are represented on the Dashboard. GAO compared information on programs with a critical change to the reporting requirements.
GAO selected three programs based on factors such as representation from each military service (Air Force, Army, and Navy); identified changes to cost, schedule, and performance; and assessed the programs against selected best practices. GAO traced the programs to the Dashboard and reviewed relevant processes.

All 18 major automated information system (MAIS) programs that experienced a critical change to program cost, schedule, or system performance targets submitted complete reports to Congress that contained all four statutory elements, but 16 programs did not meet the requirement to report to Congress within 60 days of the program manager's submission to the senior Department of Defense (DOD) official that led to the critical change determination. Of the 16 critical change reports that exceeded the 60-day reporting requirement, 10 of the programs took over 100 days. Officials said that 60 days is too short to perform a program evaluation. Since the reports were not always timely, Congress may not have the necessary information when it is needed to make decisions. Finally, DOD did not demonstrate that it had an internal control to ensure that MAIS programs not in compliance with reporting requirements were restricted from obligating funds on major contracts as required by law.

All three MAIS programs GAO selected to review experienced changes in their cost and schedule estimates, and one program did not fully meet its technical performance targets (see table). [Table omitted in this text version: a summary of the three selected programs' cost, schedule, and performance changes. Source: GAO analysis of data provided by DOD officials. | GAO-16-336. Table note a: Delay was attributed to a major change in project scope and restructuring of the program.]

The three selected programs implemented all seven IT acquisition best practices for risk management, and most of the best practices were implemented for requirements management: the Army and Navy implemented three of five best practices and the Air Force implemented four of five best practices. For example, the Army program did not adequately manage requirements changes and ensure that deliverables were in alignment with requirements. Until the programs fully implement best practices for requirements management, management of development efforts will likely be impaired.

As of October 2015, all appropriate programs were represented on the Federal IT Dashboard (Dashboard) as required by the Office of Management and Budget (OMB); however, the organization responsible for performance of MAIS programs was not provided. Specifically, DOD's Chief Information Officer is shown as the responsible party because OMB requires this, but the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L) has overall responsibility for the MAIS programs. Therefore, users of the Dashboard are unaware that AT&L is the responsible organization and, thus, public accountability of the MAIS programs is decreased.

GAO recommends, among other things, that DOD examine the critical change reporting process and implement corrections to improve the reports' timeliness, address weaknesses with requirements management, and add AT&L as a responsible organization for MAIS programs on the Dashboard. DOD concurred with all recommendations. OMB did not concur, but GAO continues to believe that improved transparency is needed.
In its September 2015 report, DOD addressed the committee direction to identify root causes regarding the improper documentation and packaging of HAZMAT shipments and any needed corrective actions, but it is too soon to determine the effectiveness of the department's efforts. DOD identified the root causes of the improper documentation and packaging of HAZMAT shipments and developed a plan of action with milestones to address them. In its September 2015 report, DOD used an approach involving stakeholders from throughout DOD's transportation system. According to officials, the Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration coordinated the stakeholders' efforts. According to its officials, the office established a working group comprising stakeholder representatives from the Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration, the Defense Logistics Agency, TRANSCOM, the Air Mobility Command, the Army's Surface Deployment Distribution Command, and the General Services Administration. As part of this working group, these stakeholders analyzed HAZMAT transportation data from, among other DOD transportation sources, the Global Air Transportation Execution System, the Defense Logistics Agency, and the Web Supply Discrepancy Reports, according to officials. As a result of this approach, DOD identified in its report contract- and documentation-related issues and human error as root causes of improper documentation and packaging of HAZMAT. For example, air HAZMAT shipment documentation and packaging discrepancies resulted from HAZMAT shipments arriving at an aerial port without certain required documentation for air transportation. Specifically, according to the report, missing documentation included confirmation of air clearance, the Advance Transportation Control and Movement Document, and Shipper's Declaration of Dangerous Goods. In addition, DOD noted in its report that contract- and process-related issues contribute to HAZMAT shipments arriving at an aerial port not documented or packaged correctly for air transportation. According to the report, for many shipments delivered directly from the vendors, contracts did not clearly specify for vendors when they must prepare HAZMAT for air shipment, to include preparing required documentation, requesting air clearance, and packaging in accordance with the applicable rules for that mode of transportation. In addition, some contracts did not instruct vendors to use the Defense Logistics Agency's Vendor Shipping Module website, which could have assisted them in requesting air clearance, submitting advance transportation movement and control data, and printing military shipment labels. Further addressing the committee direction, DOD in its September 2015 report included a plan of action and milestones to address the root causes identified and specified over 40 corrective actions. Our analysis found that these corrective actions generally align with the root causes that DOD identified.
Examples follow of corrective actions intended to reduce the percentage of HAZMAT shipments with missing, incomplete, or inaccurate documentation arriving at aerial ports and to improve the reporting of HAZMAT shipment documentation and packaging discrepancies: reducing the percentage of HAZMAT shipments from government shippers arriving at aerial ports with missing documentation by notifying shippers of the requirement to prepare HAZMAT for shipment to the final destination, to include preparing required documentation, requesting air clearance, and repacking in accordance with applicable rules; reducing the percentage of HAZMAT shipments arriving at aerial ports with incorrect or incomplete documentation by investigating and recommending alternatives for reducing human error in HAZMAT documentation; and improving the reporting of HAZMAT shipment documentation and packaging discrepancies by adding specific data elements for HAZMAT shipments in a web-based supply discrepancy reporting system, such as WebSDR—an automated process used to report shipping or packaging discrepancies and to provide appropriate responses and resolution.

During the course of our review, we found that, according to DOD officials, as of February 2017 DOD had modified and added new corrective actions based on feedback from the working group that developed the September 2015 report and had conducted further analysis of the documentation and packaging issues addressed there. According to DOD officials, the working group of stakeholders involved in this effort meets every quarter to elicit input from its members and update the plan of action and milestones as necessary to reflect actions taken in the field. For example, DOD officials reported that the plan of action and milestones has been updated in response to Joint Base McGuire-Dix-Lakehurst implementing the use of WebSDR in September 2016 and the Defense Logistics Agency implementing a contract provision in January 2017 to help ensure proper documentation and packaging of shipments arriving at aerial ports.

As DOD implements its corrective actions, it continues to face issues with improper documentation and packaging causing delayed cargo, according to DOD officials. According to the Global Air Transportation Execution System data that we reviewed for HAZMAT shipments at the five major aerial ports and according to DOD officials whom we interviewed during the course of our review, there continued to be a number of HAZMAT shipments that were not documented and packaged properly. According to DOD officials, these documentation and packaging discrepancies can result in shipment delays that can be as short as a few hours or last several days, depending on the nature of the issue. Based on DOD's Global Air Transportation Execution System database and according to transportation officials, of the 62,703 total shipments of HAZMAT received from October 1, 2013, through March 9, 2017, 34,040 shipments or 54.2 percent were delayed. Of the 34,040 delayed shipments, 19,858 or 58.3 percent were delayed primarily because they were not in compliance with the Defense Transportation Regulation requirements for documentation and packaging, and 14,183 shipments or 41.6 percent were delayed for security-clearance-related reasons (i.e., for shipments supporting foreign assistance missions) that are out of the control of the shipper or DOD.
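The shares cited above follow from simple arithmetic on the reported totals. The short Python sketch below is only an illustration of that calculation, using the Global Air Transportation Execution System figures quoted in the preceding paragraph; the variable names are ours, not DOD's.

```python
# Illustrative arithmetic only, using the totals cited above from DOD's Global
# Air Transportation Execution System data (Oct. 1, 2013, through Mar. 9, 2017).
total_hazmat_shipments = 62703
delayed_shipments = 34040
delayed_documentation_packaging = 19858
delayed_security_clearance = 14183

print(f"Delayed share of all shipments: {delayed_shipments / total_hazmat_shipments:.1%}")
print(f"Delays tied to documentation/packaging: {delayed_documentation_packaging / delayed_shipments:.1%}")
print(f"Delays tied to security-clearance reasons: {delayed_security_clearance / delayed_shipments:.1%}")
# Output: roughly 54.3%, 58.3%, and 41.7%; the report's 54.2 and 41.6 percent
# figures are consistent with truncating rather than rounding these values.
```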
Following are examples of HAZMAT shipments that we identified in 2016 during the course of our review that were delayed for documentation and packaging issues: Scissor lifts were delayed in transport to Bagram Air Base in Afghanistan because, according to DOD transportation officials, they arrived without shipping documentation, air certifications, and shoring (packaging). According to the Defense Transportation Regulation, the transportation officer must ensure that HAZMAT is properly marked, packaged, and labeled for air transportation. However, the shipping papers for the scissor lifts did not identify Bagram Air Base as the destination. In addition, there was ineffective coordination between the vendor, shipper, and contracting officer (who is located at Bagram Air Base), according to the DOD transportation officials. Cylinders filled with refrigerant gas were delayed because, according to DOD officials, the Shipper's Declaration for Dangerous Goods was missing or incomplete, and the cylinders were not properly packaged for shipment. According to the Defense Transportation Regulation, the transportation officer must ensure that HAZMAT is properly marked, packaged, and labeled for air transportation. A skid of lithium-ion batteries was delayed because, according to DOD officials, the Shipper's Declaration for Dangerous Goods was missing the correct gross weight. The Defense Transportation Regulation requires that when transporting HAZMAT by air, the shipper must accurately reflect the quantity of the material to be transported.

DOD officials noted that it is too early to evaluate the effectiveness of the corrective actions identified in the report's plan of action and milestones to mitigate delays in transporting HAZMAT. According to DOD officials, most of the corrective actions were to begin in late fiscal year 2016, and the key performance measures for assessing those and the remaining actions will not be completed until late fiscal year 2017. The officials added that it will take time to accumulate data and conduct subsequent analyses to determine the efficacy of actions that have already been taken or are currently in progress. We agree with DOD that it is too early to evaluate and determine the efficacy of the corrective actions. DOD officials also told us that they recognize that root causes are long-standing reasons for transportation delays, that several studies have documented the delayed cargo issues at aerial ports of embarkation, and that the issue of delayed cargo will never go away completely. However, the officials noted that the corrective actions will help the warfighter receive needed equipment without delay. By continuing to identify and address the root causes for these transportation delays, the department will be able to continue to develop and implement corrective actions to mitigate delays of its HAZMAT shipments in support of the warfighter.

In its September 2015 report, DOD addressed the committee direction to report on the extent to which it used TPS for HAZMAT shipments that could have been safely and securely transported using less costly alternatives, and identified two corrective actions. However, the report did not fully disclose the assumptions and limitations associated with its analysis.
In its report, DOD concluded that between June 1, 2013, and July 31, 2014, it had used TPS motor carriers to transport 518 of 31,373 HAZMAT shipments that could have been transported using less costly alternatives and that doing so had resulted in a total unnecessary cost of approximately $126,000. DOD reported that in the majority of these instances transportation officers had either made errors or made conscious decisions that two drivers were necessary to meet tight delivery time frames; in other cases, when transportation officers could not obtain complete cargo documentation to determine whether a TPS designation was needed, they had erred on the side of safety and opted to use TPS carriers to transport the shipments. DOD reported that it had calculated the approximately $126,000 in unnecessary costs by accounting for the "accessorial charges" for TPS carriers—the additional costs incurred for providing constant surveillance, dual-driver protection, protective security service, and satellite motor surveillance. DOD concluded that it had infrequently used TPS carriers unnecessarily and that the additional costs incurred for doing so were relatively low.

DOD developed and conducted a one-time analysis to address the committee direction. According to DOD's Transportation Policy officials, DOD developed this analysis in order to respond to the congressional direction since the department lacked an established procedure to extract and match the tens of thousands of records of HAZMAT shipments transported by motor, rail, or air carriers using TPS each year and calculate the costs for doing so. DOD's analysis included data for motor and air TPS HAZMAT shipments transported from June 1, 2013, through July 31, 2014, and data for rail HAZMAT shipments transported through TPS from January 1, 2014, through June 30, 2014. The Transportation Command's Surface Deployment and Distribution Command, which manages the TPS program for DOD, provided these data for DOD's analysis. Next, DOD used TPS guidance in the Defense Transportation Regulation as criteria to identify scenarios within the data set where DOD shipping activities may have ordered TPS for shipments when not required. An item's Controlled Item Inventory Code, located in the Federal Logistics Information System, determines whether an item requires TPS. According to officials, because the Surface Deployment and Distribution Command TPS data lack cost information, DOD matched the bill of lading records in the data system to shipment entries in DOD's Third Party Payment System to determine the cost of TPS shipments that could have been safely and securely transported using less costly means. DOD was able to match most of the bill of lading records in the two systems. According to DOD, the shipments that it could not match were likely never invoiced by the transportation provider.

In reviewing DOD's September 2015 report and supporting documentation and in interviewing agency officials, we identified assumptions and limitations that the report did not disclose in its findings and conclusions. Absent these disclosures, decision makers cannot be assured this report is conveying quality information. Generally accepted research standards and Standards for Internal Control in the Federal Government call, among other things, for a study methodology to explicitly identify any assumptions and limitations and for any data obtained to be processed into quality information.
Following are examples of assumptions and limitations the report did not include in its description of its analysis:

The number of shipments requiring TPS varied depending on the assumptions used: For example, during our review we requested that Surface Deployment and Distribution Command officials provide TPS shipment data for the same time frame DOD used in the September 2015 report (June 1, 2013 through July 31, 2014). The data that they provided showed that there were potentially 1,097 TPS motor shipments that may not have required TPS, compared with the 518 TPS motor shipments referred to in the report. Surface Deployment and Distribution Command officials said the difference between the two totals could be a result of how DOD applied certain Controlled Item Inventory Codes when it determined whether an item required TPS to create the data set of TPS shipments that was subsequently analyzed, and how they counted non-TPS shipments that were transported with TPS shipments.

DOD used inconsistent dates in its collection of shipment data in its analysis: The DOD report shows that DOD analyzed HAZMAT shipment data from the Surface Deployment and Distribution Command for the period June 1, 2013, through July 31, 2014. We found, however, that this 13-month time frame included data for only motor and air TPS shipments. For rail TPS shipments, DOD used data from a 6-month time frame (January 1, 2014, through June 30, 2014). DOD officials explained that the time frames used for their analysis were driven by which data were available at the time of their request. We believe disclosing this limitation would have provided decision makers with important information about the reliability of the reported results.

DOD omitted air and rail TPS HAZMAT shipments and their costs: DOD omitted 10 HAZMAT shipments transported by rail and 4 HAZMAT shipments transported by air from the shipments analyzed in its report, which together total about $4,525 in unnecessary costs. These 14 shipments were omitted because, in DOD's estimation, their number and their costs were low and therefore statistically insignificant. Given that DOD concluded that TPS was used only infrequently when not required, we find it inconsistent that the additional 14 rail and air TPS shipments and their costs were omitted from the report, an omission that led to the report conveying incomplete results. We believe that, in accordance with our generally accepted research standards, DOD should have fully explained its rationale for such an omission in its discussion of its methodology.

DOD did not disclose the use of average cost estimates: According to DOD, on 10 of the 518 invoices for TPS motor carrier shipments, DOD found the TPS charges exceeded the total amount paid to the carrier. Including those charges would significantly skew the results of the analysis, according to DOD. For those 10 invoices, DOD applied an average cost to estimate the amounts paid for TPS. DOD estimated average costs using three different methods: average TPS cost per shipment, average TPS cost per mile, and average TPS cost as a percentage of the total. DOD chose to use the average cost per mile because it produced the highest cost estimate for the 10 invoices, according to DOD documents. We believe that disclosing the use of TPS shipment cost averages in DOD's cost calculation for these 10 HAZMAT shipments would inform decision makers about the potential effect of this approach on the reported results.
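To illustrate why the choice among these three averaging methods matters, the Python sketch below applies each method to a few invented invoice records. The amounts, mileages, and field names are hypothetical placeholders; DOD's underlying invoice data were not published, so this is not a reconstruction of its calculation.

```python
# Hypothetical illustration only: invoice amounts, mileages, and TPS charges are
# invented, not DOD data. The sketch shows how the three averaging methods DOD
# described can yield different estimates for an invoice whose recorded TPS
# charge exceeded the total amount paid to the carrier.
invoices = [
    {"total_paid": 1200.0, "tps_charge": 300.0, "miles": 400},
    {"total_paid": 2500.0, "tps_charge": 450.0, "miles": 900},
    {"total_paid": 1800.0, "tps_charge": 350.0, "miles": 600},
]

avg_tps_per_shipment = sum(i["tps_charge"] for i in invoices) / len(invoices)
avg_tps_per_mile = sum(i["tps_charge"] for i in invoices) / sum(i["miles"] for i in invoices)
avg_tps_share_of_total = sum(i["tps_charge"] for i in invoices) / sum(i["total_paid"] for i in invoices)

# A problem invoice: its recorded TPS charge is unusable, so the TPS cost is estimated.
problem_invoice = {"total_paid": 1000.0, "miles": 1500}

estimates = {
    "average TPS cost per shipment": avg_tps_per_shipment,
    "average TPS cost per mile": avg_tps_per_mile * problem_invoice["miles"],
    "average TPS cost as a share of total paid": avg_tps_share_of_total * problem_invoice["total_paid"],
}
for method, value in estimates.items():
    print(f"{method}: ${value:,.2f}")
# In this invented example the per-mile method yields the highest estimate
# (about $868 versus $367 and $200), paralleling DOD's statement that the
# per-mile average produced the highest estimate for its 10 invoices.
```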
DOD officials acknowledged that DOD's September 2015 report lacked details regarding the assumptions and limitations made in DOD's analysis. However, the officials noted that, because the number of improper TPS shipments is relatively low and the range of the potential cost of these shipments is also relatively low, these assumptions and limitations did not affect the department's general conclusion that DOD had infrequently used TPS unnecessarily to transport HAZMAT and that the additional cost incurred was relatively small. Based on our review of the data DOD provided in support of its analysis, we have reasonable assurance that DOD was correct in its general conclusion that DOD had infrequently used TPS unnecessarily during the period studied and that the additional cost associated with these shipments was relatively small.

In addition, DOD identified corrective actions to preclude the future unnecessary use of TPS. Specifically, as part of its plan of action and milestones, DOD plans to publish advisories reiterating TPS usage criteria to transportation officers. An advisory issued in November 2015 informed transportation officers that they should request TPS for shipments only if required by the Defense Transportation Regulation. To monitor compliance with the customer advisories and the Defense Transportation Regulation, DOD plans to conduct spot checks of TPS HAZMAT shipment data every 2 years beginning in November 2017, according to DOD officials. According to the February 2017 DOD plan of action and milestones, DOD also plans to identify and evaluate policy options to ensure proper coding of HAZMAT items in the Federal Logistics Information System by September 30, 2018. We anticipate that these actions, if properly implemented, will help ensure that TPS is only used when necessary.

We provided a draft of this report to DOD, and DOD responded that it would not be providing comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Director of the Defense Logistics Agency, the Commander of the U.S. Transportation Command, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in the appendix. In addition to the contact named above: James A. Reynolds, Assistant Director; Colin Chambers; Pat Donahue; Alfonso Garcia; Alexandra Gonzalez; Mae Jones; Ruben Montes de Oca; and Michael Shaughnessy made key contributions to this report.

Commercial carriers transport over 3 billion tons of HAZMAT in commerce in the United States each year, transporting an estimated 1 million HAZMAT shipments per day. DOD relies heavily on commercial carriers to transport HAZMAT, using them to transport about 90 percent of the department's HAZMAT shipments. DOD uses the TPS program to transport certain sensitive materials, including ammunition and classified materials, that follow more stringent safety and security standards.
House Report 113-446 accompanying a bill for the National Defense Authorization Act for Fiscal Year 2015 directed DOD to report on the root causes of improper documentation and packaging of HAZMAT; the extent to which TPS is used for materials that could be transported using less costly means; and any needed corrective actions and a plan, with milestones, to address them. The House report also included a provision for GAO to review DOD's report. DOD issued its report in September 2015. This report examines the extent to which DOD (1) identified the root causes of improper documentation and packaging of HAZMAT shipments and any corrective actions taken since the report's issuance and (2) reported on the department's use of TPS carriers to transport shipments that could have been safely and securely transported using less costly alternatives. GAO examined DOD's HAZMAT data and found the data it examined sufficiently reliable for the purposes of the review. DOD reviewed a draft of this report and did not have any comments.

The Department of Defense (DOD) has addressed the committee direction to identify the root causes regarding the improper documentation and packaging of hazardous materials (HAZMAT) shipments and any needed corrective actions, but it is too soon to evaluate the effectiveness of these efforts. In its September 2015 report, DOD identified contract- and documentation-related issues and human error as the root causes; several corrective actions—such as improved reporting—that aligned with these root causes; and milestones and DOD stakeholders to implement the corrective actions. In addition to aligning with the DOD-identified root causes, the corrective actions also align with the root causes of improper documentation and packaging that GAO identified in its May 2014 report. However, it is too early to determine the efficacy of these corrective actions. According to DOD officials, most of the corrective actions were to begin in late fiscal year 2016, and the key performance measures for assessing those and the remaining actions will not be fully completed until late fiscal year 2017.

DOD has addressed the committee direction to report on the extent to which the department had used Transportation Protective Services (TPS) for HAZMAT shipments that could have been safely and securely transported using less costly alternatives, but did not include in its September 2015 report detail on the assumptions or limitations underpinning its analysis. In its analysis, conducted specifically to address the committee direction, DOD concluded that it had used TPS infrequently when not required between June 1, 2013, and July 31, 2014. Specifically, DOD reported it used TPS to transport 518 of 31,373 HAZMAT shipments that it could have transported using less costly alternatives. This resulted in a total unnecessary cost of approximately $126,000, according to DOD. While GAO found DOD did not include detail on the assumptions or limitations underpinning its analysis, GAO concurs with the report's general conclusion that DOD had infrequently used TPS unnecessarily to transport HAZMAT during the period studied and that the additional cost associated with these shipments was relatively small. Further, as part of its plan of action, DOD has identified corrective actions to preclude future unnecessary use of TPS, which, if properly implemented, should help ensure that in the future DOD uses TPS only when necessary.
Our April 1996 report found that since deregulation, fares have fallen and service has improved for most large-community airports. Our report also found that substantial regional differences exist in fare and service trends, particularly among small- and medium-sized community airports. A primary reason for these differences has been the greater degree of economic growth that has occurred over the past two decades in larger communities and in the West and Southwest. In particular, we noted that most low-fare airlines that began interstate air service after deregulation, such as Southwest Airlines and America West, had decided to enter airports serving communities of all sizes in the West and Southwest because of those communities’ robust economic growth. By contrast, low-fare carriers had generally avoided serving small- and medium-sized-community airports in the East and upper Midwest, in part because of the slower growth, harsher weather, and greater airport congestion in those regions. Our review of the trends in fares between 1979 and 1994 for a sample of 112 small-, medium-sized, and large-community airports identified 15 airports at which fares, adjusted for inflation, had declined by over 20 percent and 8 airports at which fares had increased by over 20 percent. Each of the 15 airports where fares declined was located in the West or Southwest, and low-fare airlines accounted for at least 10 percent of the passenger boardings at all but one of those airports in 1994. On the other hand, each of the eight airports where fares had increased by over 20 percent since deregulation was located in the Southeast and Appalachia. Our April 1996 report also revealed similar findings concerning the trends in service quantity and quality at the 112 airports. Large communities in general, and communities of all sizes in the West and Southwest, had experienced a substantial increase in the number of departures and available seats as well as improvements in such service quality indicators as the number of available nonstop destinations and the amount of jet service. However, without the cross-subsidy present under regulation, fares were expected to increase somewhat at airports serving small and medium-sized communities, and carriers were expected to substitute turboprop service for jet. Over time, smaller and medium-sized communities in the East and upper Midwest had generally experienced a decline in the quantity and quality of air service. In particular, these communities had experienced a sharp decrease in the number of available nonstop destinations and in the amount of jet service relative to turboprop service. This decrease occurred largely because established airlines had reduced jet service from these airports since deregulation and deployed turboprops to link the communities to those airlines’ major hubs. We subsequently reported in October 1996 that operating barriers at key hub airports in the East and upper Midwest, combined with certain marketing strategies of the established carriers, fortified established carriers’ dominance of those hub airports and routes linking those hubs with nearby small- and medium-sized-community airports. In the upper Midwest, there is limited competition in part because two airlines control nearly 90 percent of the takeoff and landing slots at O’Hare, and one airline controls the vast majority of gates at the airports in Minneapolis and Detroit under long-term, exclusive-use leases. 
Similarly, in the Southeast and Appalachia, one airline controls the vast majority of gates under exclusive-use leases at Cincinnati, Charlotte, and Pittsburgh. Finally, in the Northeast, a few established airlines control most of the slots at National, LaGuardia, and Kennedy. As a result, the ability of nonincumbents to enter these key airports and serve nearby small and medium-sized communities is very limited. Particularly for several key markets in the upper Midwest and East, the relative significance of those operating barriers in limiting competition and contributing to higher airfares has grown over time. As a result, our October 1996 report, which specifically addressed the effects of slot and perimeter rules, recommended that DOT take action to lower those barriers, and highlighted areas for potential congressional action. To reduce congestion, FAA has since 1969 limited the number of takeoffs and landings that can occur at O’Hare, National, LaGuardia, and Kennedy. By allowing new airlines to form and established airlines to enter new markets, deregulation increased the demand for access to these airports. Such increased demand complicated FAA’s efforts to allocate takeoff and landing slots equitably among the airlines. To minimize the government’s role in the allocation of slots, DOT in 1985 began to allow airlines to buy and sell them to one another. Under this “Buy/Sell Rule,” DOT “grandfathered” slots to the holders of record as of December 16, 1985. Emphasizing that it still owned the slots, however, DOT randomly assigned each slot a priority number and reserved the right to withdraw slots from the incumbents at any time. In addition, to mitigate the anticompetitive effects of grandfathering, DOT retained about 5 percent of the slots at O’Hare, National, and LaGuardia and in early 1986 distributed them in a random lottery to airlines having few or no slots at those airports. In August 1990, we reported that a few established carriers had built upon the favorable positions they inherited as a result of grandfathering to such an extent that they could limit access to routes beginning or ending at any of the slot-controlled airports. We also reported that while the lottery was successful in placing slots in the hands of some entrants and smaller incumbents, the effect on entry over the long term was disappointing, in part because many of the lottery winners subsequently went out of business or merged with an established carrier. Recognizing the need for new entry at the slot-controlled airports, the Congress in 1994 created an exemption provision to allow additional slots for entry at O’Hare, LaGuardia, and Kennedy when DOT “finds it to be in the public interest and the circumstances to be exceptional.” In October 1996, we reported that the level of control over slots by a few established airlines had increased even further (see app. I). We found that the exemption authority, which in effect allows DOT to issue new slots, resulted in little new entry because DOT had interpreted the “exceptional circumstances” criterion very narrowly. DOT had approved applications only to provide service in markets not receiving nonstop service. We found no congressional guidance, however, to support this interpretation. As a result, little new entry occurred at these airports, which is crucial to establishing new service in the heavily traveled eastern and midwestern markets. In our 1990 report, we outlined the pros and cons of various policy options to promote airline competition. 
These options included keeping the Buy/Sell Rule but periodically withdrawing a portion of slots that were grandfathered to the major incumbents and reallocating them by lottery. Because the situation had continued to worsen, we recommended in our October 1996 report that DOT redistribute some of the grandfathered slots to increase competition, taking into account the investments made by those airlines at each of the slot-controlled airports. We also said that if DOT did not choose to do so, the Congress may wish to consider revising the legislative criteria that govern DOT's exceptional circumstances provision so that DOT could consider competitive benefits as a key criterion in deciding whether or not to grant slots to new entrants. At LaGuardia and National airports, perimeter rules prohibit incoming and outgoing flights that exceed 1,500 and 1,250 miles, respectively. The perimeter rules were designed to promote Kennedy and Dulles airports as the long-haul airports for the New York and Washington metropolitan areas. However, the rules limit the ability of airlines based in the West to compete because those airlines are not allowed to serve LaGuardia and National airports from markets where they are strongest. By contrast, because of their proximity to LaGuardia and National, each of the seven largest established carriers is able to serve those airports from its principal hub. While the limit at LaGuardia was established by the Port Authority of New York & New Jersey, National's perimeter rule is federal law. Thus, in our October 1996 report, we suggested that the Congress consider granting DOT the authority to allow exemptions to the perimeter rule at National when proposed service would substantially increase competition. We did not recommend that the rule be abolished because removing it could have unintended negative consequences, such as reducing the amount of service to smaller communities in the Northeast and Southeast. This could happen if major slot holders at National were to shift their service from smaller communities to take advantage of more profitable, longer-haul routes. As a result, we concluded that a more prudent course for increasing competition at National would be to examine proposed new services on a case-by-case basis. Our reports have also identified restrictive gate leases as another barrier to establishing new or expanded service at some airports. These leases permit an airline to hold exclusive rights to use most of an airport's gates over a long period of time, commonly 20 years. Such long-term, exclusive-use gate leases prevent nonincumbents from securing necessary airport facilities on equal terms with incumbent airlines. To gain access to an airport in which most gates are exclusively leased, a nonincumbent must sublet gates from the incumbent airlines—often at nonpreferred times and at a higher cost than the incumbent pays. Since our 1990 report, some airports, such as Los Angeles International, have attempted to regain more control of their facilities by signing less restrictive, shorter-term leases once the exclusive-use leases expired. Nevertheless, our October 1996 report identified several airports in which entry was limited because most of the gates were under long-term, exclusive-use leases with one airline. Although the development, maintenance, and expansion of airport facilities are essentially a local responsibility, most airports are operated under federal restrictions that are tied to the receipt of federal grant money from FAA.
In our 1990 report, we suggested that one way to alleviate the barrier created by exclusive-use gate leases would be for FAA to add a grant restriction that ensures that some gates at an airport would be available to nonincumbents. Because many airports have taken steps since then to sign less restrictive gate leases, we concluded in our 1996 report that such a broad grant restriction was not necessary. However, to address the remaining problem areas, we recommended that when disbursing airport improvement grant moneys, FAA give priority to those airports that do not lease the vast majority of their gates to one airline under long-term, exclusive-use terms. In response to our October 1996 report, DOT stated in January of this year that it shared our concerns that barriers to entry limit competition in the airline industry. The agency indicated that it would include competitive benefits as a factor when determining whether to grant slots to new entrants under the exceptional circumstances criterion. DOT also committed to giving careful consideration to our recommendation that it create a pool of available slots and periodically reallocate them, but noted that it might choose to pursue alternative means of enhancing competition. On October 3, 1997, DOT announced that it would soon publicly issue a number of initiatives aimed at enhancing competition. Two of those initiatives related to identified problems: providing access to high-density airports through slot exemptions and investigating allegations of anticompetitive behavior. As of mid-October, DOT had 174 requests for slot exemptions, most of which were for slots at O'Hare and LaGuardia airports. On Friday, October 24, 1997, DOT issued its decision on some of the requests for slot exemptions and set forth its new policy on slot exemptions, which has been expanded to take into account the need for increased competition at the slot-controlled airports. Because some in government and academia believe that slots at some airports may be underutilized, DOT is also evaluating how effectively slots are being used at these airports. Finally, DOT has expressed concern about potentially over-aggressive attempts by some established carriers to thwart new entry. According to DOT, over the past 16 months, there has been an increasing number of allegations of anticompetitive practices, such as predatory conduct, aimed at new competition, particularly at major network hubs. DOT is formulating a policy that will more clearly delineate what is acceptable and unacceptable behavior in the area of competition between major carriers at their hubs and smaller, low-cost competitors. This policy is to indicate those factors DOT will consider in pursuing remedies through formal enforcement actions. The proposed Aviation Competition Enhancement Act of 1997 has been drafted to promote domestic competition. The legislation targets three of the barriers to competition: slot controls, the perimeter rule, and predatory behavior by air carriers. The bill would create a mechanism by which DOT would increase access to the slot-controlled airports. Under the draft legislation, where slots are not available from DOT, the Department would be required to periodically withdraw a small portion of the slots that were grandfathered to incumbent airlines and reallocate them among new entrant and limited incumbent air carriers. Slots would not be withdrawn if they were already being used to serve certain small or medium-sized airports.
This provision of the proposed bill is consistent with the spirit of our recommendation on slots and provides a good starting point for the debate about how such a process should be used and its potential impact. Our recommendation recognized the sensitivities with withdrawing and reallocating slots from one airline to another by stating that such a process should take into account the investments made by the established airlines. The proposed bill does not specify details about how DOT should implement this process. Because of the sensitivities in making any reallocations, DOT would need to carefully consider balancing the goals of increasing competition with fair treatment of affected parties. The bill also addresses the perimeter rule by requiring the Secretary of Transportation to grant exemptions to the existing 1,250 mile limit at Washington National Airport under certain circumstances. There are legitimate concerns about whether or not exemptions to the rule would negatively affect the noise, congestion, and safety at Washington National, as well as air service to and from different communities within the perimeter. The bill addresses these concerns by specifying that only stage 3 aircraft (aircraft that meet FAA’s most stringent noise standards) can be used and that exemptions would not be allowed to affect the number of hourly commercial operations at National Airport. The bill further specifies that the Secretary certify that whenever exemptions to the rule are granted, noise, congestion, and safety will not deteriorate relative to their 1997 levels. The Secretary must similarly certify that air service to communities within the existing perimeter will not worsen. Finally, the bill also contains a provision intended to limit the time that DOT has to respond to complaints of predatory behavior. As we noted previously, because of its concerns in this area, DOT plans to announce a policy that will more clearly delineate the factors it will consider in pursuing remedies through formal enforcement actions. Because a variety of factors has contributed to higher fares and poorer service that some small and medium-sized communities in the East and upper Midwest have experienced since deregulation, a coordinated effort involving federal, regional, local, and private-sector initiatives may be needed. In addition to DOT’s planned actions and the proposed legislation, several public and private initiatives that are currently under way, as well as other potential options, are discussed below. If successful, these initiatives would complement, and potentially encourage, the increasing use of small jets by the commuter affiliates of established airlines—a trend that has the potential for increasing competition and improving the quality of service for some communities. Recognizing that federal actions alone would not remedy their regions’ air service problems, several airport directors and community chamber of commerce officials in the Southeast and Appalachian regions recently initiated a coordinated effort to improve air service in their regions. As a result of this effort, several members of Congress from the Southeast and Appalachian regions in turn organized a bipartisan caucus named “Special Places of Kindred Economic Situation” (SPOKES). Among other things, SPOKES is designed to ensure sustained consumer education and coordinate federal, state, local, and private efforts to address the air service problems of communities adversely affected since deregulation. 
Two SPOKES-led initiatives under way include establishing and developing a Website on the Internet and convening periodic “national air service roundtables” to bring together federal, state, and local officials and airline, airport, and business representatives to explore potential solutions to air service problems. On February 7, 1997, the first roundtable was held in Chattanooga. A key conclusion of the February 1997 roundtable was that greater regional, state, and local efforts were needed to promote economic growth and attract established and new airlines alike to serve small and medium-sized markets in the East and upper Midwest. Suggested initiatives included (1) creating regional trade associations composed of state and local officials, airport directors, and business executives; (2) offering local financial incentives to nonincumbents, such as guaranteeing a specified amount of revenue or providing promotional support; and (3) undertaking aggressive community marketing efforts to attract airlines and spur economic growth.

To grow and prosper, businesses need convenient, affordable air service. As a result, businesses located in the affected communities have increasingly attempted to address their communities’ air service problems. Perhaps the most visible of these efforts has been the formation of the Business Travel Contractors Corporation (BTCC) by 45 corporations, including Chrysler Motors, Procter & Gamble, and Black & Decker. These corporations formed BTCC because they were concerned about the high fares they were paying in markets dominated by one established airline. BTCC held national conferences in Washington, D.C., in April and October 1997 to examine this problem and explore potential market-based initiatives. At BTCC’s October conference, attendees endorsed the concepts of (1) holding periodic slot lotteries to provide new entrant carriers with access to slot-controlled airports, (2) allowing new entrants and other small carriers to serve points beyond Washington National’s perimeter rule, and (3) requiring DOT to issue a policy addressing anticompetitive practices and specifying the time frames within which all complaints will be acted upon.

In addition to public and private-sector initiatives, the increasing use of 50- to 70-seat regional jets is improving the quality of air service for a growing number of communities. Responding to consumers’ preference to fly jets rather than turboprops for greater comfort, convenience, and a perceived higher level of safety, commuter affiliates of established airlines are increasingly using regional jets to (1) replace turboprops on routes between established airlines’ hubs and small and medium-sized communities and (2) initiate nonstop service on routes that are either uneconomical or too great a distance for commuter carriers to serve with slower, higher-cost, and shorter-range turboprops. Because regional jets can generally fly several hundred miles farther than turboprops, commuter carriers will be able to link more cities to established airlines’ hubs. To the extent that this occurs, it could increase competition in many small and medium-sized communities by providing consumers with more service options.

Mr. Chairman, this concludes our prepared statement. We would be glad to respond to any questions that you or any member of the Subcommittee may have.

Airline Deregulation: Addressing the Air Service Problems of Some Communities (GAO/T-RCED-97-187, June 25, 1997).
Domestic Aviation: Barriers to Entry Continue to Limit Benefits of Airline Deregulation (GAO/T-RCED-97-120, May 13, 1997).
Airline Deregulation: Barriers to Entry Continue to Limit Competition in Several Key Domestic Markets (GAO/RCED-97-4, Oct. 18, 1996).
Changes in Airfares, Service, and Safety Since Airline Deregulation (GAO/T-RCED-96-126, Apr. 25, 1996).
Airline Deregulation: Changes in Airfares, Service, and Safety at Small, Medium-Sized, and Large Communities (GAO/RCED-96-79, Apr. 19, 1996).
Airline Competition: Essential Air Service Slots at O’Hare International Airport (GAO/RCED-94-118FS, Mar. 4, 1994).
Airline Competition: Higher Fares and Less Competition Continue at Concentrated Airports (GAO/RCED-93-171, July 15, 1993).
Airline Competition: Options for Addressing Financial and Competition Problems, Testimony Before the National Commission to Ensure a Strong Competitive Airline Industry (GAO/T-RCED-93-52, June 1, 1993).
Computer Reservation Systems: Action Needed to Better Monitor the CRS Industry and Eliminate CRS Biases (GAO/RCED-92-130, Mar. 20, 1992).
Airline Competition: Effects of Airline Market Concentration and Barriers to Entry on Airfares (GAO/RCED-91-101, Apr. 26, 1991).
Airline Competition: Weak Financial Structure Threatens Competition (GAO/RCED-91-110, Apr. 15, 1991).
Airline Competition: Fares and Concentration at Small-City Airports (GAO/RCED-91-51, Jan. 18, 1991).
Airline Deregulation: Trends in Airfares at Airports in Small and Medium-Sized Communities (GAO/RCED-91-13, Nov. 8, 1990).
Airline Competition: Industry Operating and Marketing Practices Limit Market Entry (GAO/RCED-90-147, Aug. 29, 1990).
Airline Competition: Higher Fares and Reduced Competition at Concentrated Airports (GAO/RCED-90-102, July 11, 1990).
Airline Deregulation: Barriers to Competition in the Airline Industry (GAO/T-RCED-89-65, Sept. 20, 1989).
Airline Competition: DOT’s Implementation of Airline Regulatory Authority (GAO/RCED-89-93, June 28, 1989).
Airline Service: Changes at Major Montana Airports Since Deregulation (GAO/RCED-89-141FS, May 24, 1989).
Airline Competition: Fare and Service Changes at St. Louis Since the TWA-Ozark Merger (GAO/RCED-88-217BR, Sept. 21, 1988).
Competition in the Airline Computerized Reservation Systems (GAO/T-RCED-88-62, Sept. 14, 1988).
Airline Competition: Impact of Computerized Reservation Systems (GAO/RCED-86-74, May 9, 1986).
Airline Takeoff and Landing Slots: Department of Transportation’s Slot Allocation Rule (GAO/RCED-86-92, Jan. 31, 1986).
Deregulation: Increased Competition Is Making Airlines More Efficient and Responsive to Consumers (GAO/RCED-86-26, Nov. 6, 1985).
GAO discussed the barriers that limit aviation competition, focusing on: (1) the actions the Department of Transportation (DOT) has taken to address those barriers; and (2) how the Aviation Competition Enhancement Act of 1997 and other initiatives seek to address those problems. GAO noted that: (1) a combination of factors continues to limit entry at airports serving small and medium-sized communities in the East and upper Midwest; (2) these factors include the dominance of routes to and from those airports by one or two traditional hub-and-spoke airlines and operating barriers, such as slot controls and long-term exclusive-use gate leases at hub airports; (3) in contrast, the more widespread entry of new airlines at airports in the West and Southwest since deregulation—and the resulting geographic differences in fare and service trends—has stemmed largely from the greater economic growth in those regions as well as from the absence of dominant market positions of incumbent airlines and barriers to entry; (4) GAO has found that little progress has been achieved in lowering the barriers to entry since GAO first reported on them in 1990; (5) slot controls continue to block entry at key airports in the East and upper Midwest; (6) GAO recommended that DOT take actions to promote competition in regions that have not experienced lower fares as a result of airline deregulation by creating a pool of available slots by periodically withdrawing some grandfathered slots from the major incumbents and redistributing them in a fashion that increases competition; (7) moreover, GAO suggested that, absent action by DOT, Congress may wish to consider revising the legislative criteria that govern DOT's granting slots to new entrants; (8) GAO also suggested that Congress consider granting DOT the authority to allow exemptions on a case-by-case basis to the perimeter rule at National Airport when the proposed service will substantially increase competition; (9) in response to GAO's recommendations, DOT indicated that it would revise its restrictive interpretation of the legislative criteria governing the granting of new slots; (10) on October 24, 1997, DOT announced its decision on some of the pending requests for slot exemptions; (11) DOT also is evaluating how effectively slots are being used and it is formalizing a policy that will identify anticompetitive behavior as a precursor for formal enforcement action; (12) the proposed Aviation Competition Enhancement Act of 1997 addresses three barriers to competition: slot controls, the perimeter rule, and predatory behavior by air carriers; and (13) increasing competition and improving air service at airports serving small and medium-sized communities that have not benefited from fare reductions and/or improved service since deregulation will entail a range of federal, regional, local, and private-sector initiatives.
The SBIR program was initiated in 1982 and has four main purposes: (1) stimulate technological innovation, (2) use small businesses to meet federal R&D needs, (3) encourage participation in technological innovation by small businesses owned by women and disadvantaged individuals, and (4) increase commercialization of innovations derived from federal R&D efforts. The purpose of the STTR program—initiated about a decade later in 1992—is to stimulate a partnership of ideas and technologies between innovative small businesses and research institutions through federally funded R&D. Legislation enacted in 2011 reauthorized the programs from fiscal year 2012 through fiscal year 2017. The Small Business Act requires agencies to spend a certain percentage on programs each year. The spending requirements for SBIR and STTR are to be calculated as a percentage of each agency’s extramural R&D obligations, provided their extramural R&D obligations exceed the participation thresholds of $100 million for SBIR and $1 billion for STTR. Under the 2011 reauthorization, the SBIR extramural spending requirement was set at 2.7 percent for fiscal year 2013 and will increase incrementally to 3.2 percent of extramural R&D obligations by fiscal year 2017, and the STTR allocation was set at 0.35 percent for fiscal year 2013 and will increase incrementally to 0.45 percent by fiscal year 2017. The SBIR and STTR programs each include the following three phases: In phase I, agencies make awards to small businesses to determine the scientific and technical merit and feasibility of ideas that appear to have commercial potential. Phase I awards normally do not exceed $150,000. For SBIR, phase I awards generally last 6 to 9 months. For STTR, these awards generally last 1 year. In phase II, small businesses with phase I projects that demonstrate scientific and technical merit and feasibility, in addition to commercial potential, may compete for awards of up to $1 million to continue the R&D for an additional period, normally not to exceed 2 years. Phase III is for small businesses to pursue commercialization of technology developed in prior phases. Phase III work derives from, extends, or completes an effort made under prior phases, but it is funded by sources other than the SBIR or STTR programs. In this phase, small businesses are expected to raise additional funds from private investors, the capital markets, or from funding sources within the agency that made the initial award other than its SBIR or STTR program. While SBIR or STTR funding cannot be used for phase III, agencies can participate in phase III by, for example, purchasing the technology developed in prior phases. SBA’s Office of Investment and Innovation is responsible for overseeing and coordinating the participating agencies’ efforts for the SBIR and STTR programs. As part of SBA’s oversight and coordination role, the agency has issued SBIR and STTR policy directives to explain and outline requirements for agencies’ implementation of these programs. The policy directives include a list of the data that agencies must submit to SBA annually—such as their extramural R&D obligations amount and the amount obligated for awards for the programs. Each participating agency must administer its SBIR and STTR programs in accordance with program laws, regulations, and the policy directives issued by SBA. In general, the programs are similar across agencies. 
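To make the spending requirement arithmetic concrete, the sketch below computes an agency's fiscal year 2013 SBIR and STTR minimums from its extramural R&D obligations, using the percentages and participation thresholds described above. It is a minimal illustration, not SBA's or any agency's actual calculation, and the agency and dollar figure in the example are hypothetical.

```python
# Minimal sketch of the fiscal year 2013 spending-requirement calculation
# described above. The rates and participation thresholds are those stated in
# the 2011 reauthorization; the example agency and its dollar amount are
# hypothetical.

SBIR_RATE_FY2013 = 0.027        # 2.7 percent of extramural R&D obligations
STTR_RATE_FY2013 = 0.0035       # 0.35 percent of extramural R&D obligations
SBIR_THRESHOLD = 100_000_000    # SBIR applies above $100 million in extramural R&D
STTR_THRESHOLD = 1_000_000_000  # STTR applies above $1 billion in extramural R&D

def fy2013_requirements(extramural_obligations):
    """Return the (SBIR, STTR) minimum spending amounts in dollars."""
    sbir = (extramural_obligations * SBIR_RATE_FY2013
            if extramural_obligations > SBIR_THRESHOLD else 0.0)
    sttr = (extramural_obligations * STTR_RATE_FY2013
            if extramural_obligations > STTR_THRESHOLD else 0.0)
    return sbir, sttr

# Hypothetical agency with $2.0 billion in extramural R&D obligations:
sbir_req, sttr_req = fy2013_requirements(2_000_000_000)
print(f"SBIR minimum: ${sbir_req:,.0f}; STTR minimum: ${sttr_req:,.0f}")
# SBIR minimum: $54,000,000; STTR minimum: $7,000,000
```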
All of the agencies follow the same general process to obtain proposals from and make awards to small businesses for both the SBIR and STTR programs. However, each participating agency has considerable flexibility to design and manage the specifics of its programs, such as determining research topics, selecting award recipients, and administering funding agreements. At least annually, each participating agency issues a solicitation requesting proposals for projects in topic areas determined by the agency. Each agency uses its own process to review proposals and determine which proposals should receive awards. Agencies that have both SBIR and STTR programs usually use the same process for both. Also, each agency determines whether the funding for awards will be provided as grants or contracts. According to an agency program administrator, agencies such as the Department of Defense (DOD), the Department of Homeland Security (DHS), and the National Aeronautics and Space Administration (NASA) typically issue contracts that address highly focused topics and include a number of requirements that small businesses must comply with, while agencies like the Department of Energy (DOE) and the National Science Foundation (NSF) often issue grants for less specified topics that allow for more flexibility.

SBA cannot fully determine if all 11 agencies met their spending requirements for fiscal year 2013, as 9 of the 11 participating agencies did not follow SBA’s guidance in submitting data on their total extramural R&D obligations. Nevertheless, data the agencies submitted to SBA indicate that most agencies complied with their SBIR and STTR spending requirements for fiscal year 2013. SBA cannot fully determine whether the participating agencies complied with their fiscal year 2013 spending requirements using the data that agencies submitted to SBA because 9 of the 11 agencies provided incorrect data. The Small Business Act requires agencies to calculate their spending requirements based on their extramural R&D budget, but it defines the extramural R&D budget as the actual obligations over the course of the year—which are not fully known until the end of the year—rather than the amount that agencies propose to spend on the program early in the fiscal year. For years before fiscal year 2013, most agencies provided the amount that they proposed to spend on extramural R&D and not the amount they actually obligated in their data submitted to SBA after the end of the fiscal year.

SBA issued a revised template for data submission for fiscal year 2013 to clarify what information was needed to calculate spending requirements and directed agencies to submit data on their extramural R&D obligations. SBA officials said that they changed the template for data submission to respond to our past recommendations to provide additional guidance to agencies about submitting data and calculating spending requirements. SBA provided agency program managers with new guidance describing how to submit the relevant data and information to SBA. In addition, SBA officials told us that they discussed this issue at length with the agencies. However, SBA’s efforts did not fully address the problem, as NASA and the Department of Health and Human Services (HHS) are the only agencies that submitted data on extramural R&D obligations to SBA for fiscal year 2013, as requested by SBA in accordance with the law.
The remaining nine agencies submitted incorrect data by providing their extramural R&D budget estimates. Program officials said that the requirement to use extramural R&D obligations rather than extramural R&D budget makes it difficult for agencies to comply with spending requirements because extramural R&D obligations are not known until the end of the fiscal year. Several program managers told us that they believe it is unfair or impractical to hold their agency to a target that is not known until the end of the year, when it is not possible to obligate additional money. In addition, some program officials told us that they do not have systems in place to easily calculate extramural R&D obligations. For example, DOD officials said that it would likely not be possible to determine a final extramural R&D obligations figure until 6 months after the end of the fiscal year. Furthermore, some agency officials told us that their agency does not calculate total extramural R&D obligations. For example, officials at the Environmental Protection Agency (EPA) said that their financial system did not provide the level of detail necessary to calculate extramural R&D obligations, and modifying the current system would require a level of effort beyond what is justified for an agency with a small extramural R&D budget. Nevertheless, the Small Business Act requires agencies to use extramural R&D obligations to calculate their annual spending requirements. Moreover, SBA’s guidance for fiscal year 2013 continues to direct agencies to use this approach, and SBA officials told us that the best way to ensure compliance with spending requirements is to use obligations. SBA’s ability to conduct an accurate assessment of whether agencies are complying with spending requirements is dependent on agencies submitting the correct data. For example, during fiscal year 2013, NASA estimated its extramural R&D budget to be about $4.9 billion, and it developed a spending plan for the SBIR and STTR programs based on this budget. However, according to data submitted to SBA, NASA’s end- of-year extramural R&D obligations totaled about $5.2 billion, causing NASA’s actual spending requirements for the programs to be higher than anticipated at the beginning of the year, as shown in table 2. NASA spent $132.5 million on its SBIR program, which would have been enough to comply with its estimated spending requirement, but it was less than the actual spending requirement. NASA officials told us that they did not know what the final extramural R&D obligations would be until after the end of the fiscal year and, therefore, were unable to spend more to meet the higher-than-anticipated spending requirements. Consistent with our findings for fiscal year 2012, this increase in extramural R&D obligations compared with the budget contributed to NASA’s noncompliance with SBIR spending requirements in fiscal year 2013, according to program officials. NASA submitted the correct data to SBA, and our analysis showed it did not meet the spending requirements in fiscal year 2013. However, had NASA provided SBA with its extramural R&D budget rather than extramural R&D obligations, as other agencies provided, NASA would have—incorrectly—appeared to have met its SBIR spending requirement. Other agencies that appeared to have met their spending requirement based on extramural R&D budget data may not actually have complied with their spending requirement if their extramural R&D obligations were higher than the amount they budgeted for the year. 
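The NASA example can be restated as simple arithmetic. The sketch below uses the fiscal year 2013 SBIR rate of 2.7 percent and the rounded figures cited above; the exact requirement amounts appear in table 2 of the report and are not reproduced here, so these results are approximate.

```python
# Restatement of the NASA example above using the fiscal year 2013 SBIR rate
# (2.7 percent) and the rounded dollar figures cited in the text.
SBIR_RATE_FY2013 = 0.027

estimated_budget = 4.9e9    # NASA's extramural R&D estimate early in the year
actual_obligations = 5.2e9  # actual extramural R&D obligations, known only after year end
sbir_obligated = 132.5e6    # what NASA obligated for its SBIR program

requirement_from_estimate = SBIR_RATE_FY2013 * estimated_budget   # about $132.3 million
requirement_from_actuals = SBIR_RATE_FY2013 * actual_obligations  # about $140.4 million

print(sbir_obligated >= requirement_from_estimate)  # True: meets the estimate-based target
print(sbir_obligated >= requirement_from_actuals)   # False: falls short of the actual requirement
```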
Conversely, an agency that appeared to spend less than the required amount on the programs could have actually met the spending requirements if the agency’s extramural R&D obligations at the end of the year were lower than the amount the agency budgeted at the beginning of the year. As discussed earlier, most agencies did not provide extramural R&D obligations data to SBA, which is a key piece of data for determining whether an agency met the spending requirements. SBA uses the data that agencies submit to determine the agencies’ compliance with spending requirements and reports this information to Congress as part of its annual report on the programs. However, without the correct data on the amount that agencies obligated for extramural R&D, SBA cannot fully determine agencies’ compliance with the spending requirements and cannot accurately report to Congress on their compliance. If SBA cannot fully determine agencies’ compliance based on data those agencies are submitting, then notifying Congress of this limitation would be important to help ensure that Congress receives critical information for overseeing these programs. Further, if SBA determines that calculating spending requirements based on extramural R&D obligations is not feasible, then developing a proposal for Congress to change the requirement could better position SBA and the agencies in determining requirements to help ensure that the intended benefits of these programs are being attained. While it was generally the wrong data, the data agencies did submit to SBA indicate that 9 of the 11 participating agencies met or exceeded their fiscal year 2013 spending requirements for the SBIR program, while the remaining 2 agencies did not meet the requirements. According to the agencies’ data, the 9 agencies that appeared to meet or exceed the requirements spent from 2.7 percent to 4.7 percent of their extramural R&D obligations for the program, and the remaining 2 agencies spent from 2.1 to 2.5 percent. In comparison, agency data indicated that 8 of the 11 agencies met or exceeded spending requirements in fiscal year 2012, 10 of the 11 agencies met or exceeded spending requirements in fiscal year 2011, and 3 of the 11 agencies met spending requirements each fiscal year from 2006 through 2011, as we found in our two prior reports. Figure 1 shows the percentage of extramural R&D obligations that agencies spent on the SBIR program, based on the data the agencies submitted to SBA. Appendix I provides additional detail. The data agencies submitted to SBA indicated that four of the five participating agencies met or exceeded their fiscal year 2013 spending requirements for the STTR program, while the remaining agency did not meet the requirements. According to the agencies’ data, the four agencies that complied with the requirements spent from 0.35 percent to 0.38 percent of their submitted extramural R&D obligations on their STTR programs, and the agency that did not comply spent 0.34 percent. In comparison, as we reported in the past, the data that agencies submitted to SBA indicated that two of the five agencies complied in fiscal year 2011 and 2012, and only one of the agencies complied with spending requirements each fiscal year from 2006 through 2012. Figure 2 shows the percentage of extramural R&D obligations that agencies spent on STTR, based on the data the agencies submitted to SBA. Appendix II provides additional detail. 
Program managers told us that one reason agencies did not comply with spending requirements in fiscal year 2013 is that they reserved program funds that were appropriated in fiscal year 2013 and plan to spend those funds in future years. Specifically, program officials from one agency that did not comply with SBIR spending requirements—Commerce—and the agency that did not comply with STTR spending requirements—DOD—told us that they reserved the required amount for the program and spent the remaining funds in fiscal year 2014. In addition, agency officials told us that extenuating circumstances, such as late appropriations and spikes in funding related to natural disasters, also affected their ability to meet annual spending requirements in fiscal year 2013. For example, Commerce officials said that in fiscal year 2013 the National Oceanic and Atmospheric Administration received additional funding from supplemental appropriations related to Super Storm Sandy late in the fiscal year—after the agency’s internal deadline for issuing new contracts—which kept them from obligating enough money to meet the spending requirement that increased due to the supplemental funding. They said that the money not obligated in fiscal year 2013 was obligated in fiscal year 2014.

Consistent with the findings from our June 2014 report, some program managers said their agency did not meet spending requirements, but the officials said they will spend all of the funding that was budgeted for the programs before the funding expires. However, they did not spend the minimum required amount in fiscal year 2013 and, therefore, did not comply with the spending requirements. SBA officials agree that meeting the spending requirements for the SBIR and STTR programs requires agencies to spend at least the minimum required percentages on the programs each fiscal year. However, we found in our June 2014 report (GAO-14-431) that SBA’s most recent SBIR policy directive states that agencies must reserve the minimum percentages for awards to small businesses, and we recommended that SBA revise its policy directives to correctly summarize the law. SBA officials told us that they are planning to review and clarify the language in future policy directives, but, as the officials told us in January 2015, they disagree that the policy directive inaccurately summarizes the law. The officials told us that the inclusion of the word “reserve” in the policy directive does not lead agencies to reserve money for multiple years. However, we found that agencies continue to reserve money, and we continue to believe that a clarification is needed to help minimize this practice.

Several agencies have also implemented certain practices designed to help ensure they meet their spending requirements. Examples of such practices include the following:

Budgeting more than the minimum required amount for the program. Education officials told us that they budget more than the minimum amount they calculate as required for the SBIR program, which increases the likelihood that the agency will meet or exceed the spending requirement. In addition, if appropriations are higher than anticipated, the officials review their planned budget for the SBIR program and determine if the program budget should be increased. DHS officials told us that their agency also obligates additional funds beyond those budgeted for the SBIR program.
For example, DHS officials said that the agency obligated nearly $2.8 million in addition to the amount originally budgeted for the program for fiscal year 2013.

Tracking program obligations and centralizing funds. Officials from DOE and one component in DHS said that they review how much the agency has obligated for the programs each month to help ensure that they obligate all of the SBIR and STTR funds. DOE components also transfer funds directly to the centralized SBIR and STTR program office, making it easier for the program office to ensure that all funds are obligated.

Allowing voluntary participation. DOD officials said that some components within the agency voluntarily participate in the SBIR program even though they are not required to by law because the components see benefit in the program. Including these other components increases the total amount obligated toward the program.

Each of the agencies participating in the SBIR and STTR programs submitted the required reports describing the methodology used for calculating the amount of their extramural R&D budgets to SBA for fiscal year 2013, but agencies did not comply with all methodology reporting requirements. The Small Business Act also requires SBA to include an analysis of the agencies’ methodology reports in its annual report to Congress. SBA has not yet issued its required report to Congress on the programs for fiscal year 2013, but the fiscal year 2012 report to Congress, which SBA submitted in November 2014, did not include the required analysis of agencies’ methodology reports.

As we found for previous years in past reports, some agencies did not provide all the information required in their methodology reports for fiscal year 2013. As discussed in the SBIR policy directive, agencies are required to submit reports to SBA each year that itemize the programs excluded from their extramural R&D calculations and explain the reasons for the exclusions. SBA also requested that agencies provide the dollar amounts of the programs excluded from their extramural R&D. For fiscal year 2013, all 11 agencies submitted a methodology report to SBA. However, three agencies—DOD, EPA, and NSF—either did not itemize the specific programs that they excluded from their extramural R&D, did not explain the reasons for the exclusions, or both. Two of these three agencies’ methodology reports—DOD’s and EPA’s—included general categories of exclusions but did not itemize the programs that were excluded. For example, DOD’s fiscal year 2013 report stated that some of its programs were exempted by the Small Business Act, which exempts programs in the intelligence community. However, DOD’s report did not itemize the specific programs or subunits that were excluded, as required by the policy directives. In addition, four agencies—the Departments of Agriculture (USDA), Education, and Commerce, as well as HHS—did not indicate in their methodology reports to SBA whether they had exclusions for some or all of their programs. Agencies are not explicitly required to state if they have no exclusions, but without that information it will be difficult for SBA to determine whether these agencies may have had exclusions that were not included in their reports to SBA.
Two of the participating agencies— the Department of Transportation (DOT) and DHS—provided the dollar amounts associated with each of their exclusions, consistent with SBA guidance. Additionally, 10 of the 11 agencies submitted their methodology reports for calculating extramural R&D to SBA later than the date required in the Small Business Act. According to the Small Business Act, agencies must submit their methodology reports to SBA within 4 months of enactment of their annual appropriations. Fiscal year 2013 appropriations for each of the participating agencies were enacted in March 2013, so the methodology reports were due in July 2013. However, for fiscal year 2013, nine of the agencies provided their methodology reports to SBA as a part of their annual data submissions to SBA, which were generally submitted to SBA from June through September 2014—about a year after the deadline for the methodology reports. One agency, HHS, met the deadline by submitting its methodology report to SBA in July 2013. Most agency officials told us that they submitted their methodology reports late because SBA did not request the reports at an earlier date. Officials from 9 of the 10 agencies that submitted their reports late said that they could have provided the reports to SBA within 4 months of their appropriation if SBA had requested them. SBA is not required to request the reports from agencies, and SBA officials told us that they did not request the methodology reports for fiscal year 2013 sooner because they were focused on updating the template that the agencies used to submit program data to SBA. The agencies’ late submission of the methodology reports makes it difficult for SBA to promptly analyze their methodologies and provide agencies with timely feedback to assist them in accurately calculating their spending requirements. Without such review and feedback, agencies may be calculating their extramural R&D incorrectly, which could lead to agencies spending less than the required amounts on the programs. We previously recommended in June 2014 that SBA request that the agencies submit their methodology reports within 4 months of the enactment of appropriations, as required by the Small Business Act and the program policy directives, and SBA agreed with the recommendation. However, SBA has not yet taken action to address this recommendation. We continue to believe this recommendation has merit and should be fully implemented. Doing so could better position SBA to analyze agencies’ methodologies and provide timely feedback. SBA has not issued its report to Congress on the programs for fiscal year 2013. The Small Business Act requires SBA to report to certain congressional committees on the SBIR and STTR programs not less than annually, but the act does not specify a date that the report is due. In October 2014, SBA officials told us that they had recently begun reviewing the agencies’ annual data submissions for fiscal year 2013 and anticipated that it would take 6 to 9 months to complete their report and submit it to Congress. Officials said that their review of the data submissions was delayed because of changes SBA had made to the data submission template, which prompted SBA to extend the reporting date for agencies from March 2014, as required by SBA’s policy directives, to June 2014. 
We previously concluded in September 2013 that, without more rigorous oversight by SBA, and more timely and detailed reporting on the part of both SBA and participating agencies, it would be difficult for SBA to ensure that the intended benefits of these programs are being attained and that Congress is receiving critical information to oversee these programs. In our September 2013 report, we recommended that SBA provide Congress with a timely annual report that includes a comprehensive analysis of the methodology each agency used for calculating the SBIR and STTR spending requirements. SBA agreed with our recommendation and stated in its comments on that report that it planned to implement the recommendation. Although SBA officials told us in January 2015 that they were still in the process of verifying the data they had requested from agencies for SBA’s fiscal year 2013 report to Congress, they said that they plan to provide a more timely report to Congress in the future.

The information in the agencies’ methodology reports may not be adequate for SBA to provide Congress with its required analysis of how agencies calculated their extramural R&D for fiscal year 2013. The Small Business Act requires agencies to submit a report to SBA describing the methodology used for calculating the amount of their extramural R&D budgets. The act also requires SBA to include an analysis of the agencies’ methodology reports in its annual report to Congress. The fiscal year 2012 report to Congress, which SBA submitted to Congress in November 2014 and is the most recent report available, did not include the required analysis of agencies’ methodology reports. SBA officials told us that they did not provide clear guidance to the agencies about the information to submit to SBA in their fiscal year 2012 methodology reports and, therefore, it could be unclear how agencies calculated their extramural R&D. For fiscal year 2013, SBA provided agencies with additional guidance requesting the identification of all R&D programs excluded from the determination of extramural R&D and the dollar amounts of those programs, but it has not yet assessed whether the information is adequate to determine whether agencies are calculating their extramural R&D correctly. Currently, each of the agencies submits methodology reports of varying detail, with some providing limited information on how they calculated their extramural R&D budgets, making it difficult for SBA to determine how the agencies are calculating their extramural R&D. For example, one agency submitted a methodology report that states that agency budget officers estimate funds available for extramural R&D and use a formula to calculate the spending requirement. The methodology report does not provide additional information on how the budget officers made the calculation. Program officials from this agency told us that they need SBA to tell them whether additional information is needed in the methodology report before they can obtain it from their budget office. Without assessing whether the information it collects is adequate to analyze agencies’ methodology reports, SBA cannot provide Congress with an accurate analysis of how agencies calculate their extramural R&D.
In January 2015, SBA officials told us that they had begun to analyze the methodology reports for fiscal year 2013 and said that they plan to include an analysis of the methodology reports in the fiscal year 2013 report, but the officials did not provide GAO with any documentation of the analysis and did not discuss their preliminary findings. In addition, officials from most of the agencies told us that their methodologies for determining their extramural obligations have not changed for years, and they report similar information to SBA every year. We previously concluded in September 2013 that, without guidance from SBA, participating agencies are likely to continue to provide SBA with broad, incomplete, or inconsistent information on their methodologies for calculating their extramural R&D, and we recommended that SBA provide timely annual feedback to each agency following the submission of its methodology report. SBA agreed with our recommendation and stated in its comments on our report that it planned to implement the recommendation but, as of January 2015, it has not provided agencies substantive feedback on their fiscal year 2013 methodology reports. We continue to believe our recommendation has merit and should be fully implemented. Potential effects of changing the methodology to calculate the SBIR and STTR spending requirements based on each agency’s total R&D budget instead of its extramural R&D obligations include an increase in the amount of each agency’s spending requirement—for some agencies more than others—and an increase in the number of agencies required to participate. Agency officials identified several benefits and drawbacks that changing the calculation methodology could have on their agencies’ SBIR and STTR programs. Changing the methodology for determining SBIR and STTR spending requirements to use an agency’s total R&D budget rather than its extramural R&D obligations could increase spending requirements. For example, if the spending requirements were calculated based on an agency’s total R&D budget rather than its extramural R&D obligations using the same percentages and participation thresholds defined in current law, total spending requirements in fiscal year 2013 would have increased from $2.3 billion to $3.9 billion, an increase of roughly $1.6 billion or 70 percent, according to our analysis of budget data and data submitted to SBA. This increase would have occurred both because agencies that currently participate would be required to spend more on the programs—because an agency’s total R&D budget is larger than its extramural R&D budget—and because additional agencies would be required to participate. Figure 3 shows the effects of changing spending requirements at each agency from current law, which is based on a percentage of extramural R&D obligations, to an alternative scenario that applies the same percentages to total R&D budgets. These effects are consistent with our findings in previous reports on these issues. As shown in figure 3, some agencies’ spending requirements would increase more than others under the alternative scenario. This variation is due primarily to differences in the relative proportions of the agencies’ extramural and intramural R&D obligations, but also affected by the inclusion of programs in total R&D that were excluded from extramural R&D by statute. 
Agencies that fund primarily extramural research would see smaller increases to their spending requirements under the alternative scenario, while agencies that fund more intramural research would see larger increases in their spending requirements, a finding consistent with those of our previous reports. Examples are as follows: NSF used more than 95 percent of its total R&D budget to fund extramural research in fiscal year 2013 and was required, based on data submitted to SBA, to spend $131.7 million on its SBIR program that year. Under the alternative scenario, NSF’s SBIR spending requirement would have been $133.6 million, an increase of about 1 percent. The Department of Commerce, on the other hand, used more than 20 percent of its total R&D budget to fund extramural R&D in fiscal year 2013 and was required to spend about $7 million on its SBIR program in that year. Under the alternative scenario, Commerce’s spending requirement would have more than quadrupled to $30.9 million. Furthermore, assuming that the thresholds for participating in the program did not change, this scenario would have required Commerce to spend $4 million on a new STTR program in fiscal year 2013. Consequently, the alternative scenario would have required Commerce to spend an additional $27.9 million on SBIR and STTR programs in fiscal year 2013, an increase of about 400 percent. As noted above, changing the calculation methodology from basing the spending requirement on extramural R&D obligations to total R&D budget would also require additional agencies to participate in SBIR and STTR, assuming that the dollar thresholds for participation remain the same. Two additional agencies—the Departments of Veterans Affairs and the Interior—would have been required to participate in SBIR during fiscal year 2013 under the alternative scenario. Adding these agencies to the SBIR program would have increased total federal SBIR spending requirements by $52.5 million, in addition to the $1.3 billion increase in spending requirements at the 11 agencies that currently participate in the SBIR program. Likewise, three additional agencies—USDA and the Departments of Commerce and Veterans Affairs—would have been required to participate in STTR under the alternative scenario. Adding these three agencies to the STTR program would have increased total federal STTR spending requirements by $15.2 million, in addition to the spending requirement increases of $163 million at the five agencies that currently participate in STTR. Basing the SBIR and STTR spending requirements on an agency’s total R&D budget, and applying a lower percentage than under current law, could result in a total federal commitment to the programs that is similar to what would result under current law. However, such a scenario would lower spending requirements at some agencies and raise them at others. As shown in figure 4, if the percentage applied to an agency’s total R&D budget had been 1.6 percent for SBIR and 0.2 percent for STTR in fiscal year 2013, and the thresholds for participating had remained the same, total required federal spending on the programs would be similar to required federal spending under current law. Using these lower percentages, spending requirements would have increased at agencies that primarily fund intramural research, such as EPA or the Department of Commerce. In contrast, spending requirements would have decreased at agencies, such as HHS and NSF, which primarily fund extramural research. 
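The size of the shift for a given agency follows from the share of its R&D that is extramural: with the same percentage applied to both bases, the requirement grows roughly in proportion to total R&D divided by extramural R&D. The sketch below back-calculates that relationship from the rounded NSF and Commerce figures cited above; note that the roughly 400 percent increase cited for Commerce also includes the new $4 million STTR requirement, while this sketch covers SBIR only.

```python
# Rough comparison, using the rounded fiscal year 2013 SBIR figures cited above,
# of how an agency's extramural share drives the change in its requirement when
# the base shifts from extramural R&D obligations to the total R&D budget. The
# extramural shares are back-calculated from the cited requirements rather than
# taken from agency budget documents.

agencies = {
    # name: (requirement based on extramural R&D, requirement under the alternative scenario)
    "NSF":      (131.7e6, 133.6e6),
    "Commerce": (  7.0e6,  30.9e6),
}

for name, (current_req, alternative_req) in agencies.items():
    # With the same percentage applied to both bases, the ratio of the two
    # requirements equals the ratio of extramural R&D to total R&D.
    extramural_share = current_req / alternative_req
    increase = alternative_req / current_req - 1
    print(f"{name}: implied extramural share {extramural_share:.0%}, "
          f"requirement increase {increase:.0%}")
# NSF: implied extramural share 99%, requirement increase 1%
# Commerce: implied extramural share 23%, requirement increase 341%
```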
In this scenario, spending requirement reductions, including $175.8 million at HHS and $59.7 million at NSF, were large enough to offset increases in spending requirements at other agencies. As we found in our previous review of these programs, agencies identified several potential benefits and drawbacks to changing the calculation methodology for their SBIR and STTR spending requirements from extramural R&D obligations to total R&D budget. For example, several program managers said that basing the SBIR and STTR spending requirements on total R&D would reduce the complexity of calculating spending requirements since agencies would no longer have to identify the extramural portion of their total R&D budgets. Some agency officials said that extramural R&D obligations are not calculated for any purpose beyond determining SBIR and STTR spending requirements. DOD program managers also told us that changing the calculation method would significantly simplify administration of their program. Currently, DOD’s program managers receive funding for SBIR and STTR from the comptrollers of all 3 military departments and about 21 other components that conduct R&D. According to DOD program officials, receiving money from all of these components can take months. If the spending requirements were calculated based on total R&D budgets, DOD program officials said that the SBIR program could receive funding from a single comptroller, which would allow DOD to make awards faster and better align SBIR and STTR awards with DOD-wide priorities. Program officials also identified potential drawbacks to changing the methodology, as we found in our previous reviews. In particular, several program managers said that increasing the amount of money that goes to SBIR could potentially reduce the amount of resources directed toward intramural research and extramural research outside of the SBIR and STTR programs. Program managers at one agency told us that the goals of extramural and intramural R&D spending are different since spending on intramural R&D is driven directly by an agency’s mission, while spending on extramural R&D provides special expertise that may not exist in the agency. These officials said that shifting money away from intramural R&D to small businesses that may not have the necessary expertise could diminish an agency’s ability to address specialized areas of R&D. Furthermore, program officials at another agency said they did not think it would make sense to calculate spending requirements for SBIR and STTR based on total R&D budgets since small businesses are only involved in extramural R&D, and total R&D budgets include funding for both intramural and extramural R&D. These officials said that structuring the spending requirements in this way could result in intramural research programs paying to support research at small businesses that does not directly benefit the programs, depending on how agencies decide to implement the change. In addition, some program managers raised concerns that a significant increase in the funding for the programs could be a challenge in the short term because they do not currently receive enough quality applications to meet the potential increased spending requirements, and it could take several years before they would have enough quality applications to meet the new spending requirements. 
Little is known about total administrative spending for fiscal year 2013 because the agencies that participate in the SBIR and STTR programs are not required to and do not fully track these costs. Agencies participating in the administrative pilot program reported spending $12.3 million on various new administrative and oversight activities in fiscal year 2013, but this amount does not represent total administrative spending. Furthermore, officials at some agencies expressed concern about the temporary nature of the pilot. Little is known about the total amount that agencies spent to administer their SBIR and STTR programs for fiscal year 2013 because the agencies are not required to and do not fully track these costs. For example, officials we interviewed told us that they do not have systems in place to accurately track the cost of all personnel who participate in the SBIR and STTR programs on a part-time basis, such as those who review applications or monitor contracts. Officials at three agencies said that tracking total administrative costs for the SBIR and STTR programs would require that they develop a more accurate time accounting system with codes for the programs. In response to our requests for data on their fiscal year 2013 administrative costs, most agencies provided information on some categories of administrative costs and partial estimates of costs. We received estimates for administrative costs from 9 of the 11 agencies participating in the programs. These estimates ranged from about $388,000 to $27 million. As with the data for fiscal years 2011 and 2012 provided for our previous reports, these data were incomplete and unverifiable. Six agencies—DOD, DOE, HHS, NSF, USDA, and the Department of Transportation—participated in the administrative pilot program in fiscal year 2013, and these agencies reported spending $12.3 million on administrative and oversight activities as part of the program. Under the 2011 reauthorization of the SBIR and STTR programs, agencies could spend up to 3 percent of SBIR funds on program administration and similar costs beginning in fiscal year 2013. According to the programs’ policy directives, funding for the pilot program cannot replace current agency administrative funding. SBA’s policy directives require each agency to submit a work plan to SBA that includes, among other information, a prioritized list of initiatives, the estimated amounts to be spent on each initiative, and the expected results to be achieved. The policy directives require SBA to evaluate the work plan and provide initial comments within 15 calendar days of receipt of the plan. If SBA does not provide initial comments within 30 calendar days of receipt of the plan, the work plan is deemed approved. SBA is supposed to use the information to report on the pilot program to Congress. Program officials at four agencies that participated in the pilot program said they used funding from the administrative pilot program to, for example, hire new staff; conduct outreach to previously underserved populations such as minority-owned small businesses; take steps to reduce fraud, waste, and abuse; and upgrade internal data systems. Of the five agencies that chose not to participate in the administrative pilot program—Commerce, Education, EPA, DHS, and NASA—three said that participation in the program would take money away from making awards to small businesses. 
Another agency that chose not to participate, DHS, submitted a program proposal to SBA for the administrative pilot program, but agency officials told us that they decided not to participate because internal policies kept them from hiring a dedicated contracting officer/specialist, and a conference they planned to attend to conduct outreach to underserved communities was cancelled.

Program officials at most of the agencies that participated in the pilot program told us that SBA’s approval of the work plans after the fiscal year started or late appropriations from Congress contributed to agencies spending less than they planned to spend in fiscal year 2013. In fiscal year 2013, agencies estimated that they would spend $58.2 million on the administrative pilot program, but our analysis of the agencies’ work plans and data provided to SBA shows that agencies obligated $12.3 million or 21 percent of the proposed amount (see table 3). Of the six agencies that participated, DOE obligated most of what it estimated it would spend. One program manager said that the small amount of money the agency spent through the administrative pilot program in fiscal year 2013 is somewhat misleading, as many of the agency’s activities were just getting started, and agency officials expected to spend more in the future on allowable administrative pilot activities. Another agency official told us that the agency did not participate in the administrative pilot program in fiscal year 2013 because the agency drafted its plan late in the fiscal year and could not make planned expenditures, but that the agency participated in the pilot program in fiscal year 2014. Finally, officials at one agency raised concerns that it was challenging to find new activities, in part because the agency was already doing or had recently done some things that they wanted to fund. For example, program managers for one agency told us that they wanted to use funds from the administrative pilot program to reinstate in-person award review panels, which were discontinued due to lack of funding. However, SBA refused to allow it, stating that funds from the administrative pilot program could only be used to support new activities. The agency officials said that they understand SBA’s reasoning but believed that restoring discontinued administrative activities should be considered new activities and allowed.

In fiscal year 2013, SBA requested that agencies submit data on the total amount spent on the administrative pilot program, but it did not request agencies to submit information on how they used the funds. The 2011 reauthorization of the programs requires SBA to provide Congress with a report on the use of administrative pilot program funds. Fiscal year 2013 was the first year of the pilot program, and SBA officials said they were still determining the information they needed to report to Congress. SBA officials told us that they did not have information about how agencies used their funds for the administrative pilot program and acknowledged that it would be useful to have. A list of the activities agencies initiated and the costs of each of these activities is one way that SBA could obtain further information on the use of funds. In response to our questions, SBA officials sent an e-mail to the 11 participating agencies requesting that they provide SBA with a summary of how funds for the administrative pilot program were used. In March 2015, the SBA officials said that they were in the process of receiving and clarifying the agency responses.
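The pilot's scale can be put in rough perspective with two figures from this section: the 3 percent ceiling on SBIR funds that may be used for administration and the gap between planned and obligated pilot spending. A minimal sketch, with a hypothetical $100 million award base for the cap example:

```python
# Back-of-the-envelope view of the fiscal year 2013 administrative pilot
# figures cited above. The $100 million SBIR award base in the cap example is
# hypothetical; the other numbers come from the text.
ADMIN_CAP_RATE = 0.03  # agencies could spend up to 3 percent of SBIR funds on administration

planned = 58.2e6       # what participating agencies estimated they would spend
obligated = 12.3e6     # what they actually obligated in fiscal year 2013

print(f"Share of planned pilot spending obligated: {obligated / planned:.0%}")                # 21%
print(f"Cap for an agency with $100 million in SBIR funds: ${100e6 * ADMIN_CAP_RATE:,.0f}")   # $3,000,000
```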
Ten of the agencies told us that they could provide the information to SBA if requested. Because SBA has not required agencies to submit information on how they used the funds, SBA cannot undertake a comprehensive evaluation of the performance of the administrative pilot program and provide greater transparency when reporting to Congress. Some program managers told us that they planned to spend more money on the administrative pilot program in fiscal years 2014 and 2015, but there is concern about the future of the pilot beyond fiscal year 2015. Specifically, officials at 7 of the 11 agencies that participated in the SBIR and STTR programs told us that they would prefer that the administrative pilot program were either extended or made permanent. Some officials said that the program should be extended to give SBA enough time to track whether pilot-funded initiatives were successful, while other officials were concerned about hiring new staff when the funding for the positions might not be available after fiscal year 2015. Federal agencies have awarded billions of dollars to small businesses under the SBIR and STTR programs to develop and commercialize innovative technologies. In our previous reports on these issues, we identified some areas where SBA could take actions to better ensure agencies’ compliance with spending and reporting requirements. For example, in our last report, we recommended that SBA clarify in the SBIR and STTR policy directives that agencies are supposed to spend the required amount each year, rather than reserving funds for future years to meet spending requirements. We also recommended that SBA request that agencies submit their methodology reports within 4 months of appropriations, as required by law. SBA officials said that they plan to take actions to address both of these recommendations but had not done so before agencies submitted their fiscal year 2013 data, and we found that these issues continue. Thus, we continue to believe the recommendations have merit and should be fully implemented. In addition to issues that could be addressed by implementing our prior recommendations, we identified three issues that—if left unaddressed— could affect compliance with spending and reporting requirements in the future. First, most agencies continued to provide SBA with data on their extramural R&D budgets, rather than the amounts they actually obligated for extramural R&D. In doing so, agencies have limited SBA’s ability to accurately assess whether they complied with program spending requirements. Without information on how much agencies obligated for extramural R&D, SBA cannot accurately determine and report to Congress on compliance with spending requirements. Program managers told us that they face challenges in submitting obligations data to SBA. For example, they cited difficulties in calculating their actual extramural R&D obligations and challenges in complying with spending requirements based on a figure that is not known until after the end of the fiscal year— when it is too late to obligate additional funds to comply with the requirements. Nevertheless, the Small Business Act defines the annual spending requirements as a percentage of their extramural R&D obligations. Without SBA notifying Congress or developing a proposal for Congress to change the requirement, agencies are likely to continue to face challenges in submitting the correct data to SBA and in complying with the law. 
Second, SBA did not include an analysis of agencies’ methodology reports in its most recent report to Congress for fiscal year 2012, and SBA officials told us it was not clear from the methodology reports how agencies were calculating their extramural R&D. Moreover, some program managers told us that the purpose of the methodology reports is not clear. The law requires SBA to include an analysis of the agencies’ methodology reports in its annual report to Congress. However, each of the agencies submits methodology reports of varying detail, with some providing limited information on how they calculated their extramural R&D budgets, making it difficult for SBA to analyze the agencies’ methodology reports. Without assessing whether the information it collects is adequate to analyze agencies’ methodology reports, SBA cannot ensure that it is providing Congress with an accurate analysis of how agencies calculate their extramural R&D. Third, the administrative pilot program provides agencies with an opportunity to expand their oversight and administration of the programs. In fiscal year 2013, SBA requested the total amount that agencies spent on the administrative pilot program from the agencies but did not require agencies to submit information on how they used the funds. Without this additional information, SBA cannot undertake a comprehensive evaluation of the performance of the administrative pilot program or provide greater transparency on the program to Congress. To ensure full compliance with SBIR and STTR spending and reporting requirements, we recommend that the SBA Administrator take the following three actions: Notify Congress in SBA’s annual report if it cannot determine agency compliance with program spending requirements when agencies that participate in the SBIR and/or STTR programs do not report extramural R&D obligations data, or develop a proposal to Congress that would change the requirement. Assess the methodology reporting requirement to determine whether it generates adequate information for SBA to analyze the accuracy of agencies’ calculations of their extramural R&D. If SBA finds that the information is inadequate, SBA should update its guidance to require adequate information. Provide greater transparency for the administrative pilot program by requiring participating agencies to provide data on the use of the funds, rather than a total cost for all of the activities under the pilot. We provided a draft of this report to SBA and the 11 participating agencies for review and comment. In an e-mail response, SBA agreed with our recommendations. SBA also provided technical comments, as did DHS and HHS, which we incorporated as appropriate. Seven of the agencies—Commerce, DOE, DOT, Education, EPA, NASA, and NSF—had no technical or written comments. The remaining agencies—DOD and USDA—provided written comments, which were reproduced in appendixes III and IV, respectively. In its written comments, the DOD Acting Director for the Office of Small Business Programs raised two issues. First, DOD recommended that we remove the assertion that the extramural budget is defined as actual obligations over the course of the year. DOD stated that the spending requirement is calculated within 4 months following the enactment of annual appropriations; therefore, the spending requirement must be based on the planned extramural R&D budget, which has an obligation period of 2 years, and not over the course of the year as defined in this report.
DOD is not required to calculate its spending requirement in the first 4 months after it receives its appropriation, but is only required to submit its methodology for calculating its spending requirement. In addition, we acknowledge that DOD generally has 2 years to obligate its R&D funds, but agencies are specifically required to spend the required amounts in each fiscal year. Moreover, the Small Business Act defines extramural R&D budget in terms of obligations. Nothing in the act indicates that “obligations” should be construed as “planned obligations.” Therefore, we continue to believe that the extramural R&D budget is defined as actual obligations over the course of the year. In its second point, DOD recommended submitting the extramural budget calculated in the methodology report, rather than actual extramural obligations, to SBA in the annual report. We recognize that using extramural R&D obligations makes it difficult for agencies to comply with spending requirements, and we recommend in this report that SBA notify Congress if it cannot determine compliance with spending requirements when agencies do not report extramural R&D obligations data, or develop a proposal to Congress that would change the requirement. In its written comments, USDA’s Director of the National Institute of Food and Agriculture stated that USDA generally agrees with our report. The official stated that because total obligations for extramural R&D are not known until the end of the fiscal year, it is difficult for the agency to ensure that funding targets for SBIR are met every year. USDA agreed with our recommendation that SBA should submit a proposal to Congress to change the requirement. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Education, Energy, Homeland Security, Health and Human Services, and Transportation; the Administrators of the Small Business Administration, the Environmental Protection Agency, and the National Aeronautics and Space Administration; the Director of the National Science Foundation; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The data that the agencies submitted to the Small Business Administration (SBA) indicate that 9 of the 11 participating agencies spent amounts for the Small Business Innovation Research (SBIR) program that met or exceeded their fiscal year 2013 spending requirements, while spending for the remaining 2 agencies did not meet the requirements. (See table 4.) The data that the agencies submitted to the Small Business Administration (SBA) indicate that four of the five participating agencies spent amounts for the Small Business Technology Transfer (STTR) program that met or exceeded their fiscal year 2013 spending requirements, while one agency did not. (See table 5.) In addition to the individual named above, Hilary Benedict, Assistant Director; Jeffrey Barron; Andrew Burton; Antoinette Capaccio; Cindy Gilbert; Marya Link; Perry Lusk; Cynthia Norris; and Dan Royer made key contributions to this report.
| Federal agencies have awarded more than 156,000 contracts and grants, totaling nearly $40 billion through the SBIR and STTR programs to small businesses to develop and commercialize innovative technologies. The Small Business Act requires agencies with extramural R&D obligations that meet certain thresholds for participation—$100 million for SBIR and $1 billion for STTR—to spend a percentage of these funds on the programs. The agencies are to report on their activities to SBA and, in turn, SBA is to report to Congress. The 2011 reauthorization of the programs mandated GAO to review compliance with spending and reporting requirements, as well as other program aspects. This report examines, for fiscal year 2013, (1) the extent to which agencies complied with spending requirements, (2) the extent to which agencies and SBA complied with certain reporting requirements, (3) the potential effects of basing spending requirements on total R&D budgets, and (4) what is known about the amounts spent on administering the programs. GAO reviewed agency spending data and required reports for fiscal year 2013 and interviewed program officials from SBA and the participating agencies. The Small Business Administration's (SBA) ability to fully determine compliance with spending requirements for the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs for fiscal year 2013 is limited because most agencies submitted incorrect data. Nevertheless, analyzing agency data submitted to SBA suggests that 9 of the 11 agencies participating in the SBIR program and 4 of the 5 agencies participating in the STTR program complied with spending requirements for fiscal year 2013. Specifically, agencies are required to submit the actual amount obligated for extramural research or research and development (R&D)—which is generally conducted by nonfederal employees outside of federal facilities—and these obligations are the basis for calculating the agencies' spending requirements. However, most agencies submitted budget data instead. Program managers raised concerns about the difficulties in determining the amount of extramural R&D obligations and the challenges in using this amount to calculate spending requirements, as extramural R&D obligations are not known until after the end of the fiscal year. However, without the required data, SBA cannot accurately report on agencies' compliance with spending requirements—as defined in the law—to Congress. Some agencies did not comply with certain methodology reporting requirements for the programs. For example, 3 of the 11 participating agencies did not itemize the specific programs they excluded from their extramural R&D in their required methodology reports, or did not explain the reasons why they excluded the programs, or both. GAO also found that SBA did not assess whether the information it collected was adequate to appropriately analyze agencies' methodology reports. Without such an assessment, SBA cannot provide Congress with an accurate analysis of how agencies calculate their extramural R&D. Furthermore, SBA has not issued its required report to Congress on the programs for fiscal year 2013. Basing the programs' spending requirements on total R&D instead of extramural R&D could increase the amount of each agency's spending requirement and increase the number of agencies required to participate. 
Some agency officials said that basing the calculation methodology on their total R&D budget would make administering their programs easier, but officials at other agencies said that the change could result in reduced funding for intramural research and extramural research outside of the SBIR and STTR programs. Little is known about total administrative spending on the programs for fiscal year 2013 because the agencies that participate are not required to and do not fully track these costs. Six agencies participated in an administrative pilot program that allowed them to spend program funds on new administrative and oversight activities in fiscal year 2013. These agencies reported spending $12.3 million on these activities, but this amount does not represent total administrative spending. Additionally, this is about 20 percent of what the agencies had planned to spend on the administrative pilot program at the beginning of the fiscal year. Program managers at seven agencies told GAO that they would prefer that the administrative pilot program were either extended or made permanent. GAO recommends, among other things, that SBA notify Congress if it cannot determine agency compliance with spending requirements and assess the adequacy of the methodology reporting requirement. SBA generally agreed with GAO's findings and recommendations. |
Small employers with low-wage employees are less likely to offer health insurance than large employers with low-wage employees, as shown in figure 1. A combination of factors explains why small, low-wage employers tend not to offer health insurance. For very low-wage employees, such as minimum wage employees, health insurance drives up employers’ total compensation costs. (In general, the federal minimum wage is $7.25 per hour; many states also have minimum wage laws, and minimum wages vary from state to state.) Low-wage employees working for small employers generally prefer to receive wages over insurance benefits as part of total compensation. While employees pay both income and employment tax on wages, employees do not have to pay income or employment taxes on premiums paid by their employers for health insurance. However, for low-wage employees, the income tax exclusion is worth less relative to cash wages than for higher-income employees because low-wage employees may be in a lower income tax bracket. In addition, health insurance plans available to small employers are likely to have higher premiums or have less coverage and higher out-of-pocket costs than plans offered to large employers. IRS’s Small Business and Self-Employed Division (SB/SE) and Tax Exempt and Government Entities Division (TEGE) are primarily responsible for implementing the credit. IRS works with the Department of Health and Human Services (HHS) and the Small Business Administration (SBA) on implementation tasks, such as outreach and communication. To be eligible, an employer must: Be a small business or tax-exempt employer located in or having trade or business income in the United States and pay premiums for employee health insurance coverage issued in the United States. Employ fewer than 25 full-time-equivalent (FTE) employees in the tax year (excluding certain employees, such as business owners and their family members). Pay average annual wages of less than $50,000 per FTE in the tax year. Offer health insurance and pay at least 50 percent of the health insurance premium under a “qualifying arrangement.” This means that the employer uniformly pays at least 50 percent of the cost of premiums for enrolled employees, although IRS did develop relaxed criteria for meeting this requirement for tax year 2010. The President’s fiscal year 2013 budget request contains a proposal for expanding the credit’s eligibility criteria to include employers with 50 or fewer FTEs and removing the uniform contribution requirement. The amount of the credit that employers can claim depends on several factors. Through 2013, small businesses can receive up to 35 percent and tax-exempt entities can receive up to 25 percent of their base payments for employee health insurance premiums; these portions rise to 50 percent and 35 percent, respectively, starting in 2014. Employers can receive the full credit percentage if they have 10 or fewer FTEs and pay an average of $25,000 or less in annual wages; employers with 11 to 25 FTEs and average wages exceeding $25,000 up to $50,000 are eligible for a partial credit that “phases” out to zero percent of premium payments as the FTE and wage amounts rise. Figure 2 shows the phaseout of the credit for small businesses; the phaseout for tax-exempt entities follows a similar pattern, up to 25 percent of health insurance premiums. Further, the amount of the credit is limited if the premiums paid by an employer are more than the average premiums determined by HHS for the small group market in the state in which the employer offers insurance.
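The rules just described can be expressed as a short, purely illustrative sketch. The Python example below assumes linear phaseout fractions of (FTEs - 10)/15 and (average wages - $25,000)/$25,000, which reflect our reading of the general statutory approach; it treats premiums and the state average cap as single aggregate amounts, omits details such as the 50 percent contribution test and employee exclusions, and is not tax guidance.

```python
# Illustrative sketch of the small business credit calculation for tax years
# 2010-2013 (35 percent credit rate). The linear phaseout fractions are
# assumptions drawn from the general statutory approach; many Form 8941
# details are omitted.

def small_business_credit(premiums_paid, state_avg_premium, ftes, avg_wages,
                          credit_rate=0.35):
    """Return an estimated credit amount, or 0 if the employer is ineligible."""
    # Basic eligibility: fewer than 25 FTEs and average wages under $50,000.
    if ftes >= 25 or avg_wages >= 50000:
        return 0.0

    # The premium base is capped at the state average premium determined by HHS.
    allowable_premium = min(premiums_paid, state_avg_premium)
    tentative_credit = credit_rate * allowable_premium

    # Phase the credit out above 10 FTEs and $25,000 in average annual wages.
    fte_reduction = max(0.0, (ftes - 10) / 15) * tentative_credit
    wage_reduction = max(0.0, (avg_wages - 25000) / 25000) * tentative_credit

    return max(0.0, tentative_credit - fte_reduction - wage_reduction)

# Hypothetical employer: 12 FTEs, $30,000 average wages, $40,000 in premiums
# paid, capped at $36,000 by the state average. The partial credit works out
# to about $8,400 under these assumptions.
print(round(small_business_credit(40000, 36000, ftes=12, avg_wages=30000)))
```

Under these assumptions, the same employer with 10 or fewer FTEs and average wages of $25,000 or less would receive the full 35 percent of the allowable premium.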
The credit percentage is multiplied by the allowable premium to calculate the dollar amount of credit claimed. For example, in Alabama, the state average premium was $4,441 for a single employee in 2010. If an employer claiming the credit in Alabama paid $5,000 for a single employee’s health premium, the credit would be calculated using the state average premium of $4,441 rather than the actual premium paid. Appendix II shows the average premiums by state. The proposal in the President’s Budget suggests beginning the phaseout at 21 FTEs, rather than 11, as well as providing for a more gradual combined phaseout for the credit percentages and removing the state market limits. Employers are to calculate the credit amount on IRS Form 8941, “Credit for Small Employer Health Insurance Premiums.” Small businesses are to claim the credit as part of the general business tax credit (on Form 3800), and use it to offset actual tax liability. If they do not have a federal tax liability, they cannot receive the credit as a refund but may carry the credit forward or back to offset tax liabilities for other years. Credit amounts claimed by partnerships and S corporations are to be passed through to their partners and shareholders, respectively, who may claim their portions of the credit on their individual income tax returns. Tax-exempt entities are to claim the credit on Form 990-T, “Exempt Organization Business Income Tax Return,” and receive the credit as a refund even though the employer has no taxable income. Employers that claim the credit can also deduct health insurance expenses on their tax returns but must subtract the amount of the credit from the deduction. Employers can claim the credit for up to 6 years—the initial 4 years from 2010 through 2013 and any 2 consecutive years after 2013 if they buy insurance through the Small Business Health Option Programs, which are part of the insurance exchanges to be established under PPACA. Fewer small employers claimed the credit for tax year 2010 than were thought to be eligible based on rough estimates of eligible employers made by government agencies and small business groups. IRS data on total claimants, adjusted to account for claims by partners and shareholders, show that about 170,300 small employers made claims for the credit in 2010. (See app. III for adjustments to determine claims filed by employers.) The average credit amount claimed was about $2,700. Limited information is available on the distribution of claim amounts for business entities because IRS focuses its data collection on the taxpayers filing credit claims, who may be partners or shareholders claiming their portions of a business entity’s credit. Appendix III provides additional detail. Selected estimates, made by government agencies and small business groups, of employers eligible for the credit range from around 1.4 million to 4 million. However, data limitations mean that these estimates are necessarily rough. Based on our review of available data sources on the three basic eligibility rules for the credit—involving wages, FTEs, and health insurance—it is not possible to combine data from various sources to closely match these rules. (See app. VI for details.) Though statistical modeling corrects for imperfect data to match these rules, models are not precise. While acknowledging the data limitations, several entities produced estimates of the number of employers potentially eligible for the credit. 
The Council of Economic Advisers estimated 4 million, and SBA estimated 2.6 million. Other groups making estimates included small business groups such as the Small Business Majority (SBM) and the National Federation of Independent Businesses (NFIB). Their estimates were 4 million and 1.4 million, respectively. A similar pattern is seen when the dollar value of credits actually claimed is compared to initial estimates. The dollar value of claims made in 2010 was $468 million compared to initial cost estimates of $2 billion for 2010 (a CBO and JCT joint estimate). Most of the claims were for less than the full credit percentage. Of the approximately 170,300 small employers making claims for tax year 2010, 142,200—83 percent—could not use the full credit percentage. Usually employers could not meet the average wage requirement to claim the full percentage, as about 68 percent did not qualify based on wages but did meet the FTE requirement. (See fig. 3.) State average premiums also reduced some credit amounts by reducing the amount of the premium base against which the credit percentage is applied. This premium base may be reduced when it exceeds the state average premiums for small group plans, as determined by HHS. If so, small employers are to use the state average amount, which in essence caps the premium amount used to calculate their credit. According to IRS data, this cap reduced the credit for around 30 percent of employer claims. For example, a nonprofit representative told us that her credit dropped from $7,900 to $3,070 because of the cap in her state. (See app. II for small group average premiums in all states.) As already discussed, small employers do not commonly offer health insurance. MEPS estimates that 83 percent of employers who may otherwise be eligible for the full credit did not offer health insurance in 2010 and that 67 percent of employers who could be eligible for the partial credit did not offer insurance. Our discussion groups and other interviewees confirmed this, with comments and examples of small, low-wage employers not offering health insurance to employees. Furthermore, small employers likely do not view the credit as a big enough incentive to begin offering health insurance and to make a credit claim, according to employer representatives, tax preparers, and insurance brokers we met with. While some small employers could be eligible for the credit if they began to offer health insurance, small business group representatives and discussion group participants told us that the credit may not offset costs enough to justify a new outlay for health insurance premiums. Related to this concern, the credit being available for 6 years overall and just 2 consecutive years after 2014 further detracts from any potential incentive to small employers to start offering health insurance in order to claim the credit. Most discussion group participants and groups we interviewed found the tax credit to be complicated, deterring small employers from claiming it. The complexity arises from the various eligibility requirements, the various data that must be recorded and collected, and the number of worksheets to be completed. A major complaint we heard centered on gathering information for and calculating FTEs and the health insurance premiums associated with those FTEs.
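To make that data-gathering burden concrete, the sketch below shows the kind of arithmetic involved in computing FTEs and average annual wages. The rounding conventions shown (hours capped at 2,080 per employee, FTEs rounded down to a whole number, average wages rounded down to the nearest $1,000) reflect our understanding of the general approach in the Form 8941 instructions; exclusions for owners, family members, and seasonal employees are omitted, so this is an illustration rather than a complete implementation.

```python
# Illustrative sketch of the FTE and average-wage arithmetic underlying the
# credit. Employee-level hours and wages must be gathered before either
# figure can be computed, which is the burden described above.

def full_time_equivalents(hours_by_employee):
    """Total hours of service, capped at 2,080 per employee, divided by 2,080."""
    total_hours = sum(min(hours, 2080) for hours in hours_by_employee)
    return total_hours // 2080  # rounded down to a whole number of FTEs

def average_annual_wages(total_wages, ftes):
    """Total wages divided by FTEs, rounded down to the nearest $1,000."""
    return (total_wages // ftes) // 1000 * 1000

hours = [2080, 2080, 1040, 1040, 520]      # five employees with varying hours
ftes = full_time_equivalents(hours)         # 6,760 hours of service -> 3 FTEs
wages = average_annual_wages(72000, ftes)   # $72,000 in total wages -> $24,000
print(ftes, wages)
```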
Eligible employers reportedly did not have the number of hours worked for each employee readily available to calculate FTEs and their associated average annual wages, nor did they have the required health insurance information for each employee readily available. Exclusions from the definition of “employee” and other rules make the calculations complex. For example, seasonal employees are excluded from FTE counts but insurance premiums paid on their behalf count toward the employer’s credit. Incorporating the phaseout also complicates the credit calculation. In our discussion groups with tax preparers, we heard that small business owners generally do not want to spend the time or money to gather the necessary information to calculate the credit, given that the credit will likely be insubstantial. Tax preparers told us it could take their clients from 2 to 8 hours or possibly longer to gather the necessary information to calculate the credit and that the tax preparers spent, in general, 3 to 5 hours calculating the credit. We did hear from a couple of participants—a small business owner and a nonprofit representative—that they did not find the credit overly burdensome. Tax preparers we interviewed said that IRS did the best it could with the Form 8941 given the credit’s complexity. IRS officials said they did not receive criticism about Form 8941 itself but did hear that the instructions and its seven worksheets were too long and cumbersome for some claimants and tax preparers. On its website, IRS tried to reduce the burden on taxpayers by offering “3 Simple Steps” as a screening tool to help taxpayers determine whether they might be eligible for the credit. However, to calculate the actual dollars that can be claimed, the three steps become 15 calculations, 11 of which are based on seven worksheets, some of which request multiple columns of information. Figure 4 aligns IRS’s “3 Simple Steps” with the seven worksheets in the instructions for Form 8941 and the lines on Form 8941. (See app. V for the full text of this figure.) Given the effort involved to make a claim and the uncertainty about the credit amounts, a few discussion group participants said it would be helpful to be able to quickly estimate employers’ eligibility for the credit and the amount they might receive; this would help them to decide whether the credit would be worth the effort, although this would not reduce the complication of filling out Form 8941 because, to fill out the form, full documentation would need to be reviewed. IRS’s Taxpayer Advocate Service is developing a calculator for IRS’s website to quickly estimate an employer’s eligibility, but this will still require gathering information such as wages, FTEs, and insurance plans. We also heard concerns that a calculator could cause confusion for clients who find they are eligible when quickly estimating the credit but then turn out to be ineligible or find they are eligible for a smaller credit when their accountant fills out Form 8941. Many small businesses reported that they were unaware of the credit. The NFIB Research Foundation and the Kaiser Family Foundation both estimated that approximately 50 percent of small businesses were aware of the credit, as of May 2011, or more than 1 year after Congress authorized this credit. The extent to which being unaware prevented eligible employers from claiming the credit for tax year 2010 is not known.
Some discussion group participants raised concerns about unawareness, but they also cited other factors limiting credit claims for tax year 2010. If 50 percent of small businesses knew about the credit, then the approximately 170,300 claims represent a relatively small proportion of those that were knowledgeable. This indicates that other factors contributed to employers not claiming the credit. Further, it is hard to interpret the impact of awareness on claims because these surveys included an unknown number of small business employers that would not be eligible for the credit regardless of their awareness. For those employers that were unaware, the surveys did not account for their accountants or tax preparers that may have known about the credit but did not tell their clients about it because they did not believe their clients would qualify or because the credit amount would be very small. In addition, the surveys did not cover tax-exempt entities. To raise awareness of the credit, IRS did significant outreach. IRS developed a communication and outreach plan, written materials on the credit, a video, and a website. IRS officials also reached out to interest groups about the credit and developed a list of target audiences and presentation topics. IRS officials began speaking at events in April 2010 to discuss the credit and attended over 1,500 in-person or web-based events from April 2010 to February 2012. Discussion of the credit at the events varied from a portion of a presentation covering many topics to events that focused on the credit with a dedicated discussion period. IRS does not know whether its outreach efforts actually increased awareness of the credit or were otherwise cost-effective. It would be challenging to estimate the impact of IRS’s outreach efforts on awareness with a rigorous methodology; however, based on ongoing feedback they received from interest groups, IRS officials told us they believe their efforts have been worthwhile. IRS used some feedback from focus groups of tax preparers and from other sources to revise its outreach efforts. For example, IRS modified its outreach from initially focusing on tax preparers and small employers to including insurance brokers in 2012. Given that most small employers do not offer insurance and what we heard about the size of the credit not being big enough to incentivize offering health insurance, it may not be possible to significantly expand credit use without changing the credit’s eligibility. Most claims were for partial credits, and many people we spoke with view the credit amount as too small and temporary to justify providing health insurance when none is provided now. In addition, given that IRS has conducted extensive outreach about the credit, it is not likely that more outreach would significantly increase the number of businesses claiming the credit. Amending the eligibility requirements or increasing the amount of the credit may allow more businesses to take advantage of the credit, but these changes would increase its cost to the Federal government. Options include the following: Increasing the amount of the full credit, the partial credit, or both. Increasing the amount of the credit for some by eliminating state premium averages. Expanding eligibility requirements by increasing the number of FTEs and the wage limit allowable for employers to claim the partial credit, the full credit, or both. This expansion would not, however, likely affect the smallest employers, which do not offer health insurance.
Simplifying the calculation of the credit in the following ways: Using the number of employees and wage information already reported on the employer’s tax return. This could reduce the amount of data gathering as well as credit calculations because eligibility would be based on the number of employees and not FTEs. A trade-off with this option would be less precision in targeting the full and partial credit amount to specific small employer subgroups. Offering a flat credit amount per FTE (or number of employees) rather than a percentage, which would reduce the precision in targeting the credit. The data limitations that made it difficult to estimate the number of businesses eligible for the current credit also make it difficult to estimate the impact of any design changes. IRS’s compliance efforts for the credit incorporate practices that have been shown effective in helping to ensure compliance with other tax provisions or are consistent with IRS strategic objectives. Some of those practices were used for the Telephone Excise Tax Refund (TETR) and Consolidated Omnibus Budget Reconciliation Act (COBRA) subsidies for health insurance for the unemployed, according to IRS officials. Specifically, IRS is doing the following: Using computerized filters to review credit claims on Forms 8941 for certain errors or potential problems that may trigger an examination of the claim. Transcribing more lines of data from Form 8941 into IRS computer systems, which should make the filters more effective. Although transcribing more lines increases processing and data storage costs, IRS plans to transcribe more lines for tax years 2011 and 2012 claims to ensure better verification of eligibility. Freezing refunds of tax-exempt entities whose returns have been selected for examination, which avoids the costs of trying to recover funds. Considering the documentation burden on claimants. IRS did not require claimants to submit documentation on health insurance premiums with their Form 8941; IRS officials said they will review examination results and may revisit the decision not to require documentation if results suggest that such documentation would improve compliance checks. Modifying filters, as needed, in response to observed trends. For example, a filter that applies to tax-exempt organization claims was tripped by about a quarter of claimant organizations, as of December 31, 2011. IRS officials said some eligible tax-exempt entities tripped the filter because it was too broad. To address this, IRS modified the filter to more clearly identify qualifying tax-exempt organizations. Completing a risk assessment on compliance issues related to the credit. The assessment identified risks involving refunds for tax-exempt entities, difficulties verifying employment tax return information for certain employers, and not using existing Math Error Authority (MEA). Considering the costs and benefits of MEA for the credit. IRS officials identified three filters whose types of errors could be addressed with MEA. They noted that less than 1 percent of Forms 8941 tripped one or more of those filters, which IRS officials said does not justify the costs to develop procedures to use MEA, if it were granted. IRS developed 21 filters for Form 8941, some of which apply differently to SB/SE and TEGE taxpayers. The filters cover some of the eligibility requirements for the credit.
Errors on about 3.5 percent (11,763) of Forms 8941 for tax year 2010 tripped 1 or more filters; almost half of those forms were from tax-exempt entities. According to IRS officials, the filter failure rate is consistent with other recent tax credits. The filters do not cover all of the credit’s requirements for several data- related reasons. In one case, data are not included on Form 8941 but may be included on worksheets required to be retained by claimants (e.g., information on business owner family members or seasonal employees included in credit calculations); in another case, certain data are not transcribed (e.g., the credit amount for certain claimants). For other requirements, IRS officials stated that reasonable filters cannot easily be developed because of challenges with matching data. Some Form 8941 filters also face limitations mainly because of problems with data or IRS’s systems. Filters are mutually exclusive, meaning that filters on related requirements are viewed in isolation. However, according to IRS officials, IRS has ways to identify whether a form failed more than one filter, which IRS considers when identifying returns for potential examination. Some filters may mistakenly target eligible claimants because the filters rely on general thresholds in Form 8941 data or, in some cases, other IRS data (such as employee-level data) that are not exact matches to data on the Form 8941. Data on Forms W-2 (employees’ annual Wage and Tax Statement) could provide additional data for filters once the provision in PPACA is implemented that requires employers to report the cost—including both employer and employee contributions—of certain types of health insurance provided to an employee. IRS officials said the data have limited use because, among other things, they would not provide details for determining whether an employer met the credit’s requirements for health insurance; therefore, IRS officials will not pursue using the data at this time. Nevertheless, the data could be used in a filter to identify claimants who reported no health insurance contributions on Form W-2 and therefore may not be offering health insurance. In the absence of other documentation or third-party reporting on health insurance, using Form W-2 data in a filter could be a cost-effective, rough indicator of whether a claimant is paying employee health insurance premiums, without increasing taxpayer burden. However, IRS provided transition relief to employers that file fewer than 250 Forms W-2 per year, and issued guidance stating that these employers will not be required to report the data until further guidance is issued. As a result, it is unlikely that the data could be useful before 2014, the year when the credit will only be available to employers for any 2 consecutive years. After the filters are run, IRS creates lists of claims to consider for further examination. SB/SE wanted enough examination cases to spot check different filters and claims from different regions, to enable them to establish a field presence and to learn about compliance risks with the credit, according to an SB/SE official. Examination staff in SB/SE and TEGE are to follow a set of instructions when doing examinations. SB/SE’s examination instructions address all of the credit’s requirements for small businesses to claim the credit except that they do not include specific instructions for examiners on determining eligibility of claimants with non-U.S. addresses. 
An employer located outside of the United States with a business or trade interest in the United States may claim the credit only if the employer pays premiums for coverage issued in and regulated by one of the states or the District of Columbia. Without a prompt in examination instructions, IRS examiners may overlook claimants that do not comply with the address requirements. An SB/SE official said IRS has no instructions for examiners to review claimants with non-U.S. addresses during an examination on the credit because potential compliance problems with businesses with non-U.S. addresses exist for other tax credits. This, however, was not IRS’s approach for another general business tax issue relevant to the credit—whether claimants that carry back the credit to offset tax liabilities in previous years did so properly. Near the end of our work, SB/SE added guidance to one of its examination instruction documents to cover the carry back issue. Instructions for TEGE examiners also address most of the eligibility requirements to claim the credits, but, like SB/SE’s, TEGE examination instructions do not address how to review claimants with non-U.S. addresses. Further, TEGE instructions for some of the credit’s requirements have less detail compared to SB/SE’s instructions. TEGE’s instructions provide steps on how to determine if an employer’s insurance premiums paid met “qualifying arrangement” and other criteria, but they provide less detail than SB/SE instructions. For example, SB/SE guidance instructs examiners to review health insurance policies and invoices to confirm premium payments, and to review other documentation to check whether the employer offers health benefits that are not eligible for the credit. TEGE instructions do not suggest these steps and also do not provide a prompt for examiners to ensure that insurance premiums paid on behalf of seasonal employees are included in calculations. According to IRS officials, the TEGE examiners are trained specifically for doing examinations on the credit and therefore need less guidance than SB/SE examiners, who work on multiple issues simultaneously. However, TEGE examination documents contain detailed guidance in a workbook format for these trained examiners on other credit requirements. Without detailed guidance for TEGE examiners that instructs them on how to examine health insurance documents, examiners may not consistently identify noncompliance, which could lead to erroneous credit refunds. This could particularly be the case as examining health insurance documents to check eligibility for this new credit has not been typical work for these examiners. For tax year 2010, SB/SE plans to conduct over 1,500 examinations related to the credit, and TEGE anticipates about 1,000 examinations. An SB/SE official said the number of examinations is expected to provide initial compliance information and allow IRS to establish a compliance presence without committing too many resources initially. TEGE selected its number of examinations based on resource decisions, before tax year 2010 claims began. Neither SB/SE nor TEGE adjusted the number of examinations once actual claim numbers were known. As a result, the percentage of TEGE claims being examined is high, according to a TEGE official. Table 1 summarizes the status of IRS’s examinations on the credit. IRS’s database on examination results tracks the aggregate dollar amount of tax changes as a result of the examination but does not contain the reason a change is made. 
Consequently, IRS is not able to isolate and analyze examination results related to the credit versus other tax issues. This is particularly a problem for SB/SE examinations, which may cover issues other than the credit. Instead, as initial examinations have closed, IRS officials said that management has spoken with examiners about findings related to the credit. This has been possible because of the relatively low initial volume of cases, but this approach may not be feasible as results accumulate. Therefore, it is not clear how IRS can efficiently analyze results to decide whether changes are necessary in how it examines the credit or how it educates small employers about how to comply with the credit’s rules, and whether it committed too many or too few resources to examinations of the credit. Furthermore, IRS does not have criteria for deciding whether the resources spent on examinations of the credit are appropriate, given the number of errors found. IRS officials said that for future years they plan to select the number of credit examinations based on past results, identified compliance risks, and available resources. However, without criteria to assess the results in concert with these risks and resources, IRS is less able to ensure that examination resources target errors with the credit, rather than examining compliant claimants. For example, early examination results (as of February 2012) show that 67 percent of the examinations completed were closed without changing the credit amount. Examinations without a change burden taxpayers and use IRS resources. We recognize that few of the planned examinations have been completed and the “no change” percentage could change. According to IRS officials, cases resulting in “no change” tend to be the first cases closed because they close more quickly than cases requiring a change. However, IRS is not using change rate information from prior tax credits to determine if examinations for the credit have a “high” no-change rate, which could be one indicator to help decide how many examination resources to apply to the credit. IRS officials said they do not plan to use data from examinations of other tax provisions to benchmark measures—such as the no-change rate or length of time an examination is open—because results would not be comparable. A summary of examination results specific to the credit could also inform decisions about using additional compliance tools such as soft notices. In the past, IRS has used soft notices to correct errors and collect funds without initiating an examination. An IRS official involved in implementing the credit said IRS has not ruled out using soft notices, but examination results would need to identify an issue that would justify their use. He said soft notices are not effective for all taxpayers or situations. He said IRS would consider using soft notices if officials found a series of returns with mistakes from the same tax preparer or promoter of tax schemes. Furthermore, soft notices may necessitate follow-up, which would negate some of the advantages of the notices. If IRS analysis showed that examinations were not a cost-effective way to pursue certain errors made in claiming a credit, a soft notice may offer another approach to improving compliance with lower costs to IRS and less burden on claimants. There are a variety of research questions that could be of interest to policymakers about the effects of the credit that cannot be evaluated with data currently available.
Figure 5 shows how the credit may influence employer behavior and, ultimately, employees. To answer research questions about the credit’s potential outcomes shown in figure 5, the following are examples of data that might be needed: number of small, low-wage employers offering health insurance, before and after the credit was available; number of employees at small, low-wage employers, who have or could obtain health insurance through their employers; and amount of annual health insurance premium costs for small, low-wage employers before and after the credit. None of these data are readily available or free of limitations, which complicates an evaluation. For example, the available data on employer-sponsored health insurance do not align with the credit’s eligibility criteria, according to our interviews with subject matter specialists and our review of the data (see app. VI for a summary of the data sources), nor could we identify a data source that tracks when, and why, employers begin offering insurance. As a result of the limitations with all three types of data, it would be difficult to precisely measure changes in health insurance availability, offering, and costs because of the credit, without collecting additional data. Isolating influential factors—such as those shown in figure 5—that may contribute to the effects of the credit would also be a challenge in an evaluation. IRS officials said they will not collect data on credit claimants, outside of those collected on Form 8941. IRS’s position on data collection for all provisions of the tax code is that it only collects data it needs to ensure compliance with the tax laws. Collecting additional data needed for policy evaluation would have costs, and the magnitude of those costs would depend on the type and amount of data needed, which depends on the research questions being asked. An additional consideration in thinking about the benefits and costs of additional data collection for policy evaluation purposes is the limited time period for claiming the credit. The current version of the credit runs through the end of 2014. Policymakers’ conclusions about the questions to be answered by any evaluations of the credit’s effects would determine the type of data that would need to be collected. The Small Employer Health Insurance Tax Credit was intended to offer an incentive for small, low-wage employers to provide health insurance. However, utilization of the credit has been lower than expected, with the available evidence suggesting that the design of the credit is a large part of the reason why. While the credit could be redesigned, such changes come with trade-offs. Changing the credit to expand eligibility or make it more generous would increase the revenue loss to the federal government. In administering the credit to ensure compliance, IRS employed a number of practices that were shown effective for other tax provisions or are consistent with IRS strategic objectives. Nevertheless, we identified several opportunities for IRS to either improve compliance or perhaps reduce the resources it is devoting to ensuring compliance. Without additional guidance for examiners on employers with non-U.S. addresses, there is a risk of improper credit claims being allowed. Without more systematic attention to early examination results, IRS could lock itself into devoting more scarce resources than needed to examinations.
To help ensure thoroughness and consistency of examinations on the credit, we recommend that the Commissioner of Internal Revenue take the following two actions: 1. Revise the SB/SE and TEGE examination instructions to include instructions for examiners on how to confirm eligibility for the credit for small employers with non-U.S. addresses. 2. Revise the TEGE examination guidance to include more detailed instructions for examiners on how to confirm that claimants properly calculated eligible health insurance premiums paid for purposes of the credit. The SB/SE examination instructions could serve as a model. To help ensure that IRS uses its examination resources efficiently, we recommend that the Commissioner of Internal Revenue take the following two actions: 3. Document and analyze the results of examinations involving the credit to identify how much of those results are related to the credit versus other tax issues being examined, what errors are being made in claiming the credit, and when the examinations of the credit are worth the resource investment. 4. Related to the above analysis of examination results on the credit, identify the types of errors with the credit that could be addressed with alternative approaches, such as soft notices. In an April 30, 2012, letter responding to a draft of this report (which is reprinted in app. VII), the IRS Deputy Commissioner for Services and Enforcement provided comments on our findings and recommendations as well as information on additional agency efforts related to implementing the Small Employer Health Insurance Tax Credit in PPACA. IRS generally agreed with all four of our recommendations. Regarding our recommendation on examination instructions related to small employers with non-U.S. addresses, IRS stated that SB/SE will provide additional guidance in its instructions and that TEGE has added guidance to its instructions. On May 1, 2012, IRS provided a copy of the TEGE instructions, which we are reviewing. On our recommendation on revising TEGE’s examination guidance, IRS’s letter said that on April 13, 2012, TEGE implemented more detailed instructions in its examination guidance related to confirming proper calculations of eligible health insurance premiums paid for purposes of the credit. These instructions were also included in the TEGE document provided on May 1, 2012. With regard to analyzing credit examination results to identify compliance issues specific to the credit, IRS said it regularly analyzes audit results to determine whether resources are expended efficiently, though its information systems do not currently capture adjustments by issue, such as this tax credit. IRS agreed to leverage existing information systems and, as appropriate, to allocate resources to manually analyze examination results. IRS said this will include, as feasible, identifying the types and amounts of errors related to the credit. We reiterate the benefit of documenting and analyzing the results of examinations involving the credit. If it does not do so, IRS will not have information for determining whether examinations of the credit are worth the resource investment. Regarding our fourth recommendation on using examination results to determine whether alternative compliance approaches, such as soft notices, could help address errors with the credit, IRS agreed to continue to review its compliance efforts to determine whether soft notices would be appropriate. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or at whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. To assess the extent to which the Small Employer Health Insurance Tax Credit (referred to in this report as the credit) is being claimed, we obtained and analyzed Internal Revenue Service (IRS) data on the claims on Form 8941 for tax year 2010. We interviewed responsible IRS staff and examined background materials. IRS provided a report from the Form 8941 data, and we reviewed the programming code that created that report. We corroborated the results of this IRS report with a Treasury Inspector General for Tax Administration (TIGTA) report published in November and found similarities. The data were found to be sufficiently reliable for our purposes. We identified estimates of employers that were potentially eligible to claim the credit by reviewing reports and websites of government agencies, think tanks, and interest groups. When possible, we interviewed officials from the government agencies and business groups that developed estimates. To identify any factors limiting credit claims, we interviewed groups representing employers, tax preparers, and insurance brokers, and to assess how these factors could be addressed, we analyzed our interview results as well as relevant documents. Specifically, we spoke with representatives of the National Federation of Independent Businesses, the National Council of Nonprofits, the Small Business Majority, the U.S. Chamber of Commerce, the American Institute of Certified Public Accountants, America’s Health Insurance Plans, the National Society of Accountants, the National Association of Enrolled Agents, and the National Association of Health Underwriters. We worked with some of these groups to assemble discussion groups with tax preparers, health insurance brokers, and employers to discuss potential factors and ways to address them. Discussion groups were, for the most part, telephone conferences. We also spoke with insurance and tax preparation companies, specifically, BlueCross Blue Shield of Kansas City, Independent Health of New York, H&R Block’s Tax Institute, and Jackson Hewitt Tax Service. We used qualitative analysis software to do a content analysis of the interviews and discussion group comments. To provide additional support for discussion group and interview findings, we reviewed documents and, where possible, we identified data from IRS, the 2010 Medical Expenditure Panel Survey, or the 2011 Kaiser Family Foundation Health Benefits Survey.
At IRS, we interviewed officials from the Small Business/Self-Employed Division (SB/SE), including officials in the Communications and Liaison Office; the Tax Exempt and Government Entities Division (TEGE); the Research and Analysis for Tax Administration division; and the Taxpayer Advocate Service. To assess how fully IRS is ensuring that the tax credit is correctly claimed by eligible employers, we reviewed IRS’s compliance plan and filters and instructions for IRS staff conducting examinations, and compared these documents with compliance practices used for prior tax provisions and found in IRS strategic objectives. We also highlighted any gaps between filters and examination instructions and the credit’s eligibility rules. We reviewed the filter results for tax year 2010 claims and interviewed SB/SE and TEGE officials about compliance efforts. To assess what would be needed to evaluate the effects of the credit, we conducted a literature review and interviewed representatives of the forenamed groups and subject matter specialists from government, academia, research foundations, and think tanks. We selected the specialists based primarily on our literature review and spoke with individuals at the University of Massachusetts, Boston; Massachusetts Institute of Technology; the Commonwealth Fund; the Urban Institute; the Kaiser Family Foundation; the American Enterprise Institute; the Employee Benefit Research Institute; the RAND Corporation; the Small Business Administration Office of Advocacy; and the Office of Tax Policy at the Department of the Treasury. We reviewed available data in commonly cited surveys with questions on employer health insurance, and identified how the questions and variables match to the eligibility criteria for the credit. We conducted this performance audit from July 2011 through May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Small Employer Health Insurance Tax Credit is based on a percentage of the lesser of (1) the premiums paid by the eligible small employer for employees during the taxable year and (2) the amount of premiums the employer would have paid if each employee were enrolled in a plan with a premium equal to the average premium for the small group market in the state (or in an area in the state) in which the employer is offering health insurance. The Secretary of Health and Human Services determines whether separate average premiums will apply for areas within a state and also determines the average premium for a state or substate area. Table 2 shows the average premiums for the small group market in each state for tax years 2010 and 2011. Internal Revenue Service (IRS) data for tax year 2010 show 335,600 total claims filed. This total must be adjusted to avoid double counting, because 110,800 S corporation and partnership claims were passed through to 165,300 respective shareholders and partners who then filed their claims separately. Excluding the 165,300 shareholder and partner claims filed leaves 170,300 small employer claims filed.
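The adjustment just described amounts to a simple subtraction; the minimal sketch below restates it using the figures reported above.

```python
# Minimal sketch of the adjustment described above: shareholder and partner
# filings are excluded so that each small employer claim is counted once.
total_claims_filed = 335_600          # all claims filed for tax year 2010
shareholder_partner_claims = 165_300  # claims filed by shareholders and partners

small_employer_claims = total_claims_filed - shareholder_partner_claims
print(small_employer_claims)  # 170,300 small employer claims filed
```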
To capture the number of credit amounts claimed without double counting the amounts claimed by the S corporations and partnerships and by their respective shareholders and partners, we excluded the 110,800 S corporation and partnership claims to arrive at 224,800 credit amounts claimed. (See fig. 6.) This appendix contains the noninteractive Form 8941 and worksheets, shown in figure 4 in the letter. Through our literature review and interviews, we identified several commonly cited non-Internal Revenue Service data sources on employer health insurance. Each source has different variables related to the key eligibility requirements for the Small Employer Health Insurance Tax Credit. Table 3 summarizes each source, its basic methodology, and whether its data match these requirements for the credit. The table only considers data that are readily accessible in public-use data sets. In addition to the contact named above, Thomas Short, Assistant Director; Susan Baker; Amy Bowser; Ellen Grady; George Guttman; Donna Miller; Ruben Montes de Oca; Edward Nannenhorn; Robert Gebhart; Crystal Robinson; Cynthia Saunders; and Lindsay Swenson made key contributions to this report.

Many small employers do not offer health insurance. The Small Employer Health Insurance Tax Credit was established to help eligible small employers (businesses or tax-exempt entities) provide health insurance for employees. The base of the credit is premiums paid, or the average premium for an employer's state if premiums paid were higher. In 2010, for small businesses, the credit was 35 percent of the base unless the business had more than 10 FTE employees or paid average annual wages over $25,000. GAO was asked to examine (1) the extent to which the credit is claimed and any factors that limit claims, including how they can be addressed; (2) how fully IRS is ensuring that the credit is correctly claimed; and (3) what data are needed to evaluate the effects of the credit. GAO compared IRS data on credit claims with estimates of eligible employers, interviewed various credit stakeholders and IRS officials as well as academicians on evaluation, compared IRS credit compliance documents with the rules and practices used for prior tax provisions and IRS strategic objectives, and reviewed literature and data. Fewer small employers claimed the Small Employer Health Insurance Tax Credit in tax year 2010 than were estimated to be eligible. While 170,300 small employers claimed it, estimates of the eligible pool by government agencies and small business advocacy groups ranged from 1.4 million to 4 million. The cost of credits claimed was $468 million. Most claims were limited to partial rather than full percentage credits (35 percent for small businesses) because of the average wage or full-time equivalent (FTE) requirements. A total of 28,100 employers claimed the full credit percentage. In addition, 30 percent of claims had the base premium limited by the state premium average. One factor limiting the credit's use is that most very small employers, 83 percent by one estimate, do not offer health insurance. According to employer representatives, tax preparers, and insurance brokers that GAO met with, the credit was not large enough to incentivize employers to begin offering insurance. Complex rules on FTEs and average wages also limited use. In addition, tax preparer groups GAO met with generally said the time needed to calculate the credit deterred claims.
Options to address these factors, such as expanded eligibility requirements, have trade-offs, including less precise targeting of employers and higher costs to the federal government. The Internal Revenue Service (IRS) incorporated practices used successfully for prior tax provisions and from IRS strategic objectives into its compliance efforts for the credit. However, the instructions provided to its examiners (1) do not address the credit's eligibility requirements for employers with non-U.S. addresses and (2) have less detail for reviewing the eligibility of tax-exempt entities' health insurance plans compared to those for reviewing small business plans. These omissions may cause examiners to overlook or inconsistently treat possible noncompliance. Further, IRS does not systematically analyze examination results to understand the types of errors and whether examinations are the best way to correct each type. As a result, IRS is less able to ensure that resources target errors with the credit rather than compliant claimants. Currently available data on health insurance that could be used to evaluate the effects of the credit do not match the credit's eligibility requirements, such as information to convert data on number of employees to FTEs. Additional data that would need to be collected depend on the questions policymakers would want answered and the costs of collecting such data. GAO recommends that IRS (1) improve instructions to examiners working on cases on the credit and (2) analyze results from examinations of credit claimants and use those results to identify and address any errors through alternative approaches. IRS agreed with GAO's recommendations.
A great deal of budget reporting focuses on a single number—the unified budget deficit, which was $248 billion in fiscal year 2006. This largely cash-based number represents the difference between revenues and outlays for the government as a whole. It is an important measure since it is indicative of the government's draw on today's credit markets—and its claim on today's economy. But it also masks the difference between Social Security's cash flows and those for the rest of the budget. Therefore we also need to look beneath the unified deficit at the on-budget deficit—what I like to call the "operating deficit." And, finally, we should be looking at the financial statements' report of net operating cost—the accrual-based deficit. Social Security currently takes in more tax revenue than it needs to pay benefits. This cash surplus is invested in Treasury securities and earns interest in the form of additional securities. The difference between the on-budget deficit and the unified budget deficit is the total surplus in Social Security (cash and interest) and the U.S. Postal Service. Excluding the $185 billion surplus in Social Security and the $1 billion surplus in the Postal Service, the on-budget deficit was $434 billion in 2006. Figure 2 shows graphically how the on-budget deficit and the off-budget surplus combine to produce the unified deficit. Since the Social Security trust fund invests any receipts not needed to pay benefits in Treasury securities, its cash surplus reduces the amount the Treasury must borrow from the public. As I will note later, this pattern of cash flows is important—and it is projected to come to an end just 10 years from now. The third number, net operating cost, is the amount by which costs exceed revenue, and it is reported in the federal government's financial statements, which are prepared using generally accepted accounting principles. Costs are recorded on an accrual basis—namely, in the period when goods are used or services are performed as opposed to when the resulting cash payments are made. Most revenues, on the other hand, are recorded on the modified cash basis—that is, they are recorded when collected. The net operating cost can be thought of as the accrual deficit. The accrual measure primarily provides more information on the longer-term implications of today's policy decisions and operations by showing certain costs incurred today but not payable for years to come, such as civilian and military pensions and retiree health care. In fiscal year 2006 net operating cost was $450 billion. All three of these numbers are informative. However, neither accrual nor cash measures alone provide a full picture of the government's fiscal condition or the cost of government. Used together, they present complementary information and provide a more comprehensive picture of the government's financial condition today and fiscal position over time. For example, the unified budget deficit provides information on borrowing needs and current cash flow. The accrual deficit provides information on the current cost of government, but it does not provide information on how much the government has to borrow in the current year to finance government activities. Also, while accrual deficits provide more information on the longer-term consequences of current government activities, they do not include the longer-term cost associated with social insurance programs like Social Security and Medicare.
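As a simple check on how the cash-based measures relate, the fiscal year 2006 figures cited above reconcile as follows (a minimal sketch; amounts are rounded to the nearest billion).

```python
# Reconciling the fiscal year 2006 deficit measures cited above (in billions of dollars).
on_budget_deficit = 434          # the "operating" deficit, excluding off-budget surpluses
social_security_surplus = 185    # off-budget Social Security surplus (cash and interest)
postal_service_surplus = 1       # off-budget Postal Service surplus

unified_deficit = on_budget_deficit - (social_security_surplus + postal_service_surplus)
print(unified_deficit)           # 248, matching the reported unified budget deficit
```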
In addition, they are not designed to provide information about the timing of payments and receipts, which can be very important. Therefore, just as investors need income statements, statements of cash flow, and balance sheets to understand a business’s financial condition, both cash and accrual measures are important for understanding the government’s financial condition. Although looking at both the cash and accrual measures provides a more complete picture of the government’s fiscal stance today and over time than looking at either alone, even these together do not tell us the full story. For example, as shown in table 1, all three of these deficits improved between fiscal year 2005 and fiscal year 2006. This improvement, however, did not result from a change in the fundamental drivers of our long-term challenge and did not signal an improvement in that outlook. To understand the long-term implications of our current path requires more than a single year’s snapshot. In this regard, the long- term outlook has worsened significantly in the last several years. That is why for more than a decade GAO has been running simulations to tell this longer-term story. As I mentioned, it is not the recent past shown in figure 1—nor the outlook for this year—that should concern us. Rather it is the picture in figure 3 that should worry us. Long-term fiscal simulations by GAO, CBO, and others all show that we face large and growing structural deficits driven primarily by rising health care costs and known demographic trends. GAO runs simulations under two sets of assumptions. One takes the legislatively-mandated baseline from CBO for the first 10 years and then keeps discretionary spending and revenues constant as a share of GDP while letting Social Security, Medicare, and Medicaid grow as projected by the Trustees and CBO under midrange assumptions. The other, perhaps more realistic, scenario based on the Administration’s announced policy preferences changes only two things in the first 10 years: discretionary spending grows with the economy and all expiring tax provisions are extended. Like the “Baseline Extended” scenario, after 10 years both revenues and discretionary spending remain constant as a share of the economy. As figure 3 shows, deficits spiral out of control under either scenario. We will be updating these figures with the release of the new CBO baseline later this month, but even with the lower deficit in 2006, the long-term picture will remain daunting. Looking more closely at each scenario gives a fuller understanding of what the impact of continuing these trends would have on what government does. And it shows us “Why Deficits Matter.” First, it makes sense to look back to 2001—it is worth understanding how much worse the situation has become. As I noted, despite some recent improvements in short-term deficits, the long-term outlook is moving in the wrong direction. Figures 4 and 5 show the composition of spending under our “Baseline Extended” scenario in 2001 and 2006. Even with short-term surpluses, we had a long-term problem in 2001, but it was more than 40 years out. Certainly an economic slowdown and various decisions driven by the attacks of 9/11 and the need to respond to natural disasters have contributed to the change in outlook. However, these items alone do not account for the dramatic worsening. Tax cuts played a major role, but the single largest contributor to the deterioration of our long-term outlook was the passage of the Medicare prescription drug benefit in 2003. 
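The mechanics behind simulations like these can be illustrated with a deliberately simplified sketch. The growth rates, interest rate, and starting shares of GDP below are round, purely illustrative numbers chosen for this example; they are not GAO's, CBO's, or the Trustees' assumptions. The point is only to show how spending that grows faster than the economy, combined with compounding interest on accumulated deficits, widens the fiscal gap over time.

```python
# Toy long-term fiscal path: revenue and discretionary spending held constant as
# shares of GDP, entitlement spending growing faster than GDP, and deficits
# compounding into debt. All parameters are illustrative assumptions only.

def simulate(years=35, gdp_growth=0.045, interest_rate=0.055,
             revenue_share=0.18, discretionary_share=0.08,
             entitlement_share=0.09, entitlement_growth=0.065,
             debt_share=0.37):
    gdp, entitlements, debt = 1.0, entitlement_share, debt_share
    for year in range(1, years + 1):
        gdp *= 1 + gdp_growth
        entitlements *= 1 + entitlement_growth        # grows faster than the economy
        revenue = revenue_share * gdp                 # held flat as a share of GDP
        spending = discretionary_share * gdp + entitlements
        deficit = spending + interest_rate * debt - revenue
        debt += deficit                               # deficits compound into debt
        if year % 5 == 0:
            print(f"year {year:2d}: deficit {deficit / gdp:5.1%} of GDP, "
                  f"debt {debt / gdp:6.1%} of GDP")

simulate()
```

Even in this stripped-down form, the deficit and debt shares of GDP rise steadily once program growth outpaces the economy, which is the basic dynamic the GAO simulations capture in far more detail.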
Figure 5 illustrates today’s cold hard truth, that neither slowing the growth in discretionary spending nor allowing the tax provisions to expire—nor both together—would eliminate the imbalance. This is even clearer under the more realistic scenario as shown in figure 6. Estimated growth in the major entitlement programs results in an unsustainable fiscal future regardless of whether one assumes future revenue will be somewhat above historical levels as a share of the economy as in the first simulation (fig. 5) or lower as shown in figure 6. Both these simulations remind us “Why Deficits Matter.” They illustrate that without policy changes on the spending and revenue side of the budget, the growth in spending on federal retirement and health entitlements will encumber an escalating share of the government’s resources. A government that in our children’s lifetimes does nothing more than pay interest on its debt and mail checks to retirees and some of their health providers is unacceptable. Although Social Security is a major part of the fiscal challenge, contrary to popular perception, it is far from our biggest challenge. While today Social Security spending exceeds federal spending for Medicare and Medicaid, that will change. Over the past several decades, health care spending on average has grown much faster than the economy, absorbing increasing shares of the nation’s resources, and this rapid growth is projected to continue. CBO estimates that Medicare and Medicaid spending will reach 6.3 percent of GDP in 2016, up from 4.6 percent this year (2007), while spending for Social Security will only reach 4.7 percent of GDP in 2016 up from 4.2 percent this year. For this reason and others, rising health care costs pose a fiscal challenge not just to the federal budget but also to states, American business, and our society as a whole. While there is always some uncertainty in long-term projections, two things are certain: the population is aging and the baby boom generation is nearing retirement age. The aging population and rising health care spending will have significant implications not only for the budget but also for the economy as a whole. Figure 7 shows the total future draw on the economy represented by Social Security, Medicare, and Medicaid. Under the 2006 Trustees’ intermediate estimates and CBO’s long-term Medicaid estimates, federal spending for these entitlement programs combined will grow to 15.5 percent of GDP in 2030 from today’s 9 percent. This graphic is another illustration of why we have to act. I do not believe we are prepared to have programs that provide income for us in retirement and pay our doctors absorb this much of our children’s and grandchildren’s economy. It is clear that taken together, Social Security, Medicare, and Medicaid under current law represent an unsustainable burden on future generations. While Social Security, Medicare, and Medicaid dominate the long-term outlook, they are not the only federal programs or activities that bind the future. Part of what we owe the future is leaving enough flexibility to meet whatever challenges arise. So beyond dealing with the “big 3,” we need to look at other policies that limit that flexibility—not to eliminate all of them but to at least be aware of them and make a conscious decision about them. The federal government undertakes a wide range of programs, responsibilities, and activities that obligate it to future spending or create an expectation for spending and potentially limit long-term budget flexibility. 
GAO has described the range and measurement of such fiscal exposures—from explicit liabilities such as environmental cleanup requirements to the more implicit obligations presented by life-cycle costs of capital acquisition or disaster assistance. Figure 8 shows that despite improvement in both the fiscal year 2006 reported net operating cost and the cash-based budget deficit, the U.S. government’s major reported liabilities, social insurance commitments, and other fiscal exposures continue to grow. They now total approximately $50 trillion—about four times the nation’s total output (GDP) in fiscal year 2006—up from about $20 trillion, or two times GDP in fiscal year 2000. Clearly, despite recent progress on our short-term deficits, we have been moving in the wrong direction in connection with our long-range imbalance in recent years. Our long-range imbalance is growing daily due to continuing deficits, known demographic trends, rising health care costs, and compounding interest expense. We all know that it is hard to make sense of what “trillions” means. Figure 9 provides some ways to think about these numbers: if we wanted to put aside today enough to cover these promises, it would take $170,000 for each and every American or approximately $440,000 per American household. Considering that median household income is about $46,000, the household burden is about 9.5 times median income. Since at its heart the budget challenge is a debate about the allocation of limited resources, the budget process can and should play a key role in helping to address our long-term fiscal challenge and the broader challenge of modernizing government for the 21st century. I have said that Washington suffers from myopia and tunnel vision. This can be especially true in the budget debate in which we focus on one program at a time and the deficit for a single year or possibly the costs over 5 years without asking about the bigger picture and whether the long term is getting better or worse. We at GAO are in the transparency and accountability business. Therefore it should come as no surprise that I believe we need to increase the understanding of and focus on the long term in our policy and budget debates. To that end—as I noted earlier—I have been talking with a number of Members of the Senate and the House as well as various groups concerned about this issue concerning a number of steps that might help. I’ve attached a summary of some of these ideas to this statement. Let me highlight several critical elements here. The President’s budget proposal should again cover 10 years. This is especially important given that some policies—both spending and tax— cost significantly more (or lose significantly more revenue) in the second 5 years than in the first. In addition, the budget should disclose the impact of major tax or spending proposals on the short, medium, and long term. The executive branch should also provide information on fiscal exposures—both spending programs and tax expenditures—that is, the long-term budget costs represented by current individual programs, policies, or activities as well as the total. The budget process needs to pay more attention to the long-term implication of the choices being debated. For example, elected representatives should be provided with more explicit information on the long-term costs of any major tax or spending proposal before it is voted upon. 
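The per-person and per-household figures cited above follow from straightforward division. The sketch below uses rough mid-2000s population and household counts, which are approximations added here for illustration rather than figures from the statement.

```python
# Back-of-the-envelope version of the burden figures cited above.
# Population and household counts are rough approximations for around 2006.
total_exposures = 50e12          # roughly $50 trillion in liabilities and commitments
population = 300e6               # approximate U.S. population
households = 114e6               # approximate number of U.S. households
median_household_income = 46_000

print(total_exposures / population)                             # ~167,000 per person (~$170,000)
print(total_exposures / households)                             # ~439,000 per household (~$440,000)
print(total_exposures / households / median_household_income)   # ~9.5 times median income
```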
It is sobering to recall that during the debate over adding prescription drug coverage to Medicare, a great deal of attention was paid to whether the 10-year cost was over or under $400 billion. Not widely publicized—and certainly not surfaced in the debate—was that the present value of the long-term cost of this legislation was about $8 trillion! Of course, when you are in a hole, the first thing to do is stop digging. I have urged reinstitution of the statutory controls—both meaningful caps on discretionary spending and pay-as-you-go (PAYGO) on both the tax and spending sides of the ledger—that expired in 2002. However given the severity of our current challenge, Congress should look beyond the return to PAYGO and discretionary caps. Mandatory spending cannot remain on autopilot—it will not be enough simply to prevent actions to worsen the outlook. We have suggested that Congress might wish to design “triggers” for mandatory programs—some measure that would prompt action when the spending path increased significantly. In addition, Congress may wish to look at rules to govern the use of “emergency supplementals.” However, as everyone in this committee knows, these steps alone will not solve the problem. That is why building in more consideration of the long-term impact of decisions is necessary. There is no easy way out of the challenge we face. Economic growth is essential, but we will not be able to simply grow our way out of the problem. The numbers speak loudly: our projected fiscal gap is simply too great. To “grow our way out” of the current long-term fiscal gap would require sustained economic growth far beyond that experienced in U.S. economic history since World War II. Similarly, those who believe we can solve this problem solely by cutting spending or solely raising taxes are not being realistic. While the appropriate level of revenues will be part of the debate about our fiscal future, making no changes to Social Security, Medicare, Medicaid, and other drivers of the long-term fiscal gap would require ever-increasing tax levels—something that seems both inappropriate and implausible. That is why I have said that substantive reform of Social Security and our major health programs remains critical to recapturing our future fiscal flexibility. I believe we must start now to reform these programs. Although the long-term outlook is driven by Social Security and health care costs, this does not mean the rest of the budget can be exempt from scrutiny. Restructuring and constraint will be necessary beyond the major entitlement programs. This effort offers us the chance to bring our government and its programs in line with 21st century realities. Many tax expenditures act like entitlement programs, but with even less scrutiny. Other programs and activities were designed for a very different time. Taken together, entitlement reform and reexamination of other programs and activities could engender a national discussion about what Americans want from their government and how much they are willing to pay for those things. Finally, given demographic and health care cost trends, the size of the spending cuts necessary to hold revenues at today’s share of GDP seems implausible. It is not realistic to assume we can remain at 18.2 percent of GDP—we will need more revenues. Obviously we want to minimize the tax burden on the American people and we want to remain competitive with other industrial nations—but in the end the numbers have to add up. 
As I noted, we need to start with real changes in existing entitlement programs to change the path of those programs. However, reform of the major entitlement programs alone will not be sufficient. Reprioritization and constraint will be necessary in other spending programs. Finally, we will need more revenues—hopefully through a reformed tax system. The only way to get this done is through bipartisan cooperation and compromise—involving both the Congress and the White House. Delay only makes matters worse. GAO’s simulations show that if no action is taken, balancing the budget in 2040 could require actions as large as cutting total federal spending by 60 percent or raising federal taxes to two times today’s level. For many years those of us who talk about the need to put Social Security on a sustainable course and to reform Medicare have talked about the benefits of early action. Acting sooner rather than later can turn compound interest from an enemy to an ally. Acting sooner rather than later permits changes to be phased in more gradually and gives those affected time to adjust to the changes. Delay does not avoid action—it just makes the steps that have to be taken more dramatic and potentially harder. Unfortunately, it is getting harder to talk about early action—the future is upon us. Next year members of the baby boom generation start to leave the labor force. Figure 10 shows the impact of demographics on labor force growth. Reflecting this demographic shift, CBO projects the average annual growth rate of real GDP will decline from 3.1 percent in 2008 to 2.6 percent in the period 2012–2016. This slowing of economic growth will come just as spending on Social Security, Medicare and Medicaid will begin to accelerate—accounting for 56 percent of all federal spending by 2016 compared to 43 percent in 2006. As I noted earlier, today Social Security’s cash surplus helps offset the deficit in the rest of the budget, thus reducing the amount Treasury must borrow from the public and increasing budget flexibility—but this is about to change. Growth in Social Security spending is expected to increase from an estimated 4.8 percent in 2008 to 6.5 percent in 2016. The result, as shown in figure 11, is that the Social Security surpluses begin a permanent decline in 2009. At that time the rest of the budget will begin to feel the squeeze since the ability of Social Security surpluses to offset deficits in the rest of the budget will begin to shrink. In 2017 Social Security will no longer run a cash surplus and will begin adding to the deficit. That year Social Security will need to redeem the special securities it holds in order to pay benefits. Treasury will honor those claims—the United States has never defaulted. But there is no free money. The funds to redeem those securities will have to come from higher taxes, lower spending on other programs, higher borrowing from the public, or a combination of all three. I spoke before of how big the changes would have to be if we were to do nothing until 2040. Of course, we won’t get to that point—something will force action before then. If we act now, we have more choices and will have more time to phase-in related changes. Chairman Spratt, Mr. Ryan, Members of the Committee—in holding this hearing even before the President’s Budget is submitted you are signaling the importance of considering any proposal within the context of the long- term fiscal challenge. This kind of leadership will be necessary if progress is to be made. 
I have long believed that the American people can accept difficult decisions as long as they understand why such steps are necessary. They need to be given the facts about the fiscal outlook: what it is, what drives it, and what it will take to address it. As most of you know, I have been investing a good deal of time in the Fiscal Wake-Up Tour (FWUT) led by the Concord Coalition. Scholars from both the Brookings Institution and the Heritage Foundation join with me and Concord in laying out the facts and discussing the possible ways forward. In our experience, having these people, with quite different policy views on how to address our long-range imbalance, agree on the nature, scale, and importance of the issue—and on the need to sit down and work together—resonates with the audiences. Although the major participants have been Concord, GAO, Brookings, and Heritage, others include such organizations as the Committee for Economic Development (CED); the American Institute of Certified Public Accountants (AICPA); the Association of Government Accountants (AGA); the National Association of State Auditors, Comptrollers and Treasurers (NASACT); and AARP. The FWUT also has received the active support and involvement of community leaders, local colleges and universities, the media, the business community, and both former and current elected officials. We have been to 17 cities to-date. The discussion has been broadcast on public television stations in Atlanta and Philadelphia. Earlier this month OMB Director Portman and former Senator Glenn joined us at an event at the John Glenn School of Public Affairs at Ohio State University in Columbus, Ohio. The specific policy choices made to address this fiscal challenge are the purview of elected officials. The policy debate will reflect differing views of the role of government and differing priorities for our country. What the FWUT can do—and what I will continue to do—is lay out the facts, debunk various myths, and prepare the way for tough choices by elected officials. The American people know—or sense—that there is something wrong; that these deficits are a problem. If they understand that there truly is no magic bullet—if they understand that we cannot grow our way out of this problem; eliminating earmarks will not solve the problem; wiping out fraud, waste, and abuse will not solve the problem; ending the war or cutting way back on defense will not solve the problem; restraining discretionary spending will not solve the problem; and letting the recent tax cuts expire will not solve this problem; then the American people can engage with you in a discussion about what government should do and how. People ask me how I think this can happen. I know that some Members believe a carefully structured commission will be necessary to prepare a package while others feel strongly that elected officials should take up the task of developing that package. Whatever the vehicle, success will require the active and open-minded involvement of both parties in and both houses of the Congress and of the President. With that it should be possible to develop a package which accomplishes at least three things: (1) a comprehensive solution to the Social Security imbalance—one that is not preprogrammed to require us to have to come back again, (2) Round I of comprehensive tax reform, and (3) Round I of Health Care Reform. This is a great nation. We have faced many challenges in the past and we have met them. 
It is a mistake to underestimate the commitment of the American people to their children and grandchildren; to underestimate their willingness and ability to hear the truth and support the decisions necessary to deal with this challenge. We owe it to our country, to our children and to our grandchildren to address this fiscal imbalance. The world will present them with new challenges—we need not bequeath them this burden too. The time for action is now. Mr. Chairman, Mr. Ryan, Members of the Committee, let me repeat my appreciation for your commitment and concern in this matter. We at GAO stand ready to assist you in this important endeavor. For further information on this testimony, please contact Susan J. Irving at (202) 512-9142 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include Jay McTigue, Assistant Director; Linda Baker and Melissa Wolf. Produce an annual Statement of Fiscal Exposures, including a concise list and description of exposures, cost estimates where possible, and an assessment of methodologies and data used to produce such cost estimates. Increase the transparency of tax expenditures by including them in the annual Fiscal Exposures Statement and, where possible, also showing them along with spending and credit programs in the same policy area. Provide information on the impact of major tax or spending proposals on short-term, mid-term, and long-term fiscal exposures and on the path of surplus/deficit and debt as percent of gross domestic product (GDP) over 10-year and longer-term horizons (and assuming no sunset if sunset is part of the proposal). Cover 10 years in the budget. Consider requiring the President to include in his annual budget submission a long-term fiscal goal (e.g., balance, surplus, or deficit as percent of GDP). Prepare and publish a Summary Annual Report or Citizen’s Summary that summarizes, in a clear, concise, plain English, and transparent manner, key financial and performance information included in the Consolidated Financial Report. Prepare and publish a report on long-range fiscal sustainability every 2 to 4 years. Require improved disclosure—at the time proposals are debated but before they are adopted—of the long-term costs of individual mandatory spending and tax proposals over a certain size and for which costs will ramp up over time. An annual report or reports by GAO including comments on the Consolidated Financial Statement (CFS), results of the latest long-term fiscal simulations, comments on the adequacy of information regarding long-term cost implications of existing and proposed policies in the previous year as well as any other significant financial and fiscal issues. Use accrual budgeting for the following areas where cash basis obligations do not adequately represent the government’s commitment: employee pension programs (pre-Federal Employee Retirement System employees); retiree health programs; and federal insurance programs, such as the Pension Benefit Guaranty Corporation and crop insurance. Explore techniques for expanding accrual budgeting to environmental cleanup and social insurance—could consider deferring recognition of social insurance receipts until they are used to make payments in the future (this was suggested in GAO’s accrual budgeting report as an idea to explore, possibly with a commission designed to explore budget concepts). This is a work of the U.S. 
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Comptroller General testified before Congress for a hearing entitled "Why Deficits Matter." The presentation touched on several points. First, the current financial condition in the United States is worse than is widely understood. Second, the current fiscal path is both imprudent and unsustainable. Third, improvements in information and processes are needed and can help. And finally, meeting the long-term fiscal challenge will require (1) significant entitlement reform to change the path of those programs; (2) reprioritizing, restructuring and constraining other spending programs; and (3) more revenues--hopefully through a reformed tax system. This will take bipartisan cooperation and compromise. |
Two types of residential mortgage loans are common: fixed-rate mortgages, which have interest rates that do not change over the entire term of the loans, and adjustable-rate mortgages (ARM), which have interest rates that change periodically based on changes in a specified index. Residential mortgages also fall into several loosely defined categories: Prime mortgages are made to borrowers with strong credit histories and provide the most attractive interest rates and loan terms; Near-prime mortgages (also called Alt-A mortgages) generally serve borrowers whose credit histories are close to prime, but the loans have one or more higher-risk characteristics such as limited documentation of income or assets or higher loan-to-value ratios; Subprime mortgages are generally made to borrowers with blemished credit and feature higher interest rates and fees than prime loans; and Government-insured or -guaranteed mortgages primarily serve borrowers who may have difficulty qualifying for prime loans and feature interest rates similar to those for prime loans. HUD’s Federal Housing Administration (FHA), VA, and the Rural Housing Service operate major federal programs that insure or guarantee mortgages. The nonprime market segment (Alt-A and subprime) features a number of nontraditional products and characteristics: Hybrid ARM—interest rate is fixed during an initial period, then “resets” to an adjustable rate for the remaining term of the loan. Payment-option ARM—borrower has multiple payment options each month, including negative amortization (minimum payments lower than needed to cover any of the principal or all the accrued interest, which may increase the outstanding loan balance over time). Interest-only—borrower can pay just the interest on the loan for a specified period, usually the first 3-10 years, thereby deferring principal payments. Low and no documentation loans—require little or no verification of a borrower’s income or assets. High loan-to-value ratios—borrower would make a small down payment, causing the ratio of the loan amount to the home value to be relatively high. The higher the ratio when a loan is originated, the less equity borrowers will have in their homes. Prepayment penalties—borrower incurs a fee if he or she pays off the loan balance before it is due. Balloon payment loans—mortgages that do not fully amortize over the term of the loan, leaving a balance due at the end of the balloon period. Mortgages can fall into any one of several payment status categories: Current—borrower has met scheduled payments. Delinquent—borrower has missed one or more scheduled monthly payments. Default—borrower is 90 or more days delinquent. Foreclosure—borrower has been delinquent for more than 90 days and the lender has elected to initiate a legal process against the borrower that has several possible outcomes. Generally, the borrower loses the property because it is sold to repay the outstanding debt or is repossessed by the lender. Prepaid—borrower has paid the entire loan balance before it is due. Prepayment often occurs as a result of the borrower selling the home or refinancing. After the loan has been made, originating lenders can retain their loans in portfolio or sell them to investors on the secondary market, either as whole loans to other financial institutions or (directly or indirectly through other financial institutions) as loan pools that are held in trusts and administered by a trustee. 
The loan pools become asset-backed securities that are issued and sold to investors and are referred to as mortgage-backed securities. This process, often referred to as securitization (see fig. 1), plays an important role in providing capital for mortgage lending by generating funds that can be used to originate more loans. Investors assume the interest rate, prepayment, and credit risk associated with the loans backing these securities, unless they are covered by mortgage insurance or guarantees on the securities. The secondary market for residential mortgages consists of three major categories of securitizations—enterprise (Fannie Mae and Freddie Mac), Ginnie Mae, and private label. Fannie Mae and Freddie Mac are congressionally chartered, for-profit, shareholder-owned companies known as government-sponsored enterprises and have been under federal conservatorship since 2008. They generally purchase conforming loans, which are mortgage loans that meet certain criteria for size, features, and underwriting standards. In addition, the enterprises require that loans they purchase with loan-to-value ratios in excess of 80 percent have a credit enhancement mechanism, such as private mortgage insurance. Loans above the conforming loan size limit are known as jumbo loans. After purchasing mortgages, the enterprises create mortgage-backed securities and guarantee investors in these securities that they will receive timely payments of principal and interest. Ginnie Mae (a government corporation) guarantees securities that are issued by approved private institutions and backed by federally insured mortgages (FHA, VA, and USDA). Private institutions are also involved in the creation of private-label securities backed by mortgages that do not conform to the enterprises' purchase requirements (because the mortgages are too large or do not meet specified underwriting criteria). Private securitizing institutions include investment banks, retail banks, mortgage companies, and real estate investment trusts. Other participants in a private securitization transaction include, but are not limited to, credit rating agencies that assess the creditworthiness of the securities and deal underwriters hired by securitizers to market and sell the securities to investors. Each type of securitization retains a mortgage servicer to collect mortgage payments from borrowers and disburse interest and principal payments to mortgage trustees, who pass them to investors. Servicers also manage delinquent loans, which may lead to loss mitigation (such as a loan modification or a repayment plan) with the borrower or foreclosure. The ATR/QM regulations set forth minimum requirements lenders must satisfy in making the required good faith determination of a consumer's reasonable ability to repay. To satisfy the ability-to-repay requirements, lenders generally must consider eight underwriting factors: (1) current or reasonably expected income or assets; (2) current employment status; (3) the monthly payment on the covered transaction (the monthly payment must be calculated based on any introductory rate or the fully indexed rate for the loan, whichever is higher, and substantially equal, fully amortizing monthly payments); (4) the monthly payment on any simultaneous loan; (5) the monthly payment for mortgage-related obligations; (6) current debt obligations, alimony, and child support; (7) the monthly debt-to-income ratio or residual income; and (8) credit history.
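The payment calculation in factor (3) can be sketched with the standard fully amortizing payment formula. The loan amount and rates below are hypothetical; the fully indexed rate is simply assumed to be the higher of the two rates, since the rule requires the higher rate to be used.

```python
# Sketch of the monthly payment test in factor (3) above: a substantially equal,
# fully amortizing payment computed at the higher of the introductory rate and
# the fully indexed rate. Loan terms are hypothetical.

def monthly_payment(principal, annual_rate, term_years=30):
    """Standard fully amortizing payment: P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12
    n = term_years * 12
    return principal * r / (1 - (1 + r) ** -n)

introductory_rate = 0.04     # hypothetical teaser rate on a hybrid ARM
fully_indexed_rate = 0.07    # hypothetical index plus margin
rate_for_atr_test = max(introductory_rate, fully_indexed_rate)

print(round(monthly_payment(200_000, introductory_rate), 2))   # ~954.83 at the teaser rate
print(round(monthly_payment(200_000, rate_for_atr_test), 2))   # ~1330.60 used for the ATR test
```

The gap between the two payments is essentially the "payment shock" risk discussed later for hybrid ARMs, which is why the rule bases the ability-to-repay determination on the higher rate.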
To satisfy the QM requirements, the loan must meet certain restrictions on product features and points and fees as well as meet certain underwriting requirements. The loan must not have risky features such as negative amortization, interest-only payments, or balloon payments (except in certain circumstances). The term of the loan should not exceed 30 years. Points and fees should be less than or equal to 3 percent of the loan amount (higher percentages are allowed for loans of less than $100,000). Finally, the loan also must meet certain underwriting requirements. The creditor must take into account the monthly mortgage payment utilizing a fully amortizing schedule using the maximum rate that may apply during the first 5 years after the first payment. The creditor must consider and verify income or assets and current debt obligations, alimony, and child support. The rule also sets out three main categories of QMs that are presumed to comply with the ability-to-repay requirements: general, temporary, and small creditor. Under the general category, all loans to borrowers with a monthly debt-to-income ratio of 43 percent or less that meet the restrictions on product features, points and fees, and underwriting requirements described above are QMs. Under the temporary category, loans that meet the restrictions on product features and points and fees described above, and are eligible for purchase, insurance, or guarantee by Fannie Mae, Freddie Mac, FHA, USDA and its Rural Housing Service, or VA are QMs, but are not subject to a specific debt-to-income ratio. Under the small-creditor category, loans must meet certain QM restrictions, such as those on product features and points and fees. Creditors must evaluate consumers' debt-to-income ratio or residual income, but the loans are not subject to a specific debt-to-income ratio. Generally, these loans must be held in portfolio by a small creditor for at least 3 years. However, there is another category for small creditors in rural and underserved areas in which mortgages with balloon payments originated by such creditors can be QM loans. If a lender originates a mortgage that meets the QM requirements and has an annual percentage rate (APR) within certain limits, the lender is presumed to have satisfied the ability-to-repay requirements and receives certain protections from liability. That is, these QMs have a safe harbor (a conclusive presumption that the lender has satisfied the ability-to-repay requirements) that immunizes the lender from claims related to the borrower's ability to repay. Lenders still can receive some protection from liability if they originate higher-priced QMs (those with APRs above certain limits). That is, lenders are still presumed to have satisfied the ability-to-repay requirements, but borrowers can rebut the presumption. Borrowers can try to prove that based on information available to the lender at loan origination, the borrower would not have enough income left for living expenses after paying the mortgage and other debts. Lenders also may make non-QM loans if they choose. However, these lenders will not benefit from the safe-harbor or rebuttable presumption liability protections afforded QM loans. The Dodd-Frank Act generally requires securitizers of asset-backed securities to retain not less than 5 percent of the credit risk of the assets collateralizing the security.
The act includes exemptions, including one for securities collateralized exclusively by residential mortgages that are "qualified residential mortgages." The Dodd-Frank Act specifies that the QRM definition cannot be broader than the QM definition (that is, the QRM criteria can be more but not less restrictive than the QM criteria). The act also requires agencies to specify criteria for QRMs, taking into consideration underwriting and product features that historical loan performance data indicate result in a lower risk of default; permissible forms of risk retention, the minimum duration of risk retention, and ways of allocating risk between securitizers and originators; and the possibility of permitting a lower risk-retention requirement (less than 5 percent) for any securitization collateralized by non-QRMs that meet underwriting standards the agencies develop in regulations. In the final risk-retention rule, issued in December 2014, the QRM definition was aligned with the QM definition. More specifically, loans that meet the QM requirements outlined previously are considered to be QRM loans. Thus, securities collateralized solely by QM loans (and therefore QRM loans) are not subject to risk-retention requirements. Congress intended the risk-retention regulations to help address problems in the securitization markets by requiring securitizers to retain an economic interest in the credit risk of certain assets they securitized. As a result, securitizers would have an incentive to monitor and ensure the quality of the assets underlying a securitization transaction, which also would help align their interests with the interests of investors. In relation to risk retention, sponsors of securitizations will be required to retain at least 5 percent of the credit risk associated with a securitization that contains any non-QRM loans, unless an exemption applies. Under certain circumstances, sponsors may allocate the retention obligation to an originator, which agrees to retain that risk, if the originator has contributed at least 20 percent of the balance of a loan pool collateralizing mortgage-backed securities. The final rule requires this risk to be held by originators in the same way the risk was held by the securitizer. The Dodd-Frank Act transferred consumer protection oversight and other authorities over certain consumer financial protection laws from multiple federal regulators to CFPB. CFPB's responsibilities include ensuring that consumers are provided with timely and understandable information to make responsible decisions about financial transactions; ensuring that consumers are protected from unfair, deceptive, or abusive acts and practices, and from discrimination; and ensuring that markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation. The Dodd-Frank Act also gave CFPB supervisory authority over certain nondepository institutions, including certain kinds of mortgage market participants. Such institutions generally lacked federal oversight before the financial crisis of 2007–2009. Finally, the Dodd-Frank Act requires CFPB to conduct an assessment of each significant rule it adopts, such as the ATR/QM rule, and publish a report of the assessment no later than 5 years after the effective date of the rule (for the ATR/QM rule, by 2019). The factors the assessments are to address include the rule's effectiveness in meeting the purposes and objectives of Title X of the Dodd-Frank Act.
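The core QM tests described above lend themselves to a simple screening sketch. This is a simplified reading for illustration only: it covers just the general-category tests (prohibited product features, the 30-year term cap, the 3 percent points-and-fees cap, and the 43 percent debt-to-income ceiling) and ignores the temporary and small-creditor categories, APR-based safe-harbor thresholds, and the tiered caps for smaller loans. Because the final risk-retention rule aligned QRM with QM, the same screen stands in for QRM here.

```python
# Simplified screen for the general-category QM tests described above.
# Real determinations involve additional rules (APR thresholds, small-loan
# points-and-fees tiers, temporary and small-creditor categories) not modeled here.

def is_general_qm(term_years, debt_to_income, points_and_fees, loan_amount,
                  has_negative_amortization, has_interest_only, has_balloon):
    if has_negative_amortization or has_interest_only or has_balloon:
        return False                               # prohibited risky product features
    if term_years > 30:
        return False                               # term capped at 30 years
    if points_and_fees > 0.03 * loan_amount:
        return False                               # 3 percent points-and-fees cap
    return debt_to_income <= 0.43                  # 43 percent debt-to-income ceiling

# Hypothetical loan: 30-year term, 38 percent DTI, $5,000 in points and fees on $250,000.
print(is_general_qm(30, 0.38, 5_000, 250_000, False, False, False))   # True
```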
Generally, Executive Orders and related implementation guidance from OMB require executive agencies and encourage independent regulatory agencies to develop and implement retrospective review plans. In addition, OMB encourages agencies to preplan efforts to retrospectively review their regulations and give careful consideration about how best to promote empirical testing of the effects of rules both in advance and retrospectively. OMB states that agencies may find it useful to engage in retrospective analyses of the costs and benefits (quantitative and qualitative) of regulations and suggests that independent regulatory agencies identify metrics to evaluate regulations and ensure they have high-quality data and robust models to conduct effective outcome-based reviews. These directives and guidance also encourage agencies to solicit public comments and make the results of these reviews available to the public. Finally, agencies are encouraged to coordinate when conducting their retrospective reviews and consider the combined effects of their regulations. During 2000-2014, originations for residential mortgage loans rose dramatically, then plummeted, and showed some signs of recovery in recent years. Available data indicate that low levels of riskier loan types have been originated since 2007. Additionally, measures of credit risk associated with mortgages, such as borrower credit scores and debt-to- income ratios, were consistent with an overall tightening of loan underwriting standards since 2008. The composition of the securities market for residential mortgages also changed during this period; in particular, the market share for private-label securities significantly diminished after 2007. As shown in figure 2, mortgage origination volume peaked in 2003, sharply declined in 2008, and then remained above 2008 levels (with mixed increases and declines) through 2013 but declined in 2014—due to declines in refinancing. In dollar terms, origination volume declined from $3.7 trillion in 2003 to $1.2 trillion in 2014. The lower volume potentially indicates lower credit availability, decreased demand, or both. A range of factors contributed to mortgage market activity from 2000 through 2014. Refinances. During the years of rapidly increasing mortgage origination (2000–2003), decreasing interest rates and increasing home prices provided opportunities for borrowers to refinance to lower monthly payments or take equity out of their homes for consumption and investment. As shown in figure 2, the volume of refinances increased to $2.8 trillion in 2003, and then decreased and remained at roughly $1.5 trillion from 2004 through 2007. Refinances as a percentage of mortgage originations peaked at 76 percent in 2003 and remained at roughly 70 percent from 2009 to 2013. Refinances declined to 44 percent in 2014. Similarly, the number of subprime cash-out refinances increased significantly from 2000 through 2005 (from about 246,000 in 2000 to about 1.2 million in 2005) and then declined to about 195,000 in 2007. As part of monetary policy, the Federal Reserve, through the Federal Open Market Committee, sets the federal funds rate at a level it believes will foster financial and monetary conditions consistent with achieving its monetary policy objectives, and it adjusts that target in line with evolving economic developments. 
In addition, the Federal Reserve's large-scale asset purchase program was intended in part to increase the availability of credit for the purchase of houses. The large-scale purchase program was an integral component of the Federal Reserve's efforts to ease financial conditions and provide policy accommodation in the crisis. Starting in September 2007 and on several occasions afterward, the Federal Reserve reduced the federal funds rate, with the last reduction occurring in December 2008. Decreasing or increasing the federal funds rate—the rate at which depository institutions lend to other depository institutions overnight—can influence the cost and supply of credit, including mortgages. Mortgage rates are generally a product of the supply of and demand for mortgages. Other factors, such as the prevalence of mortgage defaults, unemployment rates, and home prices, also can affect the supply of and demand for mortgages and thus also influence costs. Default and foreclosure. Default and foreclosure rates peaked in 2010 but trended downward through 2013. From 2000 through 2006, mortgage performance was relatively stable. The rate of default was below 1 percent, and the foreclosure inventory rate—the percentage of total mortgage loans in foreclosure—was below 2 percent (see fig. 2). These rates then rose to historic levels, the default rate reaching nearly 5 percent and the foreclosure inventory rate reaching 4.6 percent in the first quarter of 2010. Through 2013, the rates declined, suggesting some recovery in the housing and mortgage market. But at the end of 2013, the foreclosure inventory rate remained at 2.9 percent, according to data from the Mortgage Bankers Association. As we reported earlier, more aggressive lending practices—that is, an easing of underwriting standards and wider use of certain loan features associated with poorer loan performance—contributed to the increases in default and foreclosure rates that began in the third quarter of 2006. In addition, declining house prices left borrowers more likely to have negative equity (owing more on a mortgage loan than the property is worth), which also contributed to the increases in defaults. Higher default rates may result in higher total losses for lenders on their loans. Originations of riskier loan types declined to low levels after 2007. For example, the share of nonprime mortgages (Alt-A and subprime) decreased from about 40 percent in 2006 to less than 5 percent in 2008 (see fig. 3). As noted in our 2009 report, the nonprime market segment featured a number of nontraditional products and characteristics. Some of the features of these products, such as low or no documentation of borrower income and assets, are prohibited or limited under the final ATR/QM rule. Since the decline in originations of riskier loan types, the market share of other loan types and products increased. For example, conforming and government-insured or guaranteed loans constituted the majority of loan originations from 2008 through 2014 (see fig. 3). More specifically, such loans accounted for about 80 percent or more of the market during this period. Furthermore, the share of jumbo loans hit a low in 2009 (5.5 percent) and then generally increased through 2014 (20.1 percent). Although larger than the conforming loan limit established by the enterprises, jumbo mortgages are generally considered prime mortgages and not Alt-A or subprime. Although the data on ARMs with risky features are limited, those data suggest that the availability of these features declined between 2005 and 2007.
For example, originations of subprime ARMs and Alt-A option ARMs increased rapidly from 2000 through 2005, but fell markedly in subsequent years. About 262,000 subprime ARMs were originated in 2000, but this number grew seven-fold to about 1.8 million originations in 2005 (the peak of the market for subprime ARMs). By 2007, the number of these loans declined to about 214,000. Likewise, originations of Alt-A ARMs increased substantially, from about 10,000 loans in 2000 to more than 893,000 in 2005, but declined to about 249,000 by 2007. These nontraditional loans generally had fixed interest rates for short initial periods and then would convert to indexed rates higher than traditional ARMs—which could result in payment shock (large increases in monthly payments). Also, some lenders may have determined a borrower's ability to repay an ARM based on the initial monthly payment, rather than the higher payments if rates were to increase. As we and others have found, subprime hybrid and Alt-A option ARMs had significantly higher rates of serious delinquency (in default or foreclosure) than other subprime and Alt-A loans. Since 2008, measures of the credit risk of purchase mortgages (such as borrower credit scores and debt-to-income ratios) were consistent with lenders tightening underwriting standards. Underwriting standards, such as those of FHA and the enterprises, include assessments of these measures. For example, the enterprises have a debt-to-income ceiling of 45 percent. A credit score is a numeric value that represents a borrower's potential credit risk, based on his or her credit history. Generally, a higher score indicates greater credit quality and potentially lower likelihood of default. Lenders continue to use credit scores as a primary means of assessing whether to originate a loan to a borrower. As shown in figure 4, credit scores for purchase loans fluctuated but exhibited an upward trend since 2004. For example, average scores for these borrowers rose from 704 in January 2004 to 750 in December 2013. As shown in figure 4, the average debt-to-income ratio for purchase loans increased to a high of about 40 percent in early 2008 and subsequently decreased to 34 percent in December 2013. Lenders use debt-to-income ratio as a key indicator of a borrower's capacity to repay a loan. The ratio represents the percentage of a borrower's income that goes toward all recurring debt payments, including the mortgage payment. A higher ratio is generally associated with a higher risk that the borrower will have cash flow problems and may miss mortgage payments. A decline in debt-to-income ratios is consistent with a tightening of credit availability for borrowers with higher debt burdens. However, the data provider and others have noted that the data are often missing debt-to-income information and debt-to-income ratios are often calculated inconsistently. Nonetheless, research by CoreLogic suggests that lenders in recent years originated loans with lower debt-to-income ratios. Finally, average (mean) loan-to-value ratios for purchase loans increased since 2006. For example, from January 2003 to August 2006, average monthly loan-to-value ratios hovered around 80 percent. From September 2006 to November 2009, average monthly loan-to-value ratios increased about 5 percentage points to 85.6 percent. Since then, the ratios declined slightly, but remained higher than 2003–2006 levels.
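The two underwriting ratios discussed above are straightforward to compute; the borrower figures below are hypothetical and chosen only to land near the averages cited above.

```python
# Sketch of the two underwriting measures discussed above; borrower figures are hypothetical.

def debt_to_income(monthly_debt_payments, gross_monthly_income):
    """Share of gross monthly income going to all recurring debt, including the mortgage."""
    return monthly_debt_payments / gross_monthly_income

def loan_to_value(loan_amount, property_value):
    """Share of the property value that is financed; higher values mean less borrower equity."""
    return loan_amount / property_value

print(round(debt_to_income(2_400, 6_000), 2))      # 0.40, near the early-2008 peak cited above
print(round(loan_to_value(240_000, 280_000), 3))   # 0.857, similar to the 85.6 percent average
```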
The continuing prevalence of higher ratios may be due in part to the increasing share of originations from FHA, which had an average loan-to-value ratio of about 96 percent for purchase loans originated from October 1999 to July 2014. (FHA provided the average loan-to-value ratio data.) These statistics also may not reflect the prevalence of borrowers obtaining both first- and second-lien mortgages (“piggyback” loans); such loans were most prevalent in 2005 and 2006 and more recently accounted for less than 1 percent of the market. The higher the loan-to-value ratio when a loan is originated, the less equity borrowers will have in their homes and the more likely they are to default on mortgage obligations, especially during times of financial stress.

Overall, agencies, market participants, and observers estimated that the QM and QRM regulations would not have a significant effect initially because many loans made in recent years already met QM and QRM criteria before these regulations were promulgated. Our review of economic analyses showed that researchers estimate that the majority of mortgages originated in recent years likely would have met the requirements for QM and QRM loans. The recently finalized risk-retention rule aligns the definition of QRM with QM. Estimates from the studies we examined pertaining to loans made in recent years suggest that if lenders continued current practices, a majority of loans would meet requirements for QM loans. The studies used a variety of methodologies, including trend analysis with historical data, comparisons with established baselines, and surveys of market participants. Furthermore, in a July 2011 report, we found that the majority of loans originated from 2001 through 2010 would have met most of the individual QM criteria. The Federal Reserve estimated that 14 percent of Fannie Mae and Freddie Mac refinance mortgages in 2010 had debt-to-income ratios above 43 percent. In comparison, the Federal Reserve estimated that 25 percent of Fannie Mae and Freddie Mac purchase mortgages and 31 percent of Fannie Mae and Freddie Mac refinance mortgages in 2006 had debt-to-income ratios above 43 percent. See Board of Governors of the Federal Reserve System, Mortgage Market Conditions and Borrower Outcomes: Evidence from the 2012 Data and Matched HMDA-Credit Report Data, Federal Reserve Bulletin vol. 99, no. 4 (Washington, D.C.: November 2013). (Under the ATR/QM rule, certain small creditors operating predominantly in rural or underserved areas can make mortgages with balloon payments, and those mortgages can qualify as QM loans.)

Although the QM regulations were not expected to have a significant effect on the overall mortgage market initially, some researchers and participants estimated that they could adversely affect certain borrowers. For example, a study we reviewed indicated that a narrow debt-to-income threshold may disproportionately affect minorities and people living in high-cost areas. Specifically, the higher cost of housing in certain areas can increase these ratios. In addition, some market participants raised the concern that lenders might be restricting lending to borrowers near the 43 percent debt-to-income or the 3 percent points and fees thresholds, out of concern that calculation errors could result in a non-QM loan.

One study we reviewed examined mortgages made between 2005 and 2008 and found that loans that did not appear to meet the QM standards accounted for a large share of all defaults during this period. (Due to data limitations, the researchers were not able to identify the entire universe of QM loans.) However, the study also found a number of performing loans that did not appear to meet the QM standard. Specifically, 25 percent of nondefaulting mortgages made between 2005 and 2008 did not appear to meet the QM standards.
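Because available datasets do not record QM status directly, studies like those described above approximate it by screening loan records against proxies for the QM criteria. The minimal sketch below, in Python, illustrates one such screen; the thresholds are simplified from the rule as described in this report, and the field names and sample records are illustrative assumptions rather than any study's actual methodology.

# Minimal sketch of a simplified QM screen of the kind the studies applied.
# Thresholds are simplified; field names and sample records are hypothetical.
QM_DTI_CAP = 0.43        # 43 percent debt-to-income ceiling
QM_FEE_CAP = 0.03        # 3 percent cap on points and fees (the actual cap
                         # varies with loan size; simplified here)
QM_MAX_TERM_YEARS = 30   # loans with terms longer than 30 years are not QM

def looks_like_qm(loan):
    """Return True if a loan record passes this simplified QM screen."""
    if loan["dti"] is None or loan["dti"] > QM_DTI_CAP:
        return False  # missing or high debt-to-income ratio
    if loan["points_and_fees"] / loan["amount"] > QM_FEE_CAP:
        return False  # points and fees above the cap
    if loan["interest_only"] or loan["negative_amortization"] or loan["balloon"]:
        return False  # product features generally prohibited for QM
    if loan["term_years"] > QM_MAX_TERM_YEARS:
        return False
    return True

sample = [
    {"dti": 0.38, "points_and_fees": 4500, "amount": 200000, "term_years": 30,
     "interest_only": False, "negative_amortization": False, "balloon": False},
    {"dti": 0.47, "points_and_fees": 3000, "amount": 150000, "term_years": 40,
     "interest_only": True, "negative_amortization": False, "balloon": False},
]
share = sum(looks_like_qm(loan) for loan in sample) / len(sample)
print(f"Share of sample passing the simplified QM screen: {share:.0%}")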
Some observers noted that because the QM standards do not include a measure of creditworthiness (such as credit score) or a loan-to-value ratio requirement, some QM loans may have characteristics associated with higher default rates. As we reported in 2005 and 2010, loans with higher credit scores, lower loan-to-value ratios, or both perform better than loans with low credit scores, higher loan-to-value ratios, or both, all else being equal. Because non-QM loans present higher liability risks, lenders may impose stricter underwriting requirements for those loans, such as higher credit scores, lower loan-to-value ratio thresholds, or both. See GAO, Mortgage Financing: Actions Needed to Help FHA Manage Risks from New Mortgage Loan Products, GAO-05-194 (Washington, D.C.: Feb. 11, 2005); Nonprime Mortgages: Analysis of Loan Performance, Factors Associated with Defaults, and Data Sources, GAO-10-805 (Washington, D.C.: Aug. 24, 2010); and GAO-08-78R.

One survey of lenders found that respondents expected the QM regulations would produce a measurable reduction in credit availability, and two-thirds of respondents characterized the expected impact as moderate. The survey found mixed expectations on whether availability of all or only certain segments of mortgages would decline in response to the QM regulations. For example, 41 percent expected a reduction across all mortgages, and 40 percent of lenders expected a reduction only in non-QM lending. Furthermore, a third of lenders reported that they planned to restrict lending to QM segments only, and 29 percent indicated that they primarily would originate QM loans and only originate non-QM loans in targeted markets.

The Federal Reserve administered a survey of senior loan officers in July 2014 in which loan officers reported that approval rates decreased for some mortgage types in response to the ATR/QM regulations. The survey found that the reductions in approval rates were often smaller for larger banks. Among the surveyed banks, the majority stated that approval rates did not decline for prime conforming loans, but about a third reported a reduction. (Prime conforming loans include loans eligible for purchase by the enterprises—which include loans automatically designated as QM.) Among all banks surveyed that made nontraditional mortgages, more than half indicated that loan approval rates were lower for nontraditional purchase mortgages—which are often non-QM due to their product features—because of the ATR/QM regulations. Finally, more than half of the respondents indicated that the QM regulations had reduced application approval rates for prime jumbo home-purchase loans. However, in January 2015, another Federal Reserve survey of senior loan officers found that several large banks had eased lending standards for a number of categories of residential mortgages over the preceding 3 months; about 12 to 13 percent of the large banks surveyed indicated an easing of credit standards for QM and non-QM jumbo loans.
An article posted by the Urban Institute in August 2014 examined the effect the QM regulations might have on certain borrower and loan characteristics—such as borrowers with debt-to-income ratios above 43 percent, interest-only loans, adjustable-rate mortgages, and loans with small loan amounts—finding little variation in the proportion of such loans before and after implementation of the QM regulations. For example, the share of loans with debt-to-income ratios above 43 percent remained relatively steady at approximately 17 percent for Fannie Mae and Freddie Mac loans, 35 percent for Ginnie Mae loans, and 10 percent for bank portfolio loans. However, from January through July 2014, the share of loans with higher debt-to-income ratios declined slightly for enterprise loans. These statements and observations were made shortly after the QM regulation became effective.

Market participants with whom we spoke stated that the QM standards were unlikely to have a significant effect on the securitization of residential mortgages, largely because the majority of loans originated were expected to be QM loans. Representatives of credit rating agencies with whom we spoke indicated that they did not plan to require any additional credit enhancements when rating securities backed solely by QM safe-harbor loans. Market observers, including two credit rating agencies, also told us that there had been a relatively small volume (number and size) of private-label securitizations recently, consistent with the overall securitization trends we noted earlier. According to one of the larger credit rating agencies, the market issued 27 residential mortgage-backed securities in 2014, most of which contained only QM loans. Another larger credit rating agency told us that it rated 10 prime residential mortgage-backed securities in 2014 that included QM loans. Although three of the ten included non-QM loans, the proportion of non-QM loans was never greater than 2 percent in any transaction. Neither rating agency believed that an entirely non-QM transaction had been rated in 2014. Some observers told us that many non-QM loans originated after the QM regulations became effective had been held in portfolio, indicating they had not been securitized. However, observers noted that some securities have included non-QM loans and firms have discussed creating non-QM securities in the future.

According to federal agency officials, the primary costs associated with the QM regulations are increased litigation and compliance costs. Generally, lenders, investors, and borrowers incur litigation costs when borrowers file a legal claim challenging a lender’s efforts to assess the borrower’s ability to repay. Lenders incur compliance costs to ensure that they comply with QM regulations, such as by documenting their efforts to assess borrowers’ ability to repay. Estimates for potential litigation costs associated with the QM regulations varied. Lenders’ costs may increase due to potential litigation. The absence of safe harbor protection exposes higher-priced QM and non-QM loans to increased litigation risk. Both CFPB and credit rating agencies estimated increased litigation costs associated with non-QM loans. In contrast to CFPB, credit rating agencies also estimated increased litigation costs associated with higher-priced QM loans.
However, CFPB stated that its estimated costs for nonqualified mortgages “should reasonably serve as an upper bound for the costs of qualified mortgages.” CFPB’s estimate assumed that 20 percent of borrowers in foreclosure with non-QM loans would challenge a lender’s compliance with the ability-to-repay regulations. In contrast, the estimates of credit rating agencies about the probability of litigation ranged from 5 to 50 percent among borrowers in foreclosure with non-QM loans. Most significantly, the credit rating agencies considered whether the borrower was located in a judicial or nonjudicial foreclosure state. CFPB also assumed that 20 percent of the borrowers challenging the lender would prevail in litigation. In contrast, the credit rating agencies’ estimates for borrower success ranged from 10 to 75 percent. The credit rating agencies’ estimates differed from CFPB’s because of the different assumptions and methodologies used in their analyses.

Depending on the risk and costs associated with any additional litigation, lenders might manage these costs by passing them to borrowers in the form of higher loan costs or by limiting the volume of loans originated that likely would be subject to litigation risk. For example, CFPB estimated that the potential for increased litigation costs would cause interest rates for non-QM loans to increase by approximately 2.5 basis points. However, CFPB did not generate a similar estimate for high-priced QM loans. Following CFPB’s rule, the credit rating agencies published credit enhancement adjustments, which are used to offset potential investor losses due to increased risk of litigation, for high-priced QM and non-QM loans. Fitch estimated an adjustment of 65 basis points for high-priced QM loans and 40 basis points for non-QM loans. In contrast, Standard and Poor’s estimated an adjustment of 9 basis points for high-priced QM loans and 30 basis points for non-QM loans. The addition of these credit enhancements ultimately may increase the cost of funding these loans. The effect would be difficult to estimate because it is largely dependent on future housing market conditions, including the level of competition among lenders and among securitizers.

Although these estimates provide insights about the costs associated with the QM regulations, agency officials and observers with whom we spoke said that the estimates were limited by the unique legal requirements for originators and investors under the Dodd-Frank Act that we discussed earlier. The observers noted that they expected to revise their estimates once litigation had taken place. Thus, the actual litigation costs associated with QM may not be known for some time.

Market participants and industry observers did not believe that compliance costs associated with the ATR/QM regulations would hinder the functioning of the overall market, but they identified compliance costs that were likely to be passed to consumers. For example, they noted that complying with the documentation standards creates additional work and adds processing time, both of which result in increased costs. Costs also could rise if institutions needed to take additional steps to properly disclose information in their financial statements about QM and non-QM loans. But market participants also noted that compliance costs may vary by institution and the degree to which an institution could realize certain economies of scale.
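The litigation-cost estimates above combine assumptions about how often borrowers in foreclosure raise an ability-to-repay challenge, how often they prevail, and the resulting loss to the lender. The following minimal sketch, in Python, illustrates that expected-cost arithmetic; every input is a hypothetical assumption for illustration rather than a figure published by CFPB or the rating agencies, and the simple multiplicative model is an illustration rather than either party's actual methodology.

# Minimal sketch of the expected-litigation-cost arithmetic discussed above.
# All inputs are hypothetical assumptions for illustration only; the simple
# multiplicative model is not CFPB's or the rating agencies' methodology.
foreclosure_rate  = 0.03  # share of loans expected to reach foreclosure
challenge_rate    = 0.20  # share of those borrowers raising an ATR claim
borrower_win_rate = 0.20  # share of challenges the borrower wins
loss_given_claim  = 0.15  # lender loss from a successful claim, as a share
                          # of loan balance (damages, fees, delays)

expected_cost = (foreclosure_rate * challenge_rate *
                 borrower_win_rate * loss_given_claim)
print(f"Expected litigation cost: {expected_cost * 10000:.1f} basis points "
      "of loan balance")
# A lender pricing this risk into the note rate would spread the amount over
# the expected life of the loan rather than charging it all at origination.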
Some market participants indicated that compliance costs were significant for all originators, regardless of size, but added that these costs were related to more than just the QM regulations and included the costs of implementing Basel III standards.

According to agency officials and observers, the QRM regulations, which were finalized in December 2014, were unlikely to have a significant effect on the availability of residential mortgages under current market conditions. A loan meeting QM standards automatically is QRM-eligible; therefore, securities collateralized solely by QM loans will not require securitizers to retain any of the risk. Securitizers generally must retain at least 5 percent of the credit risk associated with any securitization collateralized by any non-QRM loans. Securitizers may allocate the retention obligation to an originator if the originator has contributed at least 20 percent of the balance of a loan pool collateralizing mortgage-backed securities. As discussed earlier, agency officials and market observers anticipate that the majority of loans will conform to QM standards and therefore believe that the QRM regulations will not have a substantial effect on the availability of residential mortgages for most borrowers.

Since the risk-retention rule equated QRM with QM, mortgage market participants likely would incur few or low additional costs, if any, in ensuring that loans met the definition of QRM. As discussed earlier, the primary costs associated with the QM regulations are litigation and compliance costs. These additional costs may be passed to borrowers. To ensure compliance with the QRM regulations, lenders and securitizers might need to take additional steps to properly disclose information in their financial statements about QRM and non-QRM loans. According to the regulators, aligning the QRM definition with QM would meet the statutory goals and directives to limit credit risk and preserve access to affordable credit, while at the same time facilitating compliance. Specifically, in the final QRM regulations the agencies noted that the markets for residential mortgages exempted under the final rule (that is, QRM mortgages) are expected to be large, resulting in significant liquidity and economies of scale and little to no impact on the securitization of these mortgages. For non-QRM securities, the Federal Reserve estimated in October 2014 that a risk-retention requirement of 5 percent would add 25 basis points at most to a borrower’s costs. However, the studies we reviewed and the mortgage market participants with whom we spoke did not suggest that these costs would disrupt the functioning of the mortgage market.

The effect of the QRM regulations on the securitization of residential mortgages is likely to be limited in the current market. By equating the definition of QRM with QM, the majority of loans currently being originated likely would be considered QRM-eligible and, therefore, not subject to risk retention. However, changes to the role of the federal government in relation to the structure of the market for residential mortgage-backed securities could change the expected effects of the QRM regulations. The final QRM regulations exempt certain securitizations from the risk-retention requirements, including securitizations that have the full guarantee of the enterprises and securitizations guaranteed by Ginnie Mae. The exemption for enterprise-guaranteed securitizations applies while Fannie Mae and Freddie Mac operate under federal conservatorship.
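The retention and allocation mechanics described above reduce to simple arithmetic. The following minimal sketch, in Python, applies the 5 percent retention requirement, the 20 percent originator-allocation condition, and the Federal Reserve's estimated 25-basis-point upper bound to hypothetical pool and loan figures; the dollar amounts are illustrative assumptions.

# Minimal sketch of the risk-retention arithmetic described above.
# Pool and loan figures are hypothetical; the 5 percent, 20 percent, and
# 25-basis-point parameters come from the text.
pool_balance       = 500_000_000  # hypothetical non-QRM loan pool ($)
retention_share    = 0.05         # sponsor must retain at least 5% of credit risk
originator_balance = 120_000_000  # hypothetical balance contributed by one originator

required_retention = retention_share * pool_balance
can_allocate = originator_balance / pool_balance >= 0.20

print(f"Required retention: ${required_retention:,.0f}")
print(f"Retention may be allocated to the originator: {can_allocate}")

# Federal Reserve upper-bound estimate: 25 basis points added to borrower cost.
loan_amount = 250_000  # hypothetical loan
print(f"Added cost at 25 basis points: about ${loan_amount * 0.0025:,.0f} per year")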
The exemption for securitizations guaranteed by the enterprises is separate from the QM temporary exemption for enterprise loans discussed earlier. According to Inside Mortgage Finance, the enterprises had a dominant share of the residential securitization market during 2013, with about 66 percent of mortgage originations made through the enterprises. Ginnie Mae guaranteed about 22 percent of the residential securitization market in 2013. See 79 Fed. Reg. 77749 (§__.8) (Dec. 24, 2014) and 79 Fed. Reg. 77761 (§__.19(b)(1)) (Dec. 24, 2014). As noted earlier, the volume and size of recent private-label securitizations have been small compared with the volume and size of such securitizations from 2005 through 2007. According to the credit rating agencies with which we spoke, the majority of the loans that make up these recent securities met the QM definition and therefore would be considered QRM-eligible. Observers estimated that non-QRM loans sold to the private-label secondary market likely would be low-risk loans, such as interest-only loans to high-wealth borrowers.

However, some have cautioned that aligning QM and QRM might restrict the secondary market for non-QRM loans and therefore limit the origination of these loans. For example, the risk-retention rule states that “the agencies recognize that aligning the QRM and QM definitions has the potential to intensify any existing bifurcation in the mortgage market that may occur between QM and non-QM loans, as securitizations collateralized by non-QMs could have higher funding costs due to risk-retention requirements in addition to potential risk of legal liability under the ability-to-repay rule.” The agencies acknowledged this risk but decided that not aligning the QRM and QM definitions likely would result in even greater segmentation in the securitization market and higher costs for consumers. Furthermore, the final risk-retention rule requires that securitizations with blended pools of QRM and non-QRM loans be subject to the risk-retention requirements. As noted in the preamble to the rule, the QRM agencies (FDIC, Federal Reserve, FHFA, HUD, OCC, and SEC) anticipated that “QM and non-QM loans are less likely to be combined in a pool because of the different risk profiles and legal liabilities associated with these loans.” Some industry observers pointed out that the small volume of non-QRM loans, which will be subject to risk-retention requirements, may not be sufficient to result in a fully functioning securitization market for such loans. Similarly, the preamble to the rule states that “securitization typically is a more cost-effective source of funding when the underlying pool includes a large number of loans.” Although some private-label securities included both QM and non-QM loans in the same securitization in 2014, one rating agency noted that it had not rated any transactions that consisted entirely of non-QM loans and did not believe that such a transaction had closed during 2014. At least one sponsor plans to create a mortgage-backed security that is wholly non-QRM, according to one rating agency. Although some lenders have been making and holding non-QRM loans in their portfolios, the lack of a robust market for non-QRM securities may limit some lenders’ willingness to underwrite non-QRM loans.

Some investors expressed the concern that the adopted QRM regulations did not increase investor protections for higher-risk loans that were QM-eligible. Specifically, the QRM regulations permit security sponsors to include QM-eligible loans with high-risk characteristics, such as high loan-to-value ratios and low credit scores, without imposing a risk-retention requirement.
The risk-retention rule does not incorporate requirements for a loan-to-value ratio or a borrower’s credit history because of concerns that the additional requirements might disproportionately affect low- and moderate-income, minority, or first-time homebuyers. Furthermore, the agencies believe the QRM requirements appropriately minimize regulatory compliance burdens in the origination of residential mortgage loans. According to an institutional investor advisor, investors would prefer to rely on risk retention as a method for holding mortgage originators and securitizers accountable. Outside of the risk-retention/QRM regulations, investors now have access to additional information that they could use to require sponsors to retain some of the credit risk of loans that make up a mortgage-backed security. Previously, investors typically lacked detailed information about the pool of loans that made up securities. However, SEC recently revised regulations for registered offerings of asset-backed securities to require that certain loan-level information for residential mortgage-backed securities (among other asset classes) be made available at the time of the offering and on an ongoing basis.

Due to the unavailability of certain important data elements, researchers faced challenges when analyzing the short-term and long-term potential effects of the QM and QRM regulations. Similarly, we previously reported that these data limitations make evaluating the potential effects of the QM and QRM regulations difficult, as detailed in the following examples.

Debt-to-income ratios are key elements for identifying QM and non-QM loans. However, as we and others have found, this information is often unreliable or missing. Datasets frequently do not contain debt-to-income information for subprime and Alt-A loans, and the data that are available often may be unreliable. Information on the points and fees borrowers incur is also key to identifying QM and non-QM loans. However, this information is not maintained in any available database, according to agency officials and observers. Without this information, it is difficult to determine whether a loan complied with the QM requirement for a 3 percent cap on points and fees. For these reasons, conclusively identifying the universe of QM loans is difficult. Instead, the studies must rely on other indicators of QM loans, such as the absence of certain prohibited features or markers indicating that the loan was fully documented.

Researchers also often faced challenges establishing a baseline for assessing the effect of the QM regulations. As discussed earlier, the housing market is highly cyclical, and the early 2000s saw a major expansion in many segments of the market. As such, the choice of a baseline can significantly affect a study’s findings. For example, choosing an immediate precrisis baseline may make it appear that regulations were having a larger effect than they would with a postcrisis baseline. Baseline choices can result in different findings on the potential future effect of QM.

Mortgage market participants also told us that it would be difficult to isolate the effect of the regulations on the availability of mortgages because of other changes affecting the mortgage market. For example, many mortgage originators are also subject to the new CFPB servicing requirements. As a result, it is difficult to attribute any changes observed in the mortgage market directly to the finalization of the QM and QRM regulations.
The long-term implications of the QM and QRM regulations for the mortgage market depend on several factors that are difficult to predict. For example, lender willingness to make non-QM loans (particularly to certain borrowers, such as those with high debt-to-income ratios) and the cost of these loans are unknown. In addition, the future role of the enterprises in the residential mortgage market has yet to be determined, and the mortgage activities of federal agencies may change (many proposals have been introduced to change the single-family housing finance system). Moreover, the QM and QRM regulations may change over time. For example, CFPB took action to expand the exemption for small lenders after the rule had been finalized. Finally, the activities of nongovernmental and private participants can change over time.

In a 2014 report that assessed protection for mortgage securities investors, we found that the ATR/QM regulations might set a floor to the loosening of credit and help prevent a repeat of the deterioration of lending standards that contributed to the 2007–2009 financial crisis. The QM and QRM regulations provide incentives to originate QM and QRM loans. For example, originating a QM loan provides litigation protection for the lender and the assignee if the loan is sold to an investor. Similarly, securitization sponsors are not required to retain any portion of the credit risk of QRM loans if the securitization exclusively comprises QRM loans. Should underwriting standards begin to loosen and lenders become more willing to offer loans that do not meet QM or QRM standards, these incentives may deter some lenders from loosening standards beyond the limits specified in the regulations. Although the regulations may help limit high-risk mortgage lending in future market expansions, some activities are not prohibited by statute (for example, non-QM mortgage loans still can have negative amortization and interest-only payments). Nonetheless, lenders must assess borrowers’ ability to repay for all loans, including any non-QM loans lenders may originate and sponsors may securitize.

CFPB and HUD have begun planning for their reviews of the QM regulations. CFPB identified potential outcomes, data sources, and analytical methods for examining its QM regulations, but had not finalized its plans. HUD identified outcomes and potential data sources, but had not identified specific metrics, baselines, and analytical methods for examining its regulations. The agencies responsible for the QRM regulations identified outcomes and potential data sources and analytical methods, but had not yet identified specific metrics and baselines for examining the QRM regulations.

In response to the Dodd-Frank requirement to review significant rulemakings, CFPB has made efforts to identify data, but as of May 2015 had not finalized a plan that specified what outcomes and methodologies—such as metrics, baselines, and analytical methods—it would use to examine the effects of the QM regulations. CFPB discussed some potential plans to review the QM regulations in the final ATR/QM rule but has not since finalized a plan for its analysis.
The Dodd-Frank Act requires CFPB to assess “the effectiveness of the rule or order in meeting the purposes and objectives of this title [Title X—Bureau of Consumer Financial Protection] and the specific goals stated by the Bureau.” Furthermore, Executive Order 13563 states that the regulatory system “must measure, and seek to improve, the actual results of regulatory requirements.” But CFPB has not yet completed plans for how it intends to examine the QM regulations. For instance, a review addressing the purposes of the title might include outcomes such as the effects of the regulations on the overall housing market, the cost or availability of credit to borrowers, the regulatory burden on industry participants, or the protection of consumers from unsustainable mortgage products. The choice of outcomes to be examined plays a key role in the selection of appropriate or relevant data, baselines, and analytical methods. For example, examining the cost and availability of mortgage credit could require different data elements and analysis than examining the effectiveness of the QM regulations in preventing defaults and foreclosures.

To date, CFPB has identified several potential data sources it could use to examine the QM regulations. For example, CFPB identified data collected to meet the requirements of the Home Mortgage Disclosure Act (HMDA). HMDA data currently include information about mortgage applications, originations, and loans purchased on the secondary market. However, HMDA currently does not contain information needed to determine whether a loan is QM or non-QM. The Dodd-Frank Act directs CFPB to expand HMDA data reporting requirements. For example, it directs the collection of points and fees information, interest rate spreads, and certain other loan features. CFPB also has proposed to collect additional information (such as borrowers’ debt-to-income ratios and whether the loan meets the QM standard) that could be used to examine the QM regulations. However, the data elements may not be finalized as proposed and may not be available at the time CFPB conducts its analysis (the report on the review must be published no later than Jan. 10, 2019). CFPB had not finalized the HMDA proposal as of April 2015. Once the new HMDA reporting requirements are finalized, CFPB officials said, lenders will need time to modify their systems to comply with the new reporting requirements, collect the data, and report the data to CFPB. CFPB officials indicated that the earliest the new data might be collected would be in 2017. Moreover, HMDA data do not include and are not planned to include information about the performance of loans—such as default, delinquency, and foreclosure. According to agency officials, loan performance information would be important to fully examine the effects of the QM regulations.

CFPB also entered into a partnership with FHFA to build the National Mortgage Database (NMDB), which will contain loan-level information about the mortgage, borrower, and property for a nationwide sample of 5 percent of borrowers from credit bureau files. Information from a credit bureau (such as borrowers’ credit scores and payment history on the mortgage) will be supplemented with data from other sources, such as HMDA and property valuation models, to create a comprehensive profile for each mortgage in the database. FHFA officials said NMDB is planned to include borrowers’ debt-to-income ratios, points and fees, the interest rate of the loan, and information on loan performance.
Although the data used to create NMDB include personally identifiable information, the database will not contain personally identifiable information, according to CFPB officials. The database is not yet available. FHFA officials anticipated merging the data sources in 2015 and conducting analyses using the database at the end of 2015 or in 2016. FHFA officials have noted some concerns about the reliability of some of the data, such as inconsistent definitions used for the debt-to-income ratio at loan origination. Furthermore, many loan records do not contain any information for some data elements, such as the debt-to-income ratio. Ultimately, FHFA officials hope to obtain debt-to-income information from HMDA, which they anticipate will be a reliable data source. But, as noted earlier, the expanded HMDA data will not be collected until at least 2017. According to CFPB and FHFA officials, the agencies have worked together to develop the specifications of the database, such as identifying data elements. CFPB has provided financial support to create the database, and FHFA has developed the infrastructure and hardware for NMDB.

CFPB also identified loan-level data from two private vendors, CoreLogic and BlackBox LLC, that contain data similar to HMDA (such as origination data) and the forthcoming NMDB (such as loan performance information). CFPB suggested that it could use these datasets to conduct analysis similar to the one it conducted when developing the ATR/QM rule. For example, CFPB used data from the two private vendors to estimate the percentage of loans that would have qualified as QMs from 1997 through 2003 and in 2011. However, according to CFPB officials, CoreLogic and BlackBox LLC data do not contain any information on points and fees or reliably contain borrowers’ debt-to-income ratios. CFPB officials said they could estimate the points and fees by deriving them from the stated interest rate and APR of the loan, but cautioned that determining what charges were included in the APR calculation was complex. Any analyses utilizing this approach would need to consider and potentially correct for any bias in the missing data.

CFPB officials said they have been collecting qualitative information from various sources to monitor the initial effects of the QM regulations on the residential mortgage market. For example, CFPB officials said they have been tracking industry news, reviewing reports from media outlets, and reviewing reports published by institutions and market participants (such as credit rating agencies and some lenders) and the Federal Reserve. In addition, CFPB officials said they have held informal conversations with lenders at industry events and conferences to obtain their views on the effects of the QM regulations. CFPB officials said this information alone would not be enough to examine the QM regulations, but would inform their approach for examining the regulations.

OMB encourages agencies to preplan efforts to retrospectively review their regulations to improve the effectiveness of the reviews. OMB suggests that agencies identify metrics to evaluate regulations, identify baselines for their planned analyses, and ensure they have robust models to conduct their analyses. Furthermore, when promulgating regulations, OMB encourages agencies to give careful consideration to how to promote empirical testing of the effects of the rules during retrospective reviews.
We found in a July 2007 report that agencies would be better prepared to undertake reviews if they identified what data and measures would be needed to assess the effectiveness of a rule before they started a review and, indeed, before they promulgated the rule. CFPB officials told us that they had not yet finalized a plan for their retrospective review because they had been focusing first on developing and finalizing the mandated regulations. Congress required CFPB to issue the QM regulations within 18 months of the “designated date” for the transfer of consumer financial protection functions under section 1061 of the Dodd-Frank Act to CFPB from other agencies. CFPB officials told us that a plan to assess the QM regulations is critical. But, as of May 2015, CFPB officials were still working to finalize a review plan and could not tell us what outcomes they would measure and what data and methodologies they would use to examine the effectiveness of these regulations. Without a plan to assess the QM regulations, CFPB may be limited in its ability to effectively examine the regulations by the mandated deadline. Such a plan will be particularly important because of the uncertainty about the availability and timing of needed data, which may necessitate consideration of alternative analytic strategies and data sources. See Pub. L. No. 111-203, § 1400(c). See also Pub. L. No. 111-203, § 1062 for the requirement that the Secretary of the Treasury designate a date for the transfer of responsibility, among others, for promulgating regulations under various federal consumer financial laws to CFPB, and 75 Fed. Reg. 57252 (Sept. 20, 2010) for Treasury’s designation of July 21, 2011, as the transfer date.

HUD officials stated that HUD plans to review its QM regulations as part of the department-wide plan for retrospective review of regulatory actions. However, HUD does not maintain key data that it would need to conduct the reviews—such as information on points and fees and interest rate spreads (criteria for determining whether a loan is safe harbor or rebuttable presumption) and data needed to calculate residual income. (Lenders can use a borrower’s residual income as one measure of ability to make a mortgage payment.) To mitigate the data gaps, HUD officials said they have considered using HMDA and NMDB data. But, as we discussed previously, the availability dates of the expanded HMDA and NMDB data—such as information on points and fees—are not known. As of May 2015, the agency also had not identified how it would measure the effects of these regulations, including metrics, baselines, and analytical methods. HUD officials stated that they have not finalized plans for their review of the QM regulations because of the uncertainty about the availability of data resources, such as NMDB. They noted that once the NMDB database was released, they would be able to determine whether it could be used as a resource to monitor and examine QM lending. But, without a plan that identifies how to obtain necessary data and that identifies metrics, baselines, and analytical methods, HUD may be limited in its ability to effectively review its regulations and achieve the intended outcomes of its reviews.

Agency efforts to assess the QRM regulations included identifying outcomes and potential data sources and methodologies, but the agencies have not yet identified specific metrics, baselines, or analytical methods. The six agencies responsible for the QRM regulations—FDIC, FHFA, Federal Reserve, HUD, OCC, and SEC—have committed to commence a review of the QRM definition no later than 4 years after the effective date of the final rule (Dec. 24, 2015, for the QRM-related provisions), and every 5 years thereafter.
In the risk-retention final rule, the agencies recognized that mortgage and securitization market conditions and practices change over time and therefore stated that it would be beneficial to review the QRM definition. More specifically, the agencies would consider the structures of securitizations, the roles of the various transaction parties, the relationships between enterprise and private-label markets, and trends in mortgage products in various markets and structures. They also stated that they would review how the QRM definition affected residential mortgage underwriting and securitization under evolving market conditions. The agencies noted the timing would help ensure that the initial review of the QRM definition benefitted from CFPB’s review of the ability-to-repay rules, including the QM definition, and would help the agencies in determining whether the QRM definition should continue to fully align with the QM definition in all aspects.

Agency officials said their efforts have included identifying potential data sources for the review of the QRM regulations. For example, they noted that they likely would use mortgage data sources similar to those utilized when developing the QRM regulations, such as loan-level data from the enterprises and information on private-label mortgage-backed securities from a private data vendor. However, agency officials acknowledged that the data available from these sources are missing key information, such as points and fees and borrowers’ debt-to-income ratios, needed to determine whether a loan is QM or non-QM, and consequently QRM-eligible. In addition, agency officials identified HMDA and NMDB as possible data sources. However, these databases currently do not collect information on points and fees and debt-to-income ratios, which may limit their usefulness for examining the QRM regulations. Finally, the agencies have considered using data collected through the Fannie Mae Mortgage Lender Sentiment Survey and the Mortgage Bankers Association’s Mortgage Credit Availability Index to help examine the QRM definition. Although agency officials have identified several data sources, they have not established which data elements to select or how they would be used to assess the QRM regulations. (The Mortgage Lender Sentiment Survey is a quarterly online survey of senior executives at Fannie Mae’s lending institution partners. The survey covers industry topics such as credit standards, consumer mortgage demand, and mortgage execution. The monthly Mortgage Credit Availability Index is calculated using a borrower’s credit score, loan type, and loan-to-value ratio, among other factors. The index is a summary measure that indicates the availability of credit at a point in time.)

Agency officials said their efforts also have included identifying potential methodologies to assess the QRM regulations. For example, they plan to use information collected through their ongoing efforts to monitor broad trends and developments in the residential mortgage market, such as mortgage applications, originations, products, and securitizations, as well as loan performance. They also have considered examining loan volumes for QRM and non-QRM loans, as well as QM safe harbor and rebuttable presumption loans. Furthermore, they may conduct a trend analysis by comparing market data before and after the risk-retention rules became effective.
Finally, they said that they may look at early payment delinquencies and defaults of newly originated mortgages, as well as different kinds of QM loans, such as those covered under the QM temporary exemption for enterprise loans.

Although the agencies identified several retrospective review components—such as outcomes to examine and potential data sources and methodologies—they have not developed a plan that identifies specific metrics and baselines or committed to specific analytical methods. Agency officials stated that they have not developed more specific plans because they viewed their ongoing efforts to monitor broad mortgage market trends as sufficient. Additionally, agency officials expected additional information on the housing and mortgage market to be available for their review of the QRM regulations. They explained that the information would be important in determining whether the QRM definition was appropriate under prevailing market conditions. However, the timing, accuracy, and completeness of the data that may be available in time for the agencies to conduct their retrospective reviews (commencing no later than Dec. 24, 2019) are unclear. As we discussed previously, agencies can be better prepared to undertake their reviews and may be able to overcome or mitigate data challenges by identifying specific data sources, metrics, baselines, and analytical methods well before conducting the review, ideally before promulgating the rule.

Moreover, although agency officials acknowledged that the review of the QRM regulations necessitates interagency collaboration and said they plan to collaborate, the agencies have not yet identified specific mechanisms to promote effective collaboration. According to agency officials, the QRM agencies and CFPB held interagency meetings as agreed to during the promulgation of the QM and QRM regulations. The agencies plan to hold interagency meetings to conduct the reviews of the QRM regulations. OMB guidance encourages agencies with overlapping jurisdiction or expertise to determine how the agencies will coordinate to conduct retrospective reviews. In prior reports, we identified key practices for effective agency collaboration, including (1) agreeing on agency roles and responsibilities, (2) defining and articulating a common outcome, (3) establishing mutually reinforcing or joint strategies, and (4) identifying and addressing needs by leveraging resources. Without establishing a framework for collaboration, such as specifying the roles and responsibilities each agency will have, the agencies involved in the QRM reviews may be limited in their ability to measure the effects of the regulations within the established time frames for their review.

In promulgating the QM and QRM regulations, the federal agencies attempted to balance the goals of protecting borrowers and investors from the abuses that contributed to the recent housing crisis with the goal of maintaining access to affordable credit. While the QM and QRM regulations likely will have limited initial effects in the current mortgage market, the long-term implications of the regulations for the mortgage market depend on several factors that are difficult to predict. As such, it will be important for the agencies to conduct retrospective reviews of these regulations. However, federal agencies’ efforts to prepare for examining the QM and QRM regulations have not yet incorporated some important elements of effective reviews, as described below.
Although CFPB, HUD, and the QRM agencies identified potential data sources (such as HMDA and NMDB), these data sources do not maintain the information needed to reliably identify QM and QRM loans. CFPB and FHFA have been taking steps to expand these data sources. However, it is not clear if the expanded data will be available for the initial reviews. HUD has not identified specific metrics, baselines, or analytical methods to conduct its analyses. Although the QRM agencies identified potential analytical methods to conduct their analyses, they have not identified specific metrics and baselines. The six agencies conducting the review of the QRM regulations have not specified mechanisms to promote effective collaboration, such as agreements on agency roles and responsibilities.

Finalizing plans to retrospectively review the mortgage regulations and incorporating these key elements will better position the agencies to measure the effects of the regulations and identify any unintended consequences. The agencies also could better understand data limitations and methodological challenges and have sufficient time to develop methods to deal with these limitations and challenges. Furthermore, the QRM agencies could identify opportunities to effectively collaborate and assign duties and responsibilities to help ensure effective use of available resources.

We are making the following three recommendations.

To enhance the effectiveness of its preparations for conducting a retrospective review of its QM regulations, CFPB should complete its plan. The plan should identify what outcomes CFPB will examine to measure the effects of the regulations and the specific metrics, baselines, and analytical methods to be used. Furthermore, to account for and help mitigate the limitations of existing data and the uncertain availability of enhanced datasets, CFPB should include in its plan alternate metrics, baselines, and analytical methods that could be used if data were to remain unavailable.

To enhance the effectiveness of its preparations for conducting a retrospective review of its QM regulations, HUD should develop a plan that identifies the metrics, baselines, and analytical methods to be used. Furthermore, to account for and help mitigate the limitations of existing data and the uncertain availability of enhanced datasets, HUD should include in its plan alternate metrics, baselines, and analytical methods that could be used if data were to remain unavailable.

To enhance the effectiveness of their preparations for conducting a retrospective review of the QRM regulations, the agencies responsible for the QRM regulations—FDIC, FHFA, Federal Reserve, HUD, OCC, and SEC—should develop a plan that identifies the metrics, baselines, and analytical methods to be used and specifies the roles and responsibilities of each agency in the review process. Furthermore, to account for and help mitigate limitations of existing data and the uncertain availability of enhanced datasets, the six agencies should include in their plan alternate metrics, baselines, and analytical methods that could be used if data were to remain unavailable.

We requested comments on a draft of this report from CFPB, FDIC, Federal Reserve, FHFA, HUD, OCC, and SEC. We received written comment letters from each of the seven agencies, which are presented in appendixes III through IX. We also received technical comments from the agencies (except OCC) that we incorporated as appropriate.
In response to our QM-related recommendations, CFPB concurred and HUD agreed with the draft report recommendations in their comment letters. CFPB stated that it was on track to finish its retrospective review on time. In addition, CFPB provided additional details about the general approach, data, metrics, and analytical methods that were likely to be used in its review. To better recognize these planning steps, we expanded our description of CFPB’s planning efforts and modified the recommendation to emphasize that CFPB should complete its plan.

In response to our QRM-related recommendations, two of the six agencies (HUD and FDIC) stated that they agreed with the recommendations in their comment letters. The other four agencies (the Federal Reserve, FHFA, OCC, and SEC) did not explicitly agree with our recommendations but outlined activities or efforts related to planning for the retrospective review of the QRM definition. For example, the agencies discussed their ongoing data analysis of mortgage market trends and efforts to identify sources for data not currently available, such as debt-to-income ratios and points and fees. Furthermore, SEC identified several potential metrics it could use to examine the QRM definition. For example, SEC expects to examine delinquencies by debt-to-income ratios, among other things. The Federal Reserve noted that it was fulfilling much of our recommendation as part of its regular business operations. However, the agencies did not provide specific time frames for finalizing their approach for the retrospective reviews or explain how they plan to address uncertainty about the availability of key data needed for the review, such as debt-to-income ratios. For example, the Federal Reserve and SEC stated that their precise analytical approach to reviewing the definition of QRM will depend on data availability and mortgage market conditions. Additionally, all the agencies indicated that they planned to work collaboratively in conducting their retrospective reviews. FHFA and OCC stated they planned to begin preparing for the review after the QRM definition became effective (December 2015). Finally, two letters identified some mechanisms that could promote effective collaboration. For example, FDIC and SEC noted that the agencies intended to divide responsibilities according to agency expertise and resources.

But the agencies could and should be doing more to finalize their plans to retrospectively review the mortgage regulations. As we discussed in the draft report, agencies can be better prepared to undertake their reviews and may be able to overcome or mitigate data challenges by identifying specific data sources, metrics, baselines, and analytical methods well before conducting the review, ideally before promulgating the rule. It will be particularly important to have plans that address these elements because of the uncertainty about when and if needed data will be available, which may necessitate consideration of alternative analytic strategies and data sources. Incorporating these key elements also will better position the agencies to measure the effects of the regulations and identify any unintended consequences. The comment letters of the agencies involved in the QRM reviews also outlined a general approach to collaboration.
However, without establishing a specific framework for collaboration, such as specifying the roles and responsibilities each agency will have and defining and articulating a common outcome, the agencies involved in the QRM reviews may be limited in their ability to measure the effects of the regulations within the established time frames for their review.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairs of FDIC, the Federal Reserve, and SEC; the Comptroller of the Currency; the Directors of CFPB and FHFA; the Secretary of HUD; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X.

This report (1) describes selected trends in the origination and securitization of residential mortgages in 2000–2014; (2) discusses the expected effects of the qualified mortgage (QM) and qualified residential mortgage (QRM) regulations on the residential mortgage market; and (3) examines the extent to which federal agencies have plans in place to monitor and assess the effects of the QM and QRM regulations on the residential mortgage market.

To describe trends of residential mortgages from 2000 through 2014, we reviewed a range of mortgage market data generated by federal agencies, mortgage market participants, and observers and identified indicators that may be useful for gauging the effects of the QM and QRM regulations. We selected indicators associated with the origination and securitization of residential mortgages (including the volume of originations by certain characteristics, interest rates, foreclosure and default rates, and the volume of mortgage-backed security issuances), as described below.

To describe the volume of mortgage originations by certain characteristics, we relied on summary data published by Inside Mortgage Finance and data provided by CoreLogic LLC. For example, we examined Inside Mortgage Finance data describing the volume of originations by loan type—including conventional conforming, Alt-A, subprime, jumbo, and government-insured; type of interest rate (fixed- and adjustable-rate); and loan purpose (purchase and refinance). The Inside Mortgage Finance summary data do not include loans guaranteed by the Department of Agriculture. We did not independently confirm the accuracy of the Inside Mortgage Finance data. To determine the reliability of the data, we reviewed publicly available information on the data source and queried a knowledgeable official about the accuracy of the data. In addition, we examined CoreLogic LLC data describing the volume of loan originations by borrowers’ credit score, debt-to-income ratio, and loan-to-value ratio. The CoreLogic summary data include conventional loans as well as loans insured or guaranteed by the Federal Housing Administration and other federal programs. These data are restricted to first-lien mortgages for the purchase of properties. CoreLogic officials estimated that 99 percent of the loans were for single-family residential properties (1-4 units).
These data provide wide coverage of the national mortgage market—that is, approximately 85 percent of mortgages, according to CoreLogic officials. Due to the proprietary nature of CoreLogic’s estimates of its market coverage, we could not directly assess the reliability of this estimate. We have used CoreLogic data in prior reports in which we concluded the data were sufficiently reliable for our purposes. Nevertheless, because of limitations in the coverage and completeness of the data, our analysis may not be representative of the mortgage market as a whole. To determine the reliability of the CoreLogic data, we reviewed information on the data source and queried a knowledgeable official about the process CoreLogic used to collect its data and generate the summary data. Although the CoreLogic data have certain limitations—for example, certain data fields are not fully reported—we concluded that the data we used were sufficiently reliable for our purposes.

To describe mortgage interest rates, we relied on published data in Freddie Mac’s Primary Mortgage Market Survey. To determine the reliability of these data, we reviewed publicly available information on the data source. We determined the data were sufficiently reliable for our purpose, which was to provide information about how residential mortgage interest rates had changed over the relevant time period.

To describe the volume of mortgages in default and foreclosure and recession periods, we relied primarily on a prior GAO report that identified and analyzed key national housing market indicators. We used data collected for the prior report and reviewed our prior data reliability assessment. Based on this review, we determined that the data were reliable for our purposes. To update the data and analyses, we relied on several data sources, including the National Delinquency Survey data issued by the Mortgage Bankers Association and data issued by the National Bureau of Economic Research. Generally, we updated our assessments of the reliability of these data by reviewing existing information about data quality and corroborating key information. We determined that the data were sufficiently reliable for our purposes.

To describe the volume of mortgage-backed security issuances, we relied on summary data published by Inside Mortgage Finance. We did not independently confirm the accuracy of the data we obtained. However, we reviewed publicly available information on the data source and queried a knowledgeable official about the accuracy of the data. We determined these data were sufficiently reliable for our purposes.

To discuss the expected effects of the QM and QRM regulations on the residential mortgage market, we identified and reviewed 24 economic analyses examining the potential effects of these regulations. We identified these analyses through means that included consultation with subject-matter experts (internal and external to GAO), electronic searches of scholarly databases, and reviews of studies conducted by agencies to inform the rulemakings. Generally, the analyses examined the effects the regulations may have on the cost, origination, availability, and securitization of residential mortgages and were performed by federal agencies, academics, industry observers, and industry participants. To review the 24 analyses, we designed a data collection instrument to ensure we collected consistent information from each.
To develop the data collection instrument, we identified important characteristics of high-quality analyses from sources that included internal GAO guidance on reviewing economic analyses and other federal requirements and best practices for conducting economic reviews during the rulemaking process. GAO staff separately subjected each analysis to a primary and secondary review and independently verified that the collected information was accurate. The staff also used the data collection instrument to identify methodologies and any methodological concerns that may have precluded us from using the economic analyses. We did not exclude any of the economic analyses from our review. The team reviewed the information collected to identify trends across the analyses and identify estimated effects of the regulations. We believe the economic analyses are generally reliable for reporting the range of estimates of the effects of the regulations. We noted instances in which the analyses may have had methodological challenges or data were either missing or unreliable. We discussed any specific concerns about methodology or scope in this report. In addition to the 24 analyses, we reviewed three studies on the initial effects of the QM regulations that were conducted after the rule became effective. (See app. II for a list of the 27 studies we reviewed.) We did not apply our data collection instrument to these studies, but reviewed their findings and methodologies. We believe the three studies were sufficiently reliable for the purposes of describing immediate effects of the QM regulations. We also reviewed additional sources that contained information about potential effects of the QM and QRM regulations on the residential mortgage market. For example, we reviewed Federal Register releases and comment letters associated with the promulgation of the QM and QRM regulations. We also interviewed agency officials, stakeholders, and others to obtain their viewpoints about potential effects of these regulations. For example, we interviewed officials from the Consumer Financial Protection Bureau (CFPB), Department of Housing and Urban Development (HUD), Department of the Treasury's Office of Financial Research, Federal Deposit Insurance Corporation (FDIC), Federal Housing Finance Agency (FHFA), Board of Governors of the Federal Reserve System (Federal Reserve), Financial Stability Oversight Council, Office of the Comptroller of the Currency (OCC), and Securities and Exchange Commission (SEC). Stakeholders and others we interviewed included credit rating agencies; groups representing mortgage lenders, securitizers, and investors; groups representing consumer interests; and academics. We chose these groups and individuals because they represented a range of views. To examine the extent to which agencies have plans in place to monitor and assess the effects of the QM and QRM provisions on the residential mortgage market, we identified and reviewed requirements and guidance relating to agencies' efforts to monitor and assess regulations (criteria). Specifically, we reviewed the provision of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) that requires CFPB to assess its significant rules and publish a report of its assessment. We also identified and reviewed Executive Orders related to agencies' efforts to conduct retrospective reviews. Moreover, we identified and reviewed Office of Management and Budget (OMB) memorandums associated with these Executive Orders.
Finally, we reviewed prior GAO reports that examined agencies' efforts to conduct retrospective reviews of regulations. To examine efforts and plans to monitor and assess the effects of the QM and QRM regulations, we focused our review on the retrospective review activities of CFPB, HUD, and the six agencies responsible for the QRM regulations—FDIC, FHFA, Federal Reserve, HUD, OCC, and SEC. We did not evaluate the efforts of the Departments of Agriculture and Veterans Affairs to review their QM regulations because they had not promulgated their own rules when we began our analysis and because their programs represent a smaller portion of the residential mortgage market. To understand federal agencies' efforts and plans to monitor and assess the effects of the QM and QRM regulations, we reviewed Federal Register releases and other agency documents pertaining to retrospective reviews. For example, we identified and reviewed agency publications that contained plans to conduct retrospective reviews of the QM and QRM regulations, such as CFPB's 2013 and 2014 strategic plans, as well as HUD's final QM regulations, its 2014 and 2015 retrospective review plan, and its 2014-2018 strategic plan. During our review, we also examined CFPB's efforts to expand reporting requirements for Home Mortgage Disclosure Act (HMDA) data and examined the extent to which the expanded reporting might include data useful to monitor and assess the QM and QRM regulations. For example, we reviewed CFPB's 2014 proposed rule to expand HMDA reporting. Similarly, we reviewed FHFA's efforts to develop a National Mortgage Database and the extent to which it may include data to monitor and assess the QM and QRM regulations. We also interviewed federal agency officials (from CFPB, FDIC, FHFA, Federal Reserve, HUD, OCC, and SEC) about their plans to conduct retrospective reviews of the QM and QRM regulations. We conducted this performance audit from November 2013 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
American Bankers Association. 21st Annual ABA Real Estate Lending Survey Report. Washington, D.C.: 2014.
Ashworth, Roger, Laurie Goodman, Brian Landy, and Lidan Yang. "The Coming Crisis in Credit Availability." Amherst Mortgage Insight. May 30, 2012.
Bai, Bing.
Board of Governors of the Federal Reserve System. July 2014 Senior Loan Officer Opinion Survey on Bank Lending Practices. Washington, D.C.: August 4, 2014.
Board of Governors of the Federal Reserve System. Changes in U.S. Family Finances from 2007 to 2010: Evidence from the Survey of Consumer Finances. Washington, D.C.: June 2012.
Board of Governors of the Federal Reserve System. Report to the Congress on Risk Retention. Washington, D.C.: October 2010.
Bureau of Consumer Financial Protection. Ability-to-Repay and Qualified Mortgage Standards under the Truth in Lending Act. Washington, D.C.: January 2013.
Coalition for Sensible Housing Policy. Updated QRM Proposal Strikes Balance: Preserves Access While Safeguarding Consumers and the Markets. Washington, D.C.: October 24, 2013.
DBRS. Assessing U.S. RMBS Pools under the Ability-to-Repay Rules. New York, N.Y.: May 2014.
Department of Housing and Urban Development. Economic Analysis Statement: Qualified Mortgage Definition for Insured or Guaranteed Single Family Mortgages. Washington, D.C.: November 2013.
Department of Housing and Urban Development. Economic Analysis Statement: Qualified Mortgage Definition for Insured or Guaranteed Single Family Mortgages. Washington, D.C.: (undated).
Federal Housing Finance Agency. "Qualified Residential Mortgages." Mortgage Market Note, 11-02. Washington, D.C.: April 11, 2011.
Fitch Ratings. U.S. RMBS Qualified and Nonqualified Mortgage Criteria. New York, N.Y.: March 2014.
Goldman Sachs. "Assessing the Impact of QM." The Mortgage Analyst. New York, N.Y.: October 10, 2013.
Moody's Analytics. Moody's Approach to Assessing Incremental Risk Posed by the Ability-to-Repay Rules in U.S. RMBS. New York, N.Y.: March 25, 2014.
Moody's Analytics. Cost of Housing Finance Reform. New York, N.Y.: November 2013.
Mortgage Bankers Association. MBA Comment on Reopened Comment Period for QM Rule. Washington, D.C.: July 9, 2012.
Office of Financial Research. 2013 Annual Report. Washington, D.C.: December 2013.
Quercia, Roberto G., Lei Ding, and Carolina Reid. Balancing Risk and Access: Underwriting Standards and Qualified Residential Mortgages. Chapel Hill, N.C.: Center for Community Capital at the University of North Carolina, and Center for Responsible Lending, January 2012.
Reid, Carolina, and Roberto G. Quercia. Risk, Access, and the QRM Reproposal. Chapel Hill, N.C.: Center for Community Capital at the University of North Carolina, September 2013.
Schwartz, Faith A., Margarita S. Brose, and Stuart I. Quinn. QRM and Risk Retention Standards: Foundations for a Sound Housing Market. Irvine, Calif.: CoreLogic, December 2013.
Schwartz, Faith A., and Margarita S. Brose. ATR/QM Standards: Foundations for a Sound Housing Market. New York, N.Y.: CoreLogic, February 12, 2013.
Seidman, Ellen, Jun Zhu, and Laurie Goodman. "QRM, Alternative QRM: Loan Default Rates." Urban Wire: Housing and Housing Finance. Washington, D.C.: Urban Institute, October 17, 2013.
Seidman, Ellen, Jun Zhu, and Laurie Goodman. "QRM vs. Alternative QRM: Quantifying the Comparison." Urban Wire: Housing and Housing Finance. Washington, D.C.: Urban Institute, October 7, 2013.
Standard and Poor's. Methodology and Assumptions for Adjusting RMBS Loss Severity Calculations for Loans Covered under Ability-to-Repay and Qualified Mortgage Standards. New York, N.Y.: January 23, 2014.
The Center for Responsible Lending, Consumer Federation of America, and The Leadership Council on Civil and Human Rights. Request for Comment on Qualified Mortgage. Durham, N.C.: July 9, 2012.
White, Joshua, and Scott Bauguess. Qualified Residential Mortgage: Background Data Analysis on Credit Risk Retention. Washington, D.C.: Securities and Exchange Commission (Division of Economic and Risk Analysis), August 2013.
In addition to the individual named above, Harry Medina (Assistant Director), Anne A. Akin (Analyst-in-Charge), Emily Chalmers, Tisha D. Derricotte, Donald P. Hirasuna, Robert D. Lowthian, John McGrail, Barbara Roesmann, Jena Sinkfield, Andrew Stavisky, and Jim Vitarello made major contributions to this report. Timothy Bober and Janet Eackloff also contributed to this report. | Amid concerns that risky mortgage products and poor underwriting standards contributed to the recent housing crisis, Congress included mortgage reform provisions (QM and QRM) in the Dodd-Frank Wall Street Reform and Consumer Protection Act.
CFPB's regulations establishing standards for QM loans became effective in January 2014. More recently, six agencies jointly issued the final QRM rule that will become effective in December 2015. GAO was asked to review possible effects of these regulations. Among its objectives, this report (1) discusses views on the expected effects of the QM and QRM regulations and (2) examines the extent of agency planning for reviewing the regulations' effects. GAO's methodologies included identifying and reviewing academic, industry, and federal agency analyses on the expected effects of the regulations. GAO also reviewed federal guidance on retrospective reviews and interviewed agency officials to assess agency efforts to examine the effects of the QM and QRM regulations. Federal agency officials, market participants, and observers estimated that the qualified mortgage (QM) and qualified residential mortgage (QRM) regulations would have limited initial effects because most loans originated in recent years largely conformed with QM criteria. The QM regulations, which address lenders' responsibilities to determine a borrower's ability to repay a loan, set forth standards that include prohibitions on risky loan features (such as interest-only or balloon payments) and limits on points and fees. Lenders that originate QM loans receive certain liability protections. Securities collateralized exclusively by residential mortgages that are "qualified residential mortgages" are exempt from risk-retention requirements. The QRM regulations align the QRM definition with QM; thus, securities collateralized solely by QM loans are not subject to risk-retention requirements. The analyses GAO reviewed estimated that the regulations would have limited effects on the availability of mortgages for most borrowers and that any cost increases (for borrowers, lenders, and investors) would mostly stem from litigation and compliance issues. According to agency officials and observers, the QRM regulations were unlikely to have a significant initial effect on the availability or securitization of mortgages in the current market, largely because the majority of loans originated were expected to be QM loans. However, questions remain about the size and viability of the secondary market for non-QRM-backed securities. Agencies have begun planning their reviews of the QM and QRM regulations (due by January 2019 and commencing in December 2019, respectively); however, these efforts have not included elements important for conducting effective retrospective reviews. Federal guidance encourages agencies to preplan their retrospective reviews and carefully consider how best to promote empirical testing of the effects of rules. To varying degrees, the relevant agencies have identified outcomes to examine, potential data sources, and analytical methods. But existing data lack important information relevant to the regulations (such as loan performance or borrower debt-to-income ratios), and planned data enhancements may not be available before agencies start the reviews. The Bureau of Consumer Financial Protection (CFPB) has proposed expanding Home Mortgage Disclosure Act data reporting requirements, but the earliest that the enhanced data will be available is 2017. Similarly, the Department of Housing and Urban Development (HUD) identified how it intends to examine its QM regulations and some potential data sources but has yet to determine how it would measure the effects of these regulations, including metrics, baselines, and analytical methods.
Agencies also have not specified how they will conduct their reviews, including determining which data and analytical methods to use. Finalizing plans to retrospectively review the mortgage regulations will position the agencies to better measure the effects of the QM and QRM regulations and identify any unintended consequences. Additionally, the agencies could better understand data limitations and methodological challenges and have sufficient time to develop methods to deal with these limitations and challenges. CFPB, HUD, and the six agencies responsible for the QRM regulations should complete plans to review the QM and QRM regulations, including identifying specific metrics, baselines, and analytical methods. CFPB, HUD, and one QRM agency—the Federal Deposit Insurance Corporation—concurred or agreed with the recommendations. The other QRM agencies did not explicitly agree with the recommendations, but outlined ongoing efforts to plan their reviews. |
On May 4, 2000, the National Park Service initiated a prescribed burn on federal land at Bandelier National Monument, New Mexico, in an effort to reduce the threat of wildfires in the area. The plan was to burn up to 900 acres. On May 5, 2000, the prescribed burn exceeded the capabilities of the National Park Service, spread to other federal and nonfederal land, and was characterized as a wildfire. By May 7, 2000, the fire had grown in size and caused evacuations in and around Los Alamos, New Mexico. On May 13, 2000, the President issued a major disaster declaration, and subsequently the Secretary of the Interior and the National Park Service assumed responsibility for the fire and the subsequent loss of federal, state, local, tribal, and private property. The fire, known as the Cerro Grande Fire, burned approximately 48,000 acres in four counties and two Indian pueblos, destroyed over 200 residential structures, and forced the evacuation of more than 18,000 residents. On July 13, 2000, the President signed the Cerro Grande Fire Assistance Act (CGFAA) into law. Under the CGFAA, each claimant is entitled to be compensated by the United States government for certain injuries and damages that resulted from the Cerro Grande fire. The Congress appropriated $455 million to the Federal Emergency Management Agency (FEMA) for the payment of such claims and $45 million for the administration of the Cerro Grande program. The act requires that GAO conduct annual audits of the payment of all claims made and annually report the results of the audits to the Congress by July 13, beginning in fiscal year 2001. The act also requires that our report include a review of all subrogation claims for which insurance companies have been paid or are seeking payment as subrogees under this act. FEMA is also required to annually submit a report to Congress that provides information about claims submitted under the act. This report is to include the amounts claimed, a description of the nature of the claims, and a status or disposition of the claims, including the amounts paid. FEMA's first report is to be issued by August 28, 2001, based on the issuance of the program rules as discussed below. The CGFAA required FEMA to promulgate and publish implementing regulations for the Cerro Grande program within 45 days of enactment of the law. On August 28, 2000, FEMA published the Disaster Assistance: Cerro Grande Fire Assistance; Interim Final Rule in the Federal Register (Interim Final Rules). FEMA followed the Interim Final Rule with a set of implementing policies and procedures on November 13, 2000. FEMA updated these policies and procedures in January and March 2001. After reviewing public comments on the interim rule, FEMA finalized and published The Disaster Assistance: Cerro Grande Fire Assistance; Final Rule (Final Rule) on March 21, 2001. In performing our review, we considered the Standards for Internal Control in the Federal Government. To gain an understanding of the claim review and payment process established by the Office of Cerro Grande Fire Claims (OCGFC), we interviewed FEMA and OCGFC officials, General Adjusters Bureau (GAB) Robins officials, and staff from FEMA's Office of Inspector General (OIG). We also reviewed the requirements of the CGFAA, the interim and final regulations published in the Federal Register, OCGFC's policies and procedures manual, an actuarial report, FEMA's fiscal year 2000 audited financial statements, the current-year Cerro Grande trial balance, and other documentation concerning the Cerro Grande program.
We also obtained, reviewed, and considered the results of numerous desk reviews completed by FEMA's OIG. Finally, we selected three separate random probability samples from the population of claim payments to determine whether policies and procedures in place were being followed and to ensure that they provided adequate internal controls over appropriated federal funds. We did not assess the reasonableness of individual payments made. Our first sample of 59 partial claim payments was drawn from a population of 1,195 partial claim payments made through November 24, 2000, that were processed under the August 28, 2000, Interim Final Rules. The second random probability sample consisted of 59 items drawn from a population of 488 partial payments made between November 25 and December 28, 2000. These payments were processed using the policies and procedures adopted by the OCGFC on November 13, 2000. For these two samples, we reviewed the claim files to determine if all the forms and key signatures required to process a partial payment had been obtained. Our third sample consisted of 63 final claim payments selected using a stratified sampling approach from a population of 255 final payments made through March 23, 2001. Fifty-eight of these 63 items were randomly selected from the population of claim payments in which the amount paid was less than $40,000. The remaining five items comprised all claim payments greater than or equal to $40,000. For these 63 items, we reviewed the claim files not only to see if all the required forms and signatures existed but also to look for evidence that victims' claims had been investigated to determine their validity and that the payment amounts were adequately supported and reasonable. As mentioned previously, we were not able to audit subrogated claim payments because, as of June 2001, no subrogated claims had been processed or paid. We did, however, inquire about and obtain information on the number and dollar amount of subrogated claims submitted to the OCGFC through June 2001. We also inquired about the status of the policies and procedures being drafted by the OCGFC to process these claims. Our work was conducted in Santa Fe, New Mexico, and Washington, D.C., from December 2000 through June 2001 in accordance with generally accepted government auditing standards. We requested agency comments on a draft of this report from the Director of FEMA. FEMA provided certain technical comments orally, which we have incorporated as appropriate. FEMA's Assistant Director of the Readiness, Response and Recovery Directorate also provided written comments in response to our draft on behalf of FEMA and OCGFC, which are reproduced in appendix I. We evaluated the written comments in the "Agency Comments and Our Evaluation" section of this report. The OCGFC has established and generally followed a systematic process for the payment of claims resulting from the Cerro Grande fire. However, this process, as illustrated in figure 1 and described below, needs to be strengthened. We found that certain key procedures used by the claims reviewers were not formally documented and that actions taken by the claims reviewers to verify claimant-provided information and determine claim reimbursements were typically not documented. We also noted that OCGFC is still developing policies and procedures for processing and paying certain types of claims in the coming months.
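To illustrate the stratified selection just described, the sketch below shows one way the third sample could be drawn in code. It is a simplified illustration only, not the procedure GAO actually used: the claim records, field names, and dollar amounts are hypothetical, and the only grounded parameters are the $40,000 threshold, the 58 randomly selected payments, and the certainty stratum consisting of all payments at or above that threshold.

```python
import random

# Simplified illustration only -- not GAO's actual selection code. Assumes each
# final claim payment is represented as a dict with a dollar "amount" field.
def select_final_payment_sample(payments, threshold=40_000, n_random=58, seed=2001):
    """Mirror the stratified design described above: keep every payment at or
    above the threshold (a certainty stratum) and draw a simple random sample
    of n_random payments from those below the threshold."""
    certainty = [p for p in payments if p["amount"] >= threshold]
    below = [p for p in payments if p["amount"] < threshold]
    rng = random.Random(seed)
    return certainty + rng.sample(below, n_random)

# Hypothetical population of 255 final payments for demonstration.
gen = random.Random(0)
population = [{"claim_id": i, "amount": gen.uniform(500, 60_000)} for i in range(255)]
sample = select_final_payment_sample(population)
print(len(sample))  # 58 random payments plus however many fall at or above $40,000
```

Reviewing every payment above a dollar threshold with certainty, while sampling the remainder at random, is a common design choice that ensures the largest payments are always examined while keeping the total review workload manageable.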
The Standards for Internal Control in the Federal Government specifies that internal control and all transactions and other significant events need to be clearly documented and readily available for examination. Control activities should be documented in management directives, administrative policies, or operating manuals that are properly maintained. Based on our test work and discussions with OCGFC officials, we determined that certain payment determinations were based on policies and procedures that were not formally documented. OCGFC officials told us that certain policies have only been documented in e-mails or in notes from staff meetings. This can result in inconsistent determinations of claim amounts and raises questions regarding the basis for certain claim determinations. For example, we identified inconsistencies in the calculation of lost wages for certain individuals. In one case, the reimbursed amount was determined based on the claimant's gross wages, whereas in another case, the claimant was reimbursed based on net wages. OCGFC officials stated that, as part of renegotiating the contract with its claims adjusting firm, OCGFC will require that all policies and procedures be identified, formally documented, and updated monthly. In addition to the lack of formally documented procedures, there generally was insufficient documentation in the claim files to enable us to determine what steps, if any, the claims reviewers hired under contract by OCGFC had taken to verify certain key data provided by the claimants or to determine the reasonableness of amounts claimed. This was the case for 43 of the 63 final claim payments we tested. Projecting these results to the universe of final payments made as of March 23, 2001, we can conclude with 95 percent confidence that between 57 and 78 percent of the final payments had similar documentation deficiencies. For the other 20 cases where we determined the documentation was sufficient, the payments usually involved victims' claims for reimbursement of insurance deductibles and/or flood insurance premiums. In these cases, the claim payments were evidenced by documentation provided by the insurer. The following examples illustrate the types of documentation deficiencies we observed during our review of the case files. In numerous cases we examined, the fire forced victims to evacuate their homes. OCGFC compensated these victims for the "loss of use" of their homes based upon a square-footage methodology. In these cases, we found in the claim files the calculations used to determine the amount of compensation. What we did not typically find was evidence to substantiate the square footage of the home used in the calculations. We did not see evidence that the claims adjusters had visited the property to obtain measurements or that they had obtained or considered other documentation, such as property records or insurance policies, that would substantiate the square footage. For cases where personal property losses occurred, we commonly found lists or spreadsheets prepared by the claimants listing the property destroyed by the fire as well as the claimants' estimates of the costs of replacing these items. What was commonly lacking, however, was documentation of the steps the claims reviewers took to verify that the victims owned the items claimed to have been lost in the fire or to assess the reasonableness of the replacement costs.
Most often, the amount paid for personal property was simply the total listed on the spreadsheet, with no evidence that any of the items or amounts were reviewed or substantiated. Officials we spoke with from FEMA's Flood Insurance Administration told us that its claims reviewers would routinely take steps to verify and document that the information provided by the claimant was valid and that the costs were reasonable before recommending a payment. Such steps may include confirmation of purchases with a vendor or store, viewing photographs taken prior to the fire, or performing reasonableness tests, such as determining if items claimed are typical household items. While OCGFC's policy manual and its contract with GAB Robins do require an investigation of the claim, there was frequently no evidence that such an investigation was performed. Only in a limited number of circumstances were we able to tell, based on documentation in the claims file, that a claims reviewer had investigated particular items to establish that a claimant did in fact own an item or that the amount requested to replace the item was reasonable. For example, in one case, the claims reviewer's notes showed that he questioned and ultimately reduced the amount claimed for an autographed poster from $1,000 to $5.50 after researching the poster's value on the internet. In another example of documentation deficiencies, a claims reviewer recommended that certain medical expenses, although evidenced by third-party receipts and a physician's letter, not be paid because the reviewer determined that the claimant had a pre-existing condition. OCGFC officials told us that this recommendation was forwarded to an OCGFC authorizing official, who reversed the claims reviewer's decision because the OCGFC policies provide for such a payment under certain circumstances. However, the claim file contained no explanation as to why the initial recommendation of the claims reviewer was reversed. Key decisions such as this should be clearly documented in the claim files to facilitate the supervisory review process and establish an adequate audit trail as required by the Standards for Internal Control in the Federal Government. OCGFC officials advised us that the GAB Robins claims reviewers use an automated claims information system (ACIS) to document their interactions with the claimants and to document certain aspects of their claim investigation work. Such an automated system could help mitigate some of the documentation deficiencies we have discussed in this report. We requested that OCGFC provide us with the available ACIS information for specific cases in our sample. We reviewed the ACIS reports and found that the reports documented GAB Robins' contacts with the fire victims but provided little or no information about what steps had been taken to verify the validity or reasonableness of the claim payments for these particular cases. In addition, the OIG reviews of OCGFC claim payments, like our work, raised a number of questions regarding the adequacy of documentation contained in the case files and also questioned whether in certain circumstances OCGFC had established written policies on how to handle certain claims. An OIG official told us that he has not found ACIS to be helpful in resolving the documentation issues he has identified.
Without sufficient documentation, claims supervisors and authorizing officials can not properly review the work of the claims reviewers, and the risk of improper payments is increased. In addition, as stated above, the lack of documentation in the claim files precluded us from determining what steps the claims reviewers had taken to determine the validity and reasonableness of most of the claim payments we tested. As of June 2001, OCGFC was in the process of developing certain key policies and procedures for the payment of various claim types. Established policies and procedures that are both documented and communicated throughout an organization are a key component of an effective system of internal control. As previously discussed, OCGFC is still developing policies and procedures for paying subrogated claims. In addition, OCGFC has deferred formulation of policies on how to compensate property owners for unrealized declines in their property values until the Los Alamos real estate market is further analyzed. Under the CGFAA, fire victims have until August 28, 2002, to file Notices of Loss for injuries resulting from the fire. OCGFC officials acknowledged that finalizing all necessary policies and procedures is important and stated that they have and will continue to approach policy development in a prioritized manner. For instance, since the CGFAA specified subrogated claim payments were not to occur until after the payment of other claims, OCGFC placed a lower priority on the development of these policies and procedures. OCGFC officials stressed that no claims have been paid without first having put policies and procedures in place and that so far all claims have been paid within 180 days after the receipt of a Notice of Loss as specified in the CGFAA. As previously discussed, the CGFAA appropriated $455 million to compensate victims for losses resulting from the Cerro Grande fire. In its fiscal year 2000 audited financial statements, FEMA recognized an estimated claim liability of $437 million. This amount represents the known probable and estimable losses that were unpaid as of September 30, 2000, and is based on the August 28, 2000, Interim Final Rules published in the Federal Register. In addition, FEMA reported that there is a reasonable possibility that additional liabilities may have been incurred. However, these amounts could not yet be estimated because the potential claims were unknown or had not been defined under the CGFAA “Interim Final Rules.” FEMA’s estimated claim liability is based largely on a December 5, 2000, independent actuary’s report. This report has not been updated to reflect the policy changes contained in the final program rules published in March 2001 or to reflect new policies implemented since December. FEMA officials told us that because this report is costly to prepare, they only intend to update the study annually for financial statement purposes. Based on our analysis of this actuarial report, we believe there is a possibility that additional funding may be required to satisfy the claims likely to arise as a result of the fire. The actuary’s study did not provide estimates for certain categories of losses that could be significant. For example, no estimates of loss were included for business losses outside of the immediate fire area or for damaged or lost sacred Pueblo (tribal) lands. 
At least one Pueblo has filed a Notice of Loss that includes a potential claim for damages to unique cultural and religious sites as well as environmental damages to the Pueblo grounds. The Notice of Loss also mentions potential claims for loss of range productivity and agricultural lands, cultural plants, big game, and archeological and cultural sites. In addition, to date, no estimates for devaluation of residential and commercial real estate and Pueblo lands as a result of the fire have been made. OCGFC contracted with an independent public accounting firm to analyze the residential real estate in the County of Los Alamos. The purpose of this analysis was to assess whether the value of residential property that was not physically damaged by the fire declined as a result of the fire, and if so, which communities and types of housing were most affected. The results of this study, released on March 28, 2001, indicated that single-family residences in Los Alamos County sold between May 10, 2000, and January 31, 2001, experienced an average diminution in value in the range of 3 to 11 percent. In addition, the study concluded that other types of housing, including quads, duplexes, condos, and townhouses in the eastern area of the City of Los Alamos, also appeared to have lost value. As of March 28, 2001, Los Alamos residents had filed approximately 25 claims for realized and unrealized property value diminution. On April 2, 2001, OCGFC issued policy guidance on how it intends to compensate claimants who have realized losses. The policy states that unrealized loss claims will not be addressed until an additional follow-up analysis of the residential real estate market in Los Alamos is completed during the second quarter of 2002. Until this study is completed and policies are developed that address compensation for unrealized losses, uncertainties about the potential cost will continue to exist. Also, in the CGFAA Final Rule published in the Federal Register on March 21, 2001, FEMA increased the amount of allowable compensation for miscellaneous and incidental expenses incurred in the claims process. These payments are made after FEMA has obtained a properly executed Release and Certification form from the claimant. Under the interim rule that was in effect when the original actuarial report was issued, claimants (mostly individuals and businesses) were reimbursed for 1 percent of their insured and uninsured losses (excluding flood insurance premiums), with a minimum payment of $100 and a maximum payment of $3,000. Under the final rule, claimants now receive a payment equal to 5 percent of their insured and uninsured losses, subject to a $100 minimum and a $15,000 maximum payment. This policy change will increase the total amount paid under the CGFAA and increase the likelihood that additional funding may be needed. Finally, OCGFC officials stated that the volume of claims received has been greater than originally anticipated. As of June 20, 2001, OCGFC officials told us that approximately 14,000 claims had been submitted. The initial estimates we were able to obtain anticipated only 1,200 to 1,500 claims being submitted. Therefore, the volume of claims submitted through mid-June of this year is approximately 10 times the initial estimates. This further calls into question the adequacy of the $455 million appropriated to pay victims' claims.
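The change in the miscellaneous and incidental expense allowance described above amounts to a capped-percentage formula: a fixed share of insured and uninsured losses, bounded by a floor and a ceiling. The sketch below is a simplified illustration of how the interim and final rule parameters compare for a single hypothetical claimant; the $120,000 loss figure is invented for demonstration, and the function is not FEMA's actual payment computation.

```python
def incidental_payment(covered_losses, rate, minimum, maximum):
    """Percentage of insured and uninsured losses, bounded below by a floor and above by a cap."""
    return min(max(covered_losses * rate, minimum), maximum)

# Interim rule parameters: 1 percent of losses, $100 minimum, $3,000 maximum.
# Final rule parameters:   5 percent of losses, $100 minimum, $15,000 maximum.
losses = 120_000  # hypothetical claimant with $120,000 in insured and uninsured losses
interim = incidental_payment(losses, 0.01, 100, 3_000)   # -> 1,200
final = incidental_payment(losses, 0.05, 100, 15_000)    # -> 6,000
print(interim, final)
```

Because the rate and the cap both increased while the floor stayed at $100, the final rule payment is at least as large as the interim payment at every loss level, which is consistent with the report's observation that the change will increase the total amount paid under the CGFAA.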
While the federal government has accepted responsibility for the Cerro Grande fire and enacted legislation to expeditiously compensate those injured by the fire, it is incumbent on FEMA as the administering agency to establish an effective system of internal control to safeguard the funds appropriated for the Cerro Grande program. The CGFAA lays out a framework to establish such accountability by requiring FEMA to determine that victims' injuries and losses occurred as a result of the fire and to determine the amount of allowed compensation. In addition, the act requires that we conduct annual audits of all claim payments. FEMA has established a process to review all claims submitted. However, this process, as currently implemented, does not provide adequate assurance that only valid claims were paid or that the amounts paid were reasonable because there is insufficient documentation of the steps taken to determine the validity and reasonableness of the claim amounts. This lack of documentation precluded us from determining what steps or actions the claims reviewers took to determine the validity and reasonableness of the claims we attempted to review and, more importantly, may limit or prevent FEMA officials responsible for approving the payments from obtaining assurance that the payments are proper. In addition, certain OCGFC policies and procedures for paying claims have either not yet been developed or have not been formally and centrally documented. Beyond identifying deficiencies in FEMA's claims process, our work raised questions about whether the $455 million appropriation to pay victims' claims will be sufficient. The current estimate of the government's liability ($437 million) does not include estimates for all claim types and was determined prior to the implementation of the final rules for compensating fire victims. FEMA has not re-estimated the government's liability in light of significant changes that have occurred since December 2000. In order to strengthen the claim review and approval process, we recommend that the Director of FEMA direct the OCGFC to take the following actions:
Require claims reviewers to document all steps and procedures they perform to determine the validity of a claim and the amount recommended for payment.
Review and consolidate all existing informal guidance and incorporate this guidance into a set of formally documented policies and procedures that are regularly updated and distributed to all staff responsible for the claims review and award determination process.
Establish, as expediently as possible, standardized policies and procedures to address claims for which no policy currently exists.
Based on the information currently available, re-estimate the remaining claims to determine if there is sufficient funding available to fulfill the objectives of the CGFAA. In the future, update the estimate as necessary to reflect new claims information or changes in key policies and procedures.
FEMA, in a letter from its Assistant Director of the Readiness, Response and Recovery Directorate, stated that FEMA and OCGFC were pleased that this report recognized that a systematic process for paying fire victims' claims in accordance with the CGFAA was established and generally followed.
However, FEMA expressed concern that our report did not provide a sufficiently balanced view of its efforts to develop and implement the program since its inception 9 months ago. FEMA also provided comments on areas of the report where FEMA and OCGFC took issue with our findings, as well as comments on its progress. FEMA did not specifically comment on our recommendations. FEMA's comments are reproduced in appendix I and discussed in more detail below. FEMA expressed the need for further clarification on several of our comments and findings regarding (1) the level of documentation contained in claim files and (2) the policies, procedures, and methodologies used for calculating losses. FEMA stated that it believes guidance and procedures necessary to support claims determination are present and are in use. Our evaluation of FEMA's comments follows. We amplified the discussion on several topics in our report in response to these comments. With regard to our finding that OCGFC's current policies do not cover all needed procedures and do not require sufficient evidentiary documents, FEMA stated that, consistent with the resources provided, OCGFC has responded in a timely and energetic manner to emerging policy and procedure needs for a first-time program. FEMA also stated that claims reviewers have responded to its direction to improve the claim file documentation. While we recognize the unique nature of this program, the necessity for documentation that proper policies and procedures have been carried out still exists. The documentation contained in the claim files during the period of our audit did not provide us with a basis for determining what steps, if any, the claims reviewers had taken to determine the validity and reasonableness of most claim payments we attempted to audit. Further, and more importantly, because of the condition of the files, FEMA officials cannot effectively carry out their responsibilities for assessing the contractor's work to determine the validity and reasonableness of the amounts claimed. As a result, inconsistent claims determinations can occur, and there is no assurance that the amounts paid are proper. FEMA further stated that the key separation of duties and multiple layers of review in its process (reviewer, supervisor, and authorizing official) constitute substantial documentary evidence of the validation of an individual claim. The main point of our report is that sufficient documentary support is either not obtained or not written down to evidence the specific procedures performed by the contracted claims reviewers. Therefore, the effectiveness of the supervisory review process, even though it consists of multiple layers of review, is diminished and does not provide reasonable assurance that the claim payment determinations were proper. Better documentation is needed so that OCGFC officials are able to properly oversee the work of the contractors and to make a fully informed decision concerning approval of a claimed amount for payment.
FEMA stated that the legislative history of the CGFAA provides that "the regulation should not be overly burdensome for the claimants and should provide an understandable and straightforward path to settlement." In this regard, our recommendations in no way put any additional burden on the victims of the fire, but rather are directed toward obtaining reasonable assurance that the procedures performed by OCGFC and its contractors during the claim review and payment process are properly documented and provide a reasonable basis for payment decisions under the circumstances. Regarding our example of insufficient documentation of square footage, FEMA stated that GAB Robins claims reviewers followed industry standards and that OCGFC required complete documentation of steps taken by claims reviewers during verification of square footage and personal property losses. While we were told that the claims reviewers performed various procedures to determine the validity of the claimed amounts, we typically found that there was no evidence of these steps in the case files. Therefore, there is no basis for any after-the-fact determinations on whether enough work was done to assess the reasonableness of the claims for square footage or other types of losses. In commenting on our observation regarding the development of key policies and procedures, FEMA stated that the OCGFC policies that were in effect during our review were detailed and extensive and were intended to help first those whose homes had been lost or damaged by the fire, those who suffered interruption of their businesses, and those who were evacuated in response to the fire. FEMA further stated that other policies were developed when the need arose and only after consulting with several other U.S. government agencies to ensure consistency with other policies and with positions taken in lawsuits. We revised our report to make it clear that we do not take issue with FEMA's development of policies in a prioritized manner. However, we continue to believe that FEMA should work expediently to develop and finalize all remaining policies so that victims know what losses are eligible for compensation and how their compensation will be determined well in advance of the August 28, 2002, deadline for filing Notices of Loss. We are sending copies of this report to the congressional committees and subcommittees responsible for FEMA-related issues; the Director of the Federal Emergency Management Agency; the Director of the Office of Cerro Grande Fire Claims; and the Inspector General of the Federal Emergency Management Agency. Copies will also be made available to others upon request. If you have questions about this report, please contact me at (202) 512-9508 or Steven Haughton, Assistant Director, at (202) 512-5999. Other key contributors to this assignment were Julia Duquette, Phillip McIntyre, and Christine Fant. | While the federal government has accepted responsibility for the Cerro Grande fire and enacted the Cerro Grande Fire Assistance Act (CGFAA) to expeditiously compensate those injured by the fire, it is incumbent on the Federal Emergency Management Agency (FEMA) as the administering agency to establish an effective system of internal control to safeguard the funds appropriated for the Cerro Grande program. The act lays out a framework to establish such accountability by requiring FEMA to determine that victims' injuries and losses occurred as a result of the fire and to determine the amount of allowed compensation.
FEMA has established a process to review all claims submitted. However, this process as currently implemented does not provide adequate assurance that only valid claims were paid or that the amounts paid were reasonable because there is insufficient documentation of the steps taken to determine the validity and reasonableness of the claim amounts. In addition, policies and procedures for paying claims have either not yet been developed or have not been formally and centrally documented. |
Port security overall has improved because of the development of organizations and programs such as Area Maritime Security Committees (AMSCs), Area Maritime Security Plans (area plans), maritime security exercises, and the International Port Security Program, but challenges to successful implementation of these efforts remain. Additionally, agencies may face challenges addressing the additional requirements directed by the SAFE Port Act, such as a provision that the Department of Homeland Security (DHS) establish interagency operational centers at all high-risk priority ports. AMSCs and the Coast Guard's sector command centers have improved information sharing, but the types of information and the ways it is shared vary. Area plans, limited to security incidents, could benefit from unified planning to include an all-hazards approach. Maritime security exercises would benefit from timely and complete after-action reports, increased collaboration across federal agencies, and broader port-level coordination. The Coast Guard's International Port Security Program is currently evaluating the antiterrorism measures maintained at foreign seaports. Two main types of forums have developed for agencies to coordinate and share information about port security: area committees and Coast Guard sector command centers. AMSCs serve as a forum for port stakeholders, facilitating the dissemination of information through regularly scheduled meetings, issuance of electronic bulletins, and sharing of key documents. The Maritime Transportation Security Act of 2002 (MTSA) provided the Coast Guard with the authority to create AMSCs—composed of federal, state, local, and industry members—that help to develop the area plan for the port. As of August 2007, the Coast Guard had organized 46 AMSCs. As part of an ongoing effort to improve its awareness of the maritime domain, the Coast Guard developed 35 sector command centers, four of which operate in partnership with the U.S. Navy. Each has flexibility to assemble and operate in a way that reflects the needs of its port area, resulting in variations in the number of participants, the types of state and local organizations involved, and the way in which information is shared. Examples of information shared include assessments of vulnerabilities at specific port locations, information about potential threats or suspicious activities, and Coast Guard strategies intended for use in protecting key infrastructure. We have previously reported that both of these types of forums have helped foster cooperation and information sharing. We further reported that AMSCs provided a structure to improve the timeliness, completeness, and usefulness of information sharing between federal and nonfederal stakeholders. These committees improved upon previous information-sharing efforts because they established a formal structure and new procedures for sharing information. In contrast to AMSCs, the Coast Guard's sector command centers can provide continuous information about maritime activities and involve various agencies directly in operational decisions using this information. We have reported that these centers have improved information sharing and that the types of information and the way information is shared vary at these centers depending on their purpose and mission, leadership and organization, membership, technology, and resources. The SAFE Port Act called for the establishment of interagency operational centers, directing the Secretary of DHS to establish such centers at all high-priority ports no later than 3 years after the act's enactment.
The act required that the centers include a wide range of agencies and stakeholders and carry out specified maritime security functions. In addition to authorizing the appropriation of funds and requiring DHS to provide the Congress a proposed budget and cost-sharing analysis for establishing the centers, the act directed the new interagency operational centers to utilize the same compositional and operational characteristics as existing sector command centers. According to the Coast Guard, none of the 35 centers meets the requirements set forth in the SAFE Port Act. Nevertheless, the four centers the Coast Guard operates in partnership with the Navy are a significant step in meeting these requirements, according to a senior Coast Guard official. The Coast Guard is currently piloting various aspects of future interagency operational centers at existing centers and is also working with multiple interagency partners to further develop this project. DHS has submitted the required budget and cost-sharing analysis proposal, which outlines a 5-year plan for upgrading its centers into future interagency operations centers to continue to foster information sharing and coordination in the maritime domain. The Coast Guard estimates that the total acquisition cost of upgrading 24 sectors that encompass the nation's high-priority ports into interagency operations centers will be approximately $260 million, including investments in information systems, sensor networks, and facility upgrades and expansions. According to the Coast Guard, future interagency operations centers will allow the Coast Guard and its partners to use port surveillance combined with tactical and intelligence information, and to share these data with port partners working side by side in expanded facilities. In our April 2007 testimony, we reported on various challenges the Coast Guard faces in its information sharing efforts. These challenges include obtaining security clearances for port security stakeholders and creating effective working relationships with clearly defined roles and responsibilities. In our past work, we found that the lack of federal security clearances among area committee members had been routinely cited as a barrier to information sharing. In turn, this inability to share classified information may limit the ability to deter, prevent, and respond to a potential terrorist attack. The Coast Guard, which has lead responsibility for coordinating maritime information, has made improvements to its program for granting clearances to area committee members, and, as a result, additional clearances have been granted to members with a need to know. In addition, the SAFE Port Act includes a specific provision requiring DHS to sponsor and expedite security clearances for participants in interagency operational centers. However, the extent to which these efforts will ultimately improve information sharing is not yet known. As the Coast Guard expands its relationships with multiple interagency partners, collaborating and sharing information effectively under new structures and procedures will be important. While some of the existing centers achieved results with existing interagency relationships, other high-priority ports might face challenges establishing new working relationships among port stakeholders and implementing their own interagency operational centers.
Finally, addressing potential overlapping responsibilities—such as leadership roles for the Coast Guard and its interagency partners—will be important to ensure that actions across the various agencies are clear and coordinated. As part of its operations, the Coast Guard has also established additional activities to provide overall port security. The Coast Guard's operations order, Operation Neptune Shield, first released in 2003, specifies the level of security activities to be conducted. The order sets specific activities for each port; however, the amount of each activity is established based on the port's specific security concerns. Examples of security activities include conducting waterborne security patrols, boarding high-interest vessels, escorting vessels into ports, and enforcing fixed security zones. When a port security level increases, the amount of activity the Coast Guard must conduct also increases. The Coast Guard uses monthly field unit reports to indicate how many of its security activities it is able to perform. Our review of these field unit reports indicates that many ports are having difficulty meeting their port security responsibilities, with resource constraints being a major factor. In an effort to meet more of its security requirements, the Coast Guard uses a strategy that includes partnering with other government agencies, adjusting its activity requirements, and acquiring resources. Despite these efforts, many ports are still having difficulty meeting their port security requirements. The Coast Guard is currently studying what resources are needed to meet certain aspects of its port security program, but to enhance the effectiveness of its port security operations, a more comprehensive study may be needed to determine all of the additional resources and changes in strategy required to meet minimum security requirements. We will be issuing a report on this issue in the near future. Area plans—another MTSA requirement—and their specific provisions have been prescribed by regulation and Coast Guard directive. Implementing regulations for MTSA specified that area plans include, among other things, operational and physical security measures in place at the port under different security levels, details of the security incident command and response structure, procedures for responding to security threats, including provisions for maintaining operations in the port, and procedures to facilitate the recovery of the marine transportation system after a security incident. A Coast Guard Navigation and Vessel Inspection Circular (NVIC) provided a common template for area plans and specified the responsibilities of port stakeholders under them. As of September 2007, 46 area plans are in place at ports around the country. The Coast Guard approved the plans by June 1, 2004, and MTSA requires that they be updated at least every 5 years. The SAFE Port Act added a requirement that area plans address recovery issues by identifying salvage equipment able to restore operational trade capacity. This requirement was established to ensure that the waterways are cleared and the flow of commerce through United States ports is reestablished as efficiently and quickly as possible after a security incident. While the Coast Guard sets out the general priorities for recovery operations in its guidelines for the development of area plans, we have found that this guidance offers limited instruction and assistance for developing procedures to address recovery situations.
The Maritime Infrastructure Recovery Plan (MIRP) recognizes the limited nature of the Coast Guard's guidance and notes the need to further develop recovery aspects of the area plans. The MIRP provides specific recommendations for developing the recovery sections of the area plans. The area plans that we reviewed often lacked recovery specifics, and none had been updated to reflect the recommendations made in the MIRP. The Coast Guard is currently updating the guidance for the area plans and aims to complete the updates by the end of calendar year 2007 so that the guidance will be ready for the mandatory 5-year re-approval of the area plans in 2009. Coast Guard officials commented that any changes to the recovery section would need to be consistent with the national protocols developed for the SAFE Port Act. Additionally, related to recovery planning, the Coast Guard and U.S. Customs and Border Protection (CBP) have developed specific interagency actions focused on response and recovery. This should provide the Coast Guard and CBP with immediate security options for the recovery of ports and commerce. Further, area plans generally do not address natural disasters (i.e., they do not have an all-hazards approach). In a March 2007 report examining how ports are dealing with planning for natural disasters such as hurricanes and earthquakes, we noted that area plans cover security issues but not other issues that could have a major impact on a port's ability to support maritime commerce. As currently written, area plans are concerned with deterring and, to a lesser extent, responding to security incidents. We found, however, that unified consideration of all risks—natural and man-made—faced by a port may be beneficial. Because of the similarities between the consequences of terrorist attacks and natural or accidental disasters, much of the planning for protection, response, and recovery capabilities is similar across all emergency events. Considering terrorism and other threats together can thus enhance the efficiency of port planning efforts. This approach also allows port stakeholders to estimate the relative value of different mitigation alternatives. The exclusion of certain risks from consideration, or the separate consideration of a particular type of risk, raises the possibility that risks will not be accurately assessed or compared, and that too many or too few resources will be allocated toward mitigation of a particular risk. As ports continue to revise and improve their planning efforts, available evidence indicates that by taking a systemwide approach and thinking strategically about using resources to mitigate and recover from all forms of disaster, ports will be able to achieve the most effective results. Area plans provide a useful foundation for establishing an all-hazards approach. While the SAFE Port Act does not call for expanding area plans in this manner, it does contain a requirement that natural disasters and other emergencies be included in the scenarios to be tested in the Port Security Exercise Program. On the basis of our prior work, we found there are challenges in using area committees and plans as the basis for broader all-hazards planning. These challenges include determining the extent to which security plans can serve all-hazards purposes. We recommended that DHS encourage port stakeholders to use the existing security-oriented area committees and MTSA-required area plans to discuss all-hazards planning. DHS concurred with this recommendation.
The Coast Guard Captain of the Port and the area committee are required by MTSA regulations to conduct or participate in exercises to test the effectiveness of area plans annually, with no more than 18 months between exercises. These exercises—which have been conducted for the past several years—are designed to continuously improve preparedness by validating information and procedures in the area plan, identifying weaknesses and strengths, and practicing command and control within an incident command/unified command framework. In August 2005, the Coast Guard and the TSA initiated the Port Security Training Exercise Program (PortSTEP)—an exercise program designed to involve the entire port community, including public governmental agencies and private industry, and intended to improve connectivity of various surface transportation modes and enhance area plans. Between August 2005 and October 2007, the Coast Guard expected to conduct PortSTEP exercises for 40 area committees and other port stakeholders. Additionally, the Coast Guard initiated its own Area Maritime Security Training and Exercise Program (AMStep) in October 2005. This program was also designed to involve the entire port community in the implementation of the Area Maritime Security Plan (AMSP). Between the two programs, PortSTEP and AMStep, all Area Maritime Security Committees (AMSCs) have received a port security exercise each year since inception.

The SAFE Port Act included several new requirements related to security exercises, such as establishing a Port Security Exercise Program to test and evaluate the capabilities of governments and port stakeholders to prevent, prepare for, mitigate against, respond to, and recover from acts of terrorism, natural disasters, and other emergencies at facilities that MTSA regulates. The act also required the establishment of a port security exercise improvement plan process that would identify, disseminate, and monitor the implementation of lessons learned and best practices from port security exercises. Though we have not specifically examined compliance with these new requirements, our work in examining past exercises suggests that implementing a successful exercise program faces several challenges. These challenges include setting the scope of the program to determine how exercise requirements in the SAFE Port Act differ from area committee exercises that are currently performed. This is especially true for incorporating recovery scenarios into exercises. In this past work, we also found that Coast Guard terrorism exercises frequently focused on prevention and awareness, but often did not include recovery activities. According to the Coast Guard, with the recent emphasis on planning for recovery operations, it has held several exercises over the past year that have included in part, or solely, recovery activities. It will be important that future exercises also focus on recovery operations so that public and private stakeholders can address gaps that might otherwise hinder commerce after a port incident. Other long-standing challenges include completing after-action reports in a timely and thorough manner and ensuring that all relevant agencies participate. According to the Coast Guard, as the primary sponsor of these programs, it faces a continuing challenge in getting comprehensive participation in these exercises.

The security of domestic ports also depends upon security at foreign ports where cargoes bound for the United States originate.
To help secure the overseas supply chain, MTSA required the Coast Guard to develop a program to assess security measures in foreign ports and, among other things, recommend steps necessary to improve security measures in those ports. The Coast Guard established this program, called the International Port Security Program, in April 2004. Under this program, the Coast Guard and host nations review the implementation of security measures in the host nations’ ports against established security standards, such as the International Maritime Organization’s International Ship and Port Facility Security (ISPS) Code. Coast Guard teams have been established to conduct country visits, discuss security measures implemented, and collect and share best practices to help ensure a comprehensive and consistent approach to maritime security in ports worldwide. The conditions of these visits, such as timing and locations, are negotiated between the Coast Guard and the host nation. Coast Guard officials also make annual visits to the countries to obtain additional observations on the implementation of security measures and ensure deficiencies found during the country visits are addressed.

Both the SAFE Port Act and other congressional directions have called for the Coast Guard to increase the pace of its visits to foreign countries. Although MTSA did not set a time frame for completion of these visits, the Coast Guard initially set a goal to visit the approximately 140 countries that conduct maritime trade with the United States by December 2008. In September 2006, the conference report accompanying the fiscal year 2007 DHS Appropriations Act directed the Coast Guard to “double the amount” at which it was conducting its visits. Subsequently, in October 2006, the SAFE Port Act required the Coast Guard to reassess security measures at the foreign ports every 3 years. Coast Guard officials said they will comply with the more stringent requirements and will reassess countries on a 2-year cycle. With the expedited pace, the Coast Guard now expects to assess all countries by March 2008, after which reassessments will begin.

We are currently conducting a review of the Coast Guard’s International Port Security Program that evaluates the Coast Guard’s implementation of international enforcement programs. The report, expected to be issued in early 2008, will cover issues related to the program, such as the extent to which the program is using a risk-based approach in carrying out its work, what challenges the program faces as it moves forward, and the extent to which the observations collected during the country visits are used by other programs such as the Coast Guard’s port state control inspections and high-interest vessel boarding programs. As of September 2007, the Coast Guard reported that it has visited 109 countries under this program and plans to visit 29 more by March 2008. For the countries for which the Coast Guard has issued a final report, the Coast Guard reported that most had “substantially implemented the security code,” while a few countries were found to have not yet implemented the ISPS Code and will be subject to a reassessment or other sanctions. The Coast Guard also found several facilities needing improvements in areas such as access controls, communication devices, fencing, and lighting.
While our review is still preliminary, Coast Guard officials told us that to plan and prepare for the next cycle of reassessments that are to begin next year, they are considering modifying their current visit methodology to incorporate a risk-based approach to prioritize the order and intensity of the next round of country visits. To do this, they have consulted with a contractor to develop an updated country risk prioritization model. Under the previous model, the priority assigned to a country for a visit was weighted heavily towards the volume of U.S. trade with that country. The new model being considered is to incorporate other factors, such as corruption and terrorist activity levels within the countries. Program officials told us that the details of this revised approach have yet to be finalized.

Coast Guard officials told us that as they complete the first round of visits and move into the next phase of revisits, challenges still exist in implementing the program. One challenge identified was that the faster rate at which foreign ports will now be reassessed will require hiring and training new staff—a challenge the officials expect will be made more difficult because experienced personnel who have been with the program since its inception are being transferred to other positions as part of the Coast Guard’s rotational policy. These officials will need to be replaced with newly assigned personnel. Reluctance by some countries to allow the Coast Guard to visit their ports due to concerns over sovereignty was another challenge cited by program officials in completing the first round of visits. According to these officials, before permitting Coast Guard officials to visit their ports, some countries insisted on visiting and assessing a sample of U.S. ports. The Coast Guard was able to accommodate their request through the program’s reciprocal visit feature in which the Coast Guard hosts foreign delegations to visit U.S. ports and observe ISPS Code implementation in the United States. This subsequently helped gain the cooperation of the countries in hosting a Coast Guard visit to their own ports. However, as they begin to revisit countries as part of the program’s next phase, program officials stated that sovereignty concerns may still be an issue. Some countries may be reluctant to host a comprehensive country visit on a recurring basis because they believe the frequency—once every 2 to 3 years—is too high. Sovereignty also affects the conditions of the visits, such as timing and locations, because such visits are negotiated between the Coast Guard and the host nation. Thus the Coast Guard team making the visit could be precluded from seeing locations that are not in compliance.

Another challenge program officials cite is having limited ability to help countries build on or enhance their capacity to implement the ISPS Code requirements. For example, the SAFE Port Act required that GAO report on various aspects of port security in the Caribbean Basin. We earlier reported that although the Coast Guard found that most of the countries had substantially implemented the ISPS Code, some facilities needed to make improvements or take additional measures. In addition, our discussions with facility operators and government officials in the region indicated that assistance—such as additional training—would help enhance their port security.
Program officials stated that while their visits provide opportunities for them to identify potential areas to improve or help sustain the security measures put in place, other than sharing best practices or providing presentations on security practices, the program does not currently have the resources to directly assist countries with more in-depth training or technical assistance. To overcome this, program officials have worked with other agencies (e.g., the Departments of Defense and State) and international organizations (e.g., the Organization of American States) to secure funding for training and assistance to countries where port security conferences have been held (e.g., the Dominican Republic and the Bahamas). Program officials indicated that as part of reexamining the approach for the program’s next phase, they will also consider possibilities to improve the program’s ability to provide training and capacity building to countries when a need is identified.

To improve the security of individual facilities at ports, many long-standing programs are underway; however, new challenges to their successful implementation have emerged. The Coast Guard is required to conduct assessments of security plans and facility compliance inspections, but it faces staffing and training challenges in meeting the SAFE Port Act’s additional requirements, such as ensuring sufficient trained personnel and guidance to conduct facility inspections. TSA’s TWIC program has addressed some of its initial program challenges, but will continue to face additional challenges as the program rollout continues. Many steps have been taken to ensure that transportation workers are properly screened, but redundancies in various background checks have decreased efficiency and highlighted the need for increased coordination.

MTSA and its implementing regulations required owners and operators of certain maritime facilities (e.g., power stations, chemical manufacturing facilities, and refineries that are located on waterways and receive foreign vessels) to conduct assessments of their security vulnerabilities, develop security plans to mitigate these vulnerabilities, and implement measures called for in the security plans by July 1, 2004. Under the Coast Guard regulations, these plans are to include items such as measures for access control, responses to security threats, and drills and exercises to train staff and test the plan. The plans are “performance-based,” meaning that the Coast Guard has specified the outcomes it is seeking to achieve and has given facilities responsibility for identifying and delivering the measures needed to achieve these outcomes. Under MTSA, Coast Guard guidance calls for the Coast Guard to conduct one on-site facility inspection annually to verify continued compliance with the plan. The SAFE Port Act, enacted in 2006, required the Coast Guard to conduct at least two inspections—one of which was to be unannounced—of each facility annually. We currently have ongoing work that reviews the Coast Guard’s oversight strategy under MTSA and SAFE Port Act requirements. The report, expected later this year, will cover, among other things, the extent to which the Coast Guard has met its inspection requirements and found facilities to be in compliance with their security plans, the sufficiency of trained inspectors and guidance to conduct facility inspections, and aspects of the Coast Guard’s overall management of its MTSA facility oversight program, particularly documenting compliance activities.
Our work is preliminary. However, according to our analysis of Coast Guard records and statements from officials, the Coast Guard appears to have conducted facility compliance exams annually at most—but not all—facilities. Redirection of staff to a higher-priority mission, such as Hurricane Katrina emergency operations, may have accounted for some facilities not having received an annual exam. The Coast Guard also conducted a number of unannounced inspections—about 4,500 in 2006, concentrated in around 1,200 facilities—prior to the SAFE Port Act’s passage. According to officials we spoke with, the Coast Guard selected facilities for unannounced inspection based on perceived risk and inspection convenience (e.g., if inspectors were already at the facility for another purpose). The Coast Guard has identified facility plan compliance deficiencies in about one-third of facilities inspected each year, and the deficiencies identified are concentrated in a small number of categories (e.g., failure to follow the approved plan for ensuring facility access control, record keeping, or meeting facility security officer requirements). We are still in the process of reviewing the data the Coast Guard uses to document compliance activities and will have additional information in our forthcoming report.

Sectors we visited reported having adequate guidance and staff for conducting consistent compliance exams but, until recently, little guidance on conducting unannounced inspections, which are often incorporated into work while performing other mission tasks. In the absence of such guidance, the process for conducting unannounced inspections varied considerably across the sectors we visited. For example, inspectors in one sector found the use of a telescope effective in remotely observing facility control measures (such as security guard activities), but these inspectors primarily conduct unannounced inspections as part of vehicle patrols. Inspectors in another sector conduct unannounced inspections at night, going up to the security gate and querying personnel about their security knowledge (e.g., knowledge of high-security level procedures). As we completed our fieldwork, the Coast Guard issued a Commandant message with guidance on conducting unannounced inspections. This message may provide more consistency, but how the guidance will be applied and its impact on resource needs remain uncertain. Coast Guard officials said they plan to revise their primary circular on facility oversight by February 2008. They are also planning to revise MTSA regulations to conform to SAFE Port Act requirements in 2009 (in time for the reapproval of facility security plans) but are behind schedule.

We recommended in June 2004 that the Coast Guard evaluate its compliance inspection efforts taken during the initial 6-month period after July 1, 2004, and use the results to strengthen its long-term strategy for ensuring compliance. The Coast Guard agreed with this recommendation. Nevertheless, based on our ongoing work, it appears that the Coast Guard has not conducted a comprehensive evaluation of its oversight program to identify strengths or target areas for improvement after 3 years of program implementation. Our prior work across a wide range of public and private-sector organizations shows that high-performing organizations continuously assess their performance using information about the results of their activities.
For decision makers to assess program strategies, guidance, and resources, they need accurate and complete data reflecting program activities. We are currently reviewing the accuracy and completeness of Coast Guard compliance data and will report on this issue later this year.

The Secretary of DHS was required by MTSA to, among other things, issue a transportation worker identification card that uses biometrics, such as fingerprints, to control access to secure areas of seaports and vessels. When MTSA was enacted, TSA had already initiated a program to create an identification credential that could be used by workers in all modes of transportation. This program, called the TWIC program, is designed to collect personal and biometric information to validate workers’ identities, conduct background checks on transportation workers to ensure they do not pose a threat to security, issue tamper-resistant biometric credentials that cannot be counterfeited, verify these credentials using biometric access control systems before a worker is granted unescorted access to a secure area, and revoke credentials if disqualifying information is discovered, or if a card is lost, damaged, or stolen. TSA, in partnership with the Coast Guard, is focusing initial implementation on the maritime sector. We have previously reported on the status of this program and the challenges that it faces. Most recently, we reported that TSA has made progress in implementing the TWIC program and addressing problems we previously identified regarding contract planning and oversight and coordination with stakeholders. For example, TSA reported that it added staff with program and contract management expertise to help oversee the contract and developed plans for conducting public outreach and education efforts.

The SAFE Port Act required TSA to implement TWIC at the 10 highest-risk ports by July 1, 2007; conduct a pilot program to test TWIC access control technologies in the maritime environment; issue regulations requiring TWIC card readers based on the findings of the pilot; and periodically report to Congress on the status of the program. However, TSA did not meet the July 1 deadline, citing the need to conduct additional testing of the systems and technologies that will be used to enroll the estimated 770,000 workers who will be required to obtain a TWIC card. According to TSA officials, the agency plans to complete this testing and begin enrolling workers at the Port of Wilmington in October 2007, and begin enrolling workers at additional ports soon thereafter. TSA is also in the process of conducting a pilot program to test TWIC access control technologies in the maritime environment that will include a variety of maritime facilities and vessels in multiple geographic locations. According to TSA, the results of the pilot program will help the agency issue future regulations that will require the installation of access control systems necessary to read the TWIC cards. It is important that TSA establish clear and reasonable time frames for implementing TWIC as the agency begins enrolling workers and issuing TWIC cards in October. TSA could face additional challenges as the TWIC implementation progresses; these include monitoring the effectiveness of contract planning and oversight. TSA has developed a quality assurance surveillance plan with performance metrics that the enrollment contractor must meet to receive payment.
The agency has also taken steps to strengthen government oversight of the TWIC contract by adding staff with program and contract management expertise. However, the effectiveness of these steps will not be clear until implementation of the TWIC program begins. Ensuring a successful enrollment process for the program presents another challenge. According to TSA, the agency has made communication and coordination top priorities by taking actions such as establishing a TWIC stakeholder communication committee and requiring the enrollment contractor to establish a plan for coordinating and communicating with all stakeholders who will be involved in the program. Finally, TSA will have to address access control technologies to ensure that the program is implemented effectively. It will be important that TSA’s TWIC access control technology pilot ensure that these technologies work effectively in the maritime environment before facilities and vessels are required to implement them.

Since the terrorist attacks on September 11, the federal government has taken steps to ensure that transportation workers, many of whom transport hazardous materials or have access to secure areas in locations such as ports, are properly screened to ensure they do not pose a security risk. Concerns have been raised, however, that transportation workers may face a variety of background checks, each with different standards. In July 2004, the 9/11 Commission reported that having too many different biometric standards, travel facilitation systems, credentialing systems, and screening requirements hampers the development of information crucial for stopping terrorists from entering the country, is expensive, and is inefficient. The commission recommended that a coordinating body raise standards, facilitate information-sharing, and survey systems for potential problems. In August 2004, Homeland Security Presidential Directive 11 announced a new U.S. policy to “implement a coordinated and comprehensive approach to terrorist-related screening—in immigration, law enforcement, intelligence, counterintelligence, and protection of the border, transportation systems, and critical infrastructure—that supports homeland security, at home and abroad.” DHS components have begun a number of their own background check initiatives. For example, in January 2007, TSA determined that the background checks required for three other DHS programs satisfied the background check requirement for the TWIC program. That is, an applicant who has already undergone a background check in association with any of these three programs does not have to undergo an additional background check and pays a reduced fee to obtain a TWIC card. Similarly, the Coast Guard plans to consolidate four credentials and require that all pertinent information previously submitted by an applicant at a Coast Guard Regional Examination Center be forwarded by the center to TSA through the TWIC enrollment process.

In April 2007, we completed a study of DHS background check programs as part of a SAFE Port Act requirement to do so. We found that the six programs we reviewed were conducted independently of one another, collected similar information, and used similar background check processes. Further, each program operated separate enrollment facilities to collect background information and did not share it with the other programs.
We also found that DHS did not track the number of workers who, needing multiple credentials, were subjected to multiple background check programs. Because DHS is responsible for a large number of background check programs, we recommended that DHS ensure that its coordination plan includes implementation steps, time frames, and budget requirements; discusses potential costs/benefits of program standardization; and explores options for coordinating and aligning background checks within DHS and other federal agencies. DHS concurred with our recommendations and continues to take steps—both at the department level and within its various agencies—to consolidate, coordinate, and harmonize such background check programs.

At the department level, DHS created SCO in July 2006 to coordinate DHS background check programs. SCO is in the early stages of developing its plans for this coordination. In December 2006, SCO issued a report identifying common problems, challenges, and needed improvements in the credentialing programs and processes across the department. The office awarded a contract in April 2007 that will provide the methodology and support for developing an implementation plan to include common design and comparability standards and related milestones to coordinate DHS screening and credentialing programs. Under this contract, DHS and SCO are to produce three deliverables to align DHS’s screening and credentialing activities, set a method and time frame for applying a common set of design and comparability standards, and eliminate redundancy through harmonization. These three deliverables are as follows:

Credentialing framework: A framework, completed in July 2007, that describes a credentialing life cycle of registration and enrollment, eligibility vetting and risk assessment, issuance, expiration and revocation, and redress. This framework was to incorporate risk-based levels or criteria, and an assessment of the legal, privacy, policy, operational, and technical challenges.

Technical review: An assessment, scheduled for completion in October 2007, to be conducted by the contractor in conjunction with the DHS Office of the Chief Information Officer. This review is to include an examination of the issues present in the current technical environment and the proposed future technical environment needed to address those issues, and to provide recommendations for targeted investment reuse and key target technologies.

Transition plan: A plan, scheduled to be completed in November 2007, that is to outline the projects needed to actualize the framework, including identification of major activities, milestones, and associated timelines and costs. Stakeholders in this effort include multiple components of DHS and the Departments of State and Justice.

In addition, the DHS Office of the Chief Information Officer (CIO) and the director of SCO issued a memo in May 2007 to promote standardization across screening and credentialing programs.
In this memo, DHS indicated that (1) programs requiring the collection and use of fingerprints to vet individuals will use the Automated Biometric Identification System (IDENT); (2) these programs are to reuse existing or currently planned and funded infrastructure for the intake of identity information to the greatest extent possible; (3) its CIO is to establish a procurement plan to ensure that the department can handle a large volume of automated vetting from programs currently in the planning phase; and (4) to support the sharing of databases and potential consolidation of duplicative applications, the Enterprise Data Management Office is currently developing an inventory of biographic data assets that DHS maintains to support identity management and screening processes.

While continuing to consolidate, coordinate, and harmonize background check programs, DHS will likely face additional challenges, such as ensuring that its plans are sufficiently complete without being overly restrictive and addressing the lack of information regarding the potential costs and benefits associated with the number of redundant background checks. SCO will be challenged to coordinate DHS’s background check programs in such a way that any common set of standards developed to eliminate redundant checks meets the varied needs of all the programs without being so strict that it unduly limits the applicant pool or so intrusive that potential applicants are unwilling to take part. Without knowing the potential costs and benefits associated with the number of redundant background checks that harmonization would eliminate, DHS lacks the performance information that would allow its program managers to compare their program results with goals. Thus, DHS cannot be certain where to target program resources to improve performance. As we recommended, DHS could benefit from a plan that includes, at a minimum, a discussion of the potential costs and benefits associated with the number of redundant background checks that would be eliminated through harmonization.

Through the development of strategic plans, human capital strategies, and performance measures, several container security programs have been established and matured. However, these programs continue to face technical and management challenges in implementation. As part of its layered security strategy, CBP developed the Automated Targeting System as a decision support tool to assess the risks of individual cargo containers. ATS is a complex mathematical model that uses weighted rules that assign a risk score to each arriving shipment based on shipping information (e.g., manifests, bills of lading, and entry data). Although the program has faced quality assurance challenges from its inception, CBP has made significant progress in addressing these challenges. CBP’s in-bond program does not collect detailed information at the U.S. port of arrival that could aid in identifying cargo posing a security risk and promote the effective use of inspection resources. In the past, CSI has lacked sufficient staff to meet program requirements. C-TPAT has faced challenges with validation quality and management in the past, in part due to its rapid growth. The Department of Energy’s (DOE) Megaports Initiative faces ongoing operational and technical challenges in the installation and maintenance of radiation detection equipment at ports.
In addition, implementing the Secure Freight Initiative and the 9/11 Commission Act of 2007 presents additional challenges for the scanning of cargo containers inbound to the United States.

CBP is responsible for preventing terrorists and weapons of mass destruction from entering the United States. As part of this responsibility, CBP addresses the potential threat posed by the movement of oceangoing cargo containers. To perform this mission, CBP officers at seaports use their knowledge and CBP’s automated systems to help determine which containers entering the country will undergo inspections, and then perform the necessary level of inspection of each container based upon risk. To assist in determining which containers are to be subjected to inspection, CBP uses a layered security strategy that attempts to focus resources on potentially risky cargo shipped in containers while allowing other oceangoing containers to proceed without disrupting commerce. ATS is one key element of this strategy. CBP uses ATS as a decision support tool to review documentation, including electronic manifest information submitted by the ocean carriers on all arriving shipments, and entry data submitted by brokers to develop risk scores that help identify containers for additional inspection. CBP requires carriers to submit manifest information 24 hours before a United States-bound sea container is loaded onto a vessel in a foreign port. CBP officers use these scores to help them make decisions on the extent of documentary review or additional inspection as required.

We have conducted several reviews of ATS and made recommendations for its improvement. Consistent with these recommendations, CBP has implemented a number of important internal controls for the administration and implementation of ATS. For example, CBP (1) has established performance metrics for ATS, (2) is manually comparing the results of randomly conducted inspections with the results of inspections resulting from ATS analysis of the shipment data, and (3) has developed and implemented a testing and simulation environment to conduct computer-generated tests of ATS. Since our last report on ATS, the SAFE Port Act required that the CBP Commissioner take additional actions to further improve ATS. These requirements included steps such as (1) having an independent panel review the effectiveness and capabilities of ATS; (2) considering future iterations of ATS that would incorporate smart features; (3) ensuring that ATS has the capability to electronically compare manifest and other available data to detect any significant anomalies and facilitate their resolution; (4) ensuring that ATS has the capability to electronically identify, compile, and compare select data elements following a maritime transportation security incident; and (5) developing a schedule to address recommendations made by GAO and the Inspectors General of the Department of the Treasury and DHS.

CBP’s in-bond system—which allows goods to transit the United States without officially entering U.S. commerce—must balance the competing goals of providing port security, facilitating trade, and collecting trade revenues. However, we have earlier reported that CBP’s management of the system has impeded efforts to manage security risks. Specifically, CBP does not collect detailed information on in-bond cargo at the U.S. port of arrival that could aid in identifying cargo posing a security risk and promote effective use of inspection resources.
The in-bond system is designed to facilitate the flow of trade throughout the United States and is estimated to be widely used. The U.S. customs system allows cargo to move from the U.S. arrival port, without appraisal or payment of duties, to another U.S. port for official entry into U.S. commerce or for exportation. In-bond regulations currently allow bonded carriers 15 to 60 days, depending on the mode of shipment, to reach their final destination and allow them to change a shipment’s final destination without notifying CBP. The in-bond system allows the trade community to avoid congestion and delays at U.S. seaports whose infrastructure has not kept pace with the dramatic growth in trade volume. In-bond facilitates trade by allowing importers and shipping agents the flexibility to move cargo more efficiently. Based on the number of in-bond transactions reported by CBP for the 6-month period of October 2004 through March 2005, we found that over 6.5 million in-bond transactions were initiated nationwide. Some CBP port officials have estimated that in-bond shipments represent from 30 percent to 60 percent of goods received at their ports.

As discussed earlier in this testimony, CBP uses manifest information it receives on all cargo arriving at U.S. ports (including in-bond cargo) as input for ATS scoring to aid in identifying security risks and setting inspection priorities. For regular cargo, the ATS score is updated with more detailed information as the cargo makes official entry at the arrival port. For in-bond cargo, the ATS scores generally are not updated until these goods move from the port of arrival to the destination port for official entry into United States commerce, or not updated at all for cargo that is intended to be exported. As a result, in-bond goods might transit the United States without having the most accurate ATS risk score. Entry information frequently changes the ATS score for in-bond goods. For example, CBP provided data for four major ports comparing the ATS score assigned to in-bond cargo at the port of arrival based on the manifest to the ATS score given after goods made official entry at the destination port. These data show that, for the four ports, after being updated with entry information, the ATS score based on the manifest information stayed the same an average of 30 percent of the time, increased an average of 23 percent of the time, and decreased an average of 47 percent of the time. A higher ATS score can result in higher priority being given to cargo for inspection than otherwise would be given based solely on the manifest information. A lower ATS score can result in cargo being given a lower priority for inspection and potentially shift inspection resources to cargo deemed a higher security risk. Without having the most accurate ATS score, in-bond goods transiting the United States pose a potential security threat because higher-risk cargo may not be identified for inspection at the port of arrival. In addition, scarce inspection resources may be misdirected to in-bond goods that a security score based on better information might have shown did not warrant inspection.

We earlier recommended that the Commissioner of CBP take action in three areas to improve the management of the in-bond program, including collecting and using improved information on in-bond shipments to update the ATS score for in-bond movements at the arrival port and enable better informed decisions affecting security, trade, and revenue collection.
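To illustrate the general logic of a weighted-rules scoring model of this kind, and why a score computed from manifest data alone can change once entry data are filed, the following sketch shows a simplified, hypothetical scoring function written in Python. The rule names, weights, threshold, and data fields are illustrative assumptions only; they are not drawn from CBP's actual, non-public ATS rules.

# Illustrative sketch only: a simplified weighted-rules risk score.
# The rules, weights, and threshold below are hypothetical and do not
# reflect CBP's actual, non-public ATS rules or scoring.

INSPECTION_THRESHOLD = 50  # hypothetical cutoff for flagging a container

# Each rule is a (description, weight, predicate) triple evaluated against
# whatever shipment data are available at the time of scoring.
RULES = [
    ("first-time shipper", 20,
     lambda d: d.get("shipper_history", 0) == 0),
    ("vague cargo description", 25,
     lambda d: d.get("cargo_description") in (None, "", "FAK", "general cargo")),
    ("routing through a port of concern", 30,
     lambda d: d.get("routing_risk", False)),
    ("consignee not yet identified", 15,
     lambda d: not d.get("consignee_known", False)),
]

def risk_score(shipment_data):
    """Sum the weights of every rule that fires on the available data."""
    return sum(weight for _, weight, fires in RULES if fires(shipment_data))

# Score based on manifest data alone (what is available 24 hours before lading).
manifest_only = {"shipper_history": 0, "cargo_description": "FAK",
                 "consignee_known": False}
print(risk_score(manifest_only))        # 60 -> above the threshold, flagged for review

# Entry data filed later can raise or lower the score; for in-bond cargo that
# filing may not occur until the destination port, so the score at the arrival
# port may rest on the manifest alone.
with_entry_data = dict(manifest_only, cargo_description="machine parts",
                       consignee_known=True)
print(risk_score(with_entry_data))      # 20 -> below the threshold

In this simplified form, the same mechanism that makes entry data valuable for regular cargo also illustrates the in-bond gap described above: until the richer data are filed, the score rests only on the manifest.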
DHS agreed with most of our recommendations. According to CBP, it is in the process of developing an in-bond weight set to be used to further identify cargo posing a security risk. The weight set is being developed based on expert knowledge, analysis of previous in-bond seizures, and creation of rules based on in-bond concepts. The SAFE Port Act of 2006 contains provisions related to securing the international cargo supply chain, including provisions related to the movement of in-bond cargo. Specifically, it requires that CBP submit a report to several congressional committees on the in-bond system that includes an assessment of whether ports of arrival should require additional information for in-bond cargo, a plan for tracking in-bond cargo in CBP’s Automated Commercial Environment information system, and an assessment of the personnel required to ensure reconciliation of in-bond cargo between the arrival port and the destination port. The report must also contain an assessment of the feasibility of reducing transit time while traveling in-bond and an evaluation of the criteria for targeting and examining in-bond cargo. Although the report was due June 30, 2007, CBP has not yet finalized the report or released it to Congress.

In January 2002, CBP initiated its CSI program to detect and deter terrorists from smuggling weapons of mass destruction (WMD) via cargo containers before they reach domestic seaports. The SAFE Port Act formalized the CSI program into law. Under CSI, foreign governments sign a bilateral agreement with CBP to allow teams of U.S. customs officials to be stationed at foreign seaports to identify cargo container shipments at risk of containing WMD. CBP personnel use automated risk assessment information and intelligence to identify those shipments at risk of containing WMD. When a shipment is determined to be high risk, CBP officials refer it to host government officials who determine whether to examine the shipment before it leaves their seaport for the United States. In most cases, host government officials honor the U.S. request by examining the referred shipments with nonintrusive inspection equipment and, if they deem necessary, by opening the cargo containers to physically search the contents. CBP planned to have CSI operations at a total of 58 seaports by the end of fiscal year 2007.

Our 2003 and 2005 reports on the CSI program found both successes and challenges faced by CBP in implementing the program. Since our last CSI report in 2005, CBP has addressed some of the challenges we identified and has taken steps to improve the CSI program. Specifically, CBP contributed to the Strategy to Enhance International Supply Chain Security that DHS issued in July 2007, which addressed a SAFE Port Act requirement and filled an important gap—between broad national strategies and program-specific strategies, such as for CSI—in the strategic framework for maritime security that has evolved since 9/11. In addition, in 2006 CBP issued a revised CSI strategic plan for 2006 to 2011, which added three critical elements that we had identified in our April 2005 report as missing from the plan’s previous iteration. In the revised plan, CBP described how performance goals and measures are related to CSI objectives, how CBP evaluates CSI program operations, and what external factors beyond CBP’s control could affect program operations and outcomes.
Also, by expanding CSI operations to 58 seaports by the end of September 2007, CBP would have met its objective of expanding CSI locations and program activities. CBP projected that, at the end of fiscal year 2007, between 85 and 87 percent of all U.S.-bound container shipments would pass through CSI ports where the risk level of the container cargo is assessed and the contents are examined as deemed necessary.

Although CBP’s goal is to review information about all U.S.-bound containers at CSI seaports for high-risk contents before the containers depart for the United States, we reported in 2005 that the agency had not been able to place enough staff at some CSI ports to do so. Also, the SAFE Port Act required DHS to develop a human capital management plan to determine adequate staffing levels in U.S. and CSI ports. CBP has developed a human capital plan, increased the number of staff at CSI ports, and provided additional support to the deployed CSI staff by using staff in the United States to screen containers for various risk factors and potential inspection. With these additional resources, CBP reports that manifest data for all U.S.-bound container cargo are reviewed using ATS to determine whether the container is at high risk of containing WMD. However, the agency faces challenges in ensuring that optimal numbers of staff are assigned to CSI ports due in part to its reliance on placing staff overseas at CSI ports without systematically determining which functions could be performed overseas and which could be performed domestically. Also, in 2006 CBP improved its methods for conducting onsite evaluations of CSI ports, in part by requiring CSI teams at the seaports to demonstrate their proficiency at conducting program activities and by employing electronic tools designed to assist in the efficient and systematic collection and analysis of data to help in evaluating the CSI team’s proficiency. In addition, CBP continued to refine the performance measures it uses to track the effectiveness of the CSI program by streamlining the number of measures it uses to six; modifying how one measure is calculated to address an issue we identified in our April 2005 report; and developing performance targets for the measures. We are continuing to review these assessment practices as part of our ongoing review of the CSI program, and expect to report on the results of this effort shortly.

Similar to our recommendation in a previous CSI report, the SAFE Port Act called upon DHS to establish minimum technical criteria for the use of nonintrusive inspection equipment in conjunction with CSI. The act also directs DHS to require that seaports receiving CSI designation operate such equipment in accordance with these criteria and with standard operating procedures developed by DHS. CBP officials stated that their agency faces challenges in implementing this requirement due to sovereignty issues and the fact that the agency is not a standard-setting organization, either for equipment or for inspection processes or practices. However, CBP has developed minimum technical standards for equipment used at domestic ports, and the World Customs Organization (WCO) has described issues—not standards—to consider when procuring inspection equipment. Our work suggests that CBP may face continued challenges establishing equipment standards and monitoring host government operations, which we are also examining in our ongoing review of the CSI program.
CBP initiated C-TPAT in November 2001 to complement other maritime security programs as part of the agency’s layered security strategy. In October 2006, the SAFE Port Act formalized C-TPAT into law. C-TPAT is a voluntary program that enables CBP officials to work in partnership with private companies to review the security of their international supply chains and improve the security of their shipments to the United States. In return for committing to improve the security of their shipments by joining the program, C-TPAT members receive benefits that are likely to reduce the scrutiny of their shipments, such as a reduced number of inspections or shorter wait times for their shipments. CBP uses information about C-TPAT membership to adjust risk-based targeting of these members’ shipments in ATS. As of July 2007, CBP had certified more than 7,000 companies that import goods via cargo containers through U.S. seaports—which accounted for approximately 45 percent of all U.S. imports—and validated the security practices of 78 percent of these certified participants.

We reported on the progress of the C-TPAT program in 2003 and 2005 and recommended that CBP develop a strategic plan and performance measures to track the program’s status in meeting its strategic goals. DHS concurred with these recommendations. The SAFE Port Act also mandated that CBP develop and implement a 5-year strategic plan with outcome-based goals and performance measures for C-TPAT. CBP officials stated that they are in the process of updating the C-TPAT strategic plan, which was issued in November 2004, to cover 2007 to 2012. This updated plan is being reviewed within CBP, but a time frame for issuing the plan has not been established. We recommended in our March 2005 report that CBP establish performance measures to track its progress in meeting the goals and objectives established as part of the strategic planning process. Although CBP has since put additional performance measures in place, CBP’s efforts have focused on measures regarding program participation and facilitating trade and travel. CBP has not yet developed performance measures for C-TPAT’s efforts aimed at ensuring improved supply chain security, which is the program’s purpose.

In our previous work, we acknowledged that the C-TPAT program holds promise as part of a layered maritime security strategy. However, we also raised a number of concerns about the overall management of the program. Since our past reports, the C-TPAT program has continued to mature. The SAFE Port Act mandated that actions—similar to ones we had recommended in our March 2005 report—be taken to strengthen the management of the program. For example, the act included a new goal that CBP make a certification determination within 90 days of CBP’s receipt of a C-TPAT application, validate C-TPAT members’ security measures and supply chain security practices within 1 year of their certification, and revalidate those members no less than once in every 4 years. As we recommended in our March 2005 report, CBP has developed a human capital plan and implemented a records management system for documenting key program decisions. CBP has addressed C-TPAT staffing challenges by increasing the number of supply chain security specialists from 41 in 2005 to 156 in 2007. In February 2007, CBP updated its resource needs to reflect SAFE Port Act requirements, including that certification, validation, and revalidation processes be conducted within specified time frames.
CBP believes that C-TPAT’s current staff of 156 supply chain security specialists will allow it to meet the act’s initial validation and revalidation goals for 2007 and 2008. If an additional 50 specialists authorized by the act are made available by late 2008, CBP expects to be able to remain in compliance with the act’s time frame requirements through 2009. In addition, CBP developed and implemented a centralized electronic records management system to facilitate information storage and sharing and communication with C-TPAT partners. This system—known as the C-TPAT Portal—enables CBP to track and ascertain the status of C-TPAT applicants and partners to ensure that they are certified, validated, and revalidated within required time frames. As part of our ongoing work, we are reviewing the data captured in the Portal, including data needed by CBP management to assess the efficiency of C-TPAT operations and to determine compliance with its program requirements. These actions—dedicating resources to carry out certification and validation reviews and putting a system in place to track the timeliness of these reviews—should help CBP meet several of the mandates of the SAFE Port Act. We expect to issue a final report documenting results of this work shortly.

Our 2005 report raised concerns about CBP granting benefits prematurely—before CBP had validated company practices. Related to this, the SAFE Port Act codified CBP’s policy of granting graduated benefits to C-TPAT members. Instead of granting new members full benefits without actual verification of their supply chain security, CBP implemented three tiers to grant companies graduated benefits based on CBP’s certification and validation of their security practices. Tier 1 benefits—a limited reduction in the score assigned in ATS—are granted to companies upon certification that their written description of their security profile meets minimum security criteria. Companies whose security practices CBP validates in an on-site assessment receive Tier 2 benefits that may include reduced scores in ATS, reduced cargo examinations, and priority searches of cargo. If CBP’s validation shows sustained commitment by a company to security practices beyond what is expected, the company receives Tier 3 benefits. Tier 3 benefits may include expedited cargo release at U.S. ports at all threat levels, further reduction in cargo examinations, priority examinations, and participation in joint incident management exercises.

Our 2005 report also raised concerns about whether the validation process was rigorous enough. Similarly, the SAFE Port Act mandates that the validation process be strengthened, including setting a 1-year time frame for completing validations. CBP initially set a goal of validating all companies within their first 3 years as C-TPAT members, but the program’s rapid growth in membership made the goal unachievable. CBP then moved to a risk-based approach to selecting members for validation, considering factors such as a company’s having foreign supply chain operations in a known terrorist area or involving multiple foreign suppliers. CBP further modified its approach to selecting companies for validation to achieve greater efficiency by conducting “blitz” operations to validate foreign elements of multiple members’ supply chains in a single trip. Blitz operations focus on factors such as C-TPAT members within a certain industry, supply chains within a certain geographic area, or foreign suppliers to multiple C-TPAT members.
Risks remain a consideration, according to CBP, but the blitz strategy drives the decision of when a member company will be validated. In addition to taking these actions to efficiently conduct validations, CBP has periodically updated the minimum security requirements that companies must meet to be validated and is conducting a pilot program using third-party contractors to conduct validation assessments. As part of our ongoing work, we are reviewing these actions, which are required as part of the SAFE Port Act, and other CBP efforts to enhance its C-TPAT validation process.

The CSI and C-TPAT programs have provided a model for global customs security standards, but as other countries adopt the core principles of CSI and programs similar to C-TPAT, CBP may face new challenges. Foreign officials within the World Customs Organization and elsewhere have looked to the CSI and C-TPAT programs as potential models for enhancing supply chain security. Also, CBP has taken a lead role in working with members of the domestic and international customs and trade community on approaches to standardizing supply chain security worldwide. As CBP has recognized, and we have previously reported, in security matters the United States is not self-contained, in either its problems or its solutions. The growing interdependence of nations requires policymakers to recognize the need to work in partnerships across international boundaries to achieve vital national goals. For this reason, CBP has committed through its strategic planning process to develop and promote an international framework of standards governing customs-to-customs relationships and customs-to-business relationships in a manner similar to CSI and C-TPAT, respectively.

To achieve this, CBP has worked with foreign customs administrations through the WCO to establish a framework creating international standards that provide increased security of the global supply chain while facilitating international trade. The member countries of the WCO, including the United States, adopted such a framework, known as the WCO Framework of Standards to Secure and Facilitate Global Trade and commonly referred to as the SAFE Framework, in June 2005. The SAFE Framework internationalizes the core principles of CSI in creating global standards for customs security practices and promotes international customs-to-business partnership programs, such as C-TPAT. As of September 11, 2007, 148 WCO member countries had signed letters of intent to implement the SAFE Framework. CBP, along with the customs administrations of other countries and through the WCO, provides technical assistance and training to those countries that want to implement the SAFE Framework but do not yet have the capacity to do so. The SAFE Framework enhances the CSI program by promoting the implementation of CSI-like customs security practices, including the use of electronic advance information requirements and risk-based targeting, in both CSI and non-CSI ports worldwide. The framework also lays the foundation for mutual recognition, an arrangement whereby one country can attain a certain level of assurance about the customs security standards and practices and business partnership programs of another country. In June 2007, CBP entered into the first mutual recognition arrangement of a business-to-customs partnership program with the New Zealand Customs Service.
This arrangement stipulates that members of one country’s business-to-customs program be recognized and receive similar benefits from the customs service of the other country. CBP is pursuing similar arrangements with Jordan and Japan, and is conducting a pilot program with the European Commission to test approaches to achieving mutual recognition and address differences in their respective programs. However, the specific details of how the participating countries’ customs officials will implement the mutual recognition arrangement—such as what benefits, if any, should be allotted to members of other countries’ C-TPAT-like programs—have yet to be determined. As CBP goes forward, it may face challenges in defining the future of its CSI and C-TPAT programs and, more specifically, in managing the implementation of mutual recognition arrangements, including articulating and agreeing to the criteria for accepting another country’s program; the specific arrangements for implementation, including the sharing of information; and the actions for verification, enforcement, and, if necessary, termination of the arrangement.

The Megaports Initiative, initiated by DOE’s National Nuclear Security Administration in 2003, represents another component in the efforts to prevent terrorists from smuggling WMD in cargo containers from overseas locations. The goal of this initiative is to enable foreign government personnel at key foreign seaports to use radiation detection equipment to screen shipping containers entering and leaving these ports, regardless of the containers’ destination, for nuclear and other radioactive material that could be used against the United States or its allies. DOE installs radiation detection equipment, such as radiation portal monitors and handheld radioactive isotope identification devices, at foreign seaports; the equipment is then operated by foreign government officials and port personnel working at these ports. Through August 2007, DOE had completed installation of radiation detection equipment at eight ports: Rotterdam, the Netherlands; Piraeus, Greece; Colombo, Sri Lanka; Algeciras, Spain; Singapore; Freeport, Bahamas; Manila, Philippines; and Antwerp, Belgium (Phase I). Operational testing is under way at four additional ports: Antwerp, Belgium (Phase II); Puerto Cortes, Honduras; Qasim, Pakistan; and Laem Chabang, Thailand. Additionally, DOE has signed agreements to begin work and is in various stages of implementation at ports in 12 other countries, including the United Kingdom, United Arab Emirates/Dubai, Oman, Israel, South Korea, China, Egypt, Jamaica, the Dominican Republic, Colombia, Panama, and Mexico, as well as Taiwan and Hong Kong. Several of these ports are also part of the Secure Freight Initiative, discussed in the next section. Further, in an effort to expand cooperation, DOE is engaged in negotiations with approximately 20 additional countries in Europe, Asia, the Middle East, and Latin America.

DOE had made limited progress in gaining agreements to install radiation detection equipment at the highest priority seaports when we reported on this program in March 2005. At that time, the agency had completed work at only two ports and signed agreements to initiate work at five others. We also noted that DOE’s cost projections for the program were uncertain, in part because they were based on DOE’s $15 million estimate for the average cost per port.
This per port cost estimate may not be accurate because it was based primarily on DOE's radiation detection assistance work at Russian land borders, airports, and seaports and did not account for the fact that the costs of installing equipment at individual ports vary and are influenced by factors such as a port's size, physical layout, and existing infrastructure. Since our review, DOE has developed a strategic plan for the Megaports Initiative and revised its per port estimates to reflect port size, with per port estimates ranging from $2.6 million to $30.4 million. As we earlier reported, DOE faces several operational and technical challenges specific to installing and maintaining radiation detection equipment at foreign ports as the agency continues to implement its Megaports Initiative. These challenges include ensuring the ability to detect radioactive material, overcoming the physical layout of ports and cargo-stacking configurations, and sustaining equipment in port environments with high winds and sea spray. The SAFE Port Act required that a pilot program—known as the Secure Freight Initiative (SFI)—be conducted to determine the feasibility of 100 percent scanning of U.S.-bound containers. To fulfill this requirement, CBP and DOE jointly announced the formation of SFI in December 2006, as an effort to build upon existing port security measures by enhancing the U.S. government's ability to scan containers for nuclear and radiological materials overseas and better assess the risk of inbound containers. In essence, SFI builds upon the CSI and Megaports programs. The SAFE Port Act specified that new integrated scanning systems that couple nonintrusive imaging equipment and radiation detection equipment must be pilot-tested. It also required that, once fully implemented, the pilot integrated scanning system scan 100 percent of containers destined for the United States that are loaded at pilot program ports. According to agency officials, the initial phase of the initiative will involve the deployment of a combination of existing container scanning technology—such as X-ray and gamma ray scanners used by host nations at CSI ports to locate high-density objects inside containers that could be used to shield nuclear materials—and radiation detection equipment. The ports chosen to receive this integrated technology are Port Qasim in Pakistan, Puerto Cortes in Honduras, and Southampton in the United Kingdom. Four other ports located in Hong Kong, Singapore, the Republic of Korea, and Oman will receive more limited deployment of these technologies as part of the pilot program. According to CBP, containers from these ports will be scanned for radiation and other risk factors before they are allowed to depart for the United States. If the scanning systems indicate that there is a concern, both CSI personnel and host country officials will simultaneously receive an alert and the specific container will be inspected before that container continues to the United States. CBP officials will determine which containers are inspected, either on the scene locally or at CBP's National Targeting Center. (A conceptual sketch of this scan-and-referral flow appears at the end of this section.) Per the SAFE Port Act, CBP is to report by April 2008 on, among other things, the lessons learned from the SFI pilot ports and the need for and the feasibility of expanding the system to other CSI ports. Every 6 months thereafter, CBP is to report on the status of full-scale deployment of the integrated scanning systems to scan all containers bound for the United States before their arrival.
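The scan-and-referral flow that CBP describes for the SFI pilot ports can be summarized conceptually in a few lines of logic. The following sketch, in Python, is ours and not CBP's or DOE's actual system; the function name, inputs, and messages are illustrative assumptions drawn only from the program description above.

```python
def sfi_referral_decision(radiation_alarm, image_anomaly, other_risk_factors):
    """Conceptual sketch (not CBP's system) of the SFI scan-and-referral flow.

    The inputs are illustrative stand-ins for the radiation detection,
    nonintrusive imaging, and risk information described in the testimony.
    """
    if radiation_alarm or image_anomaly or other_risk_factors:
        # Both CSI personnel and host country officials receive an alert, and the
        # container is to be inspected before it continues to the United States.
        alerted = ["CSI personnel", "host country customs officials"]
        return "hold for inspection before departure", alerted
    # Otherwise the container may depart for the United States.
    return "allow departure for the United States", []


decision, alerted = sfi_referral_decision(radiation_alarm=True, image_anomaly=False,
                                          other_risk_factors=False)
print(decision, alerted)
```

In practice, as noted above, CBP officials decide which flagged containers are actually inspected, either on the scene locally or at the National Targeting Center.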
Recent legislative actions have updated U.S. maritime security requirements and may affect overall international maritime security strategy. In particular, the recently enacted Implementing Recommendations of the 9/11 Commission Act (9/11 Act) requires, by 2012, 100 percent scanning of U.S.-bound cargo containers using nonintrusive imaging equipment and radiation detection equipment at foreign seaports. The act also specifies conditions for potential extensions beyond 2012 if a seaport cannot meet that deadline. Additionally, it requires the Secretary of DHS to develop technological and operational standards for scanning systems used to conduct 100 percent scanning at foreign seaports. The Secretary also is required to ensure that actions taken under the act do not violate international trade obligations and are consistent with the WCO SAFE Framework. The 9/11 Act provision replaces the SAFE Port Act requirement that called for 100 percent scanning of cargo containers before their arrival in the United States but that required implementation as soon as possible rather than specifying a deadline. While we have not yet reviewed the implementation of the 100 percent scanning requirement, we have a number of preliminary observations based on field visits to foreign ports regarding potential challenges CBP may face in implementing this requirement: CBP may face challenges balancing the new requirement with its current international risk management approach. CBP may have difficulty requiring 100 percent scanning while also maintaining a risk-based security approach that has been developed with many of its international partners. Currently, under the CSI program, CBP uses automated targeting tools to identify, for further inspection, containers that pose a risk for terrorism before they are placed on vessels bound for the United States. As we have previously reported, using risk management allows the nation to reduce the risk of terrorist attack with the resources allocated and is an approach that has been accepted governmentwide. Furthermore, many U.S. and international customs officials we have spoken to, including officials from the World Customs Organization, have stated that the 100 percent scanning requirement is contrary to the SAFE Framework developed and implemented by the international customs community, including CBP. The SAFE Framework, based on CSI and C-TPAT, calls for a risk management approach, whereas the 9/11 Act calls for the scanning of all containers regardless of risk. The United States may not be able to reciprocate if other countries request it. The CSI program, whereby CBP officers are placed at foreign seaports to target cargo bound for the United States, is based on a series of bilateral, reciprocal agreements with foreign governments. These reciprocal agreements also allow foreign governments the opportunity to place customs officials at U.S. seaports and request inspection of cargo containers departing from the United States and bound for their home country. Currently, customs officials from certain countries are stationed at domestic seaports and agency officials have told us that CBP has inspected 100 percent of containers that these officials have requested for inspection. According to CBP officials, the SFI pilot, as an extension of the CSI program, allows foreign officials to ask the United States to reciprocate and scan 100 percent of cargo containers bound for those countries.
Although the act establishing the 100 percent scanning requirement does not mention reciprocity, CBP officials have told us that the agency does not have the capacity to reciprocate should it be requested to do so, as officials of other governments have indicated they might once this provision of the 9/11 Act is in place. Logistical feasibility is unknown and may vary by port. Many ports may lack the space necessary to install additional equipment needed to comply with the requirement to scan 100 percent of U.S.-bound containers. Additionally, we observed that scanning equipment at some seaports is located several miles away from where cargo containers are stored, which may make it time-consuming and costly to transport these containers for scanning. Similarly, some seaports are configured in such a way that there are no natural bottlenecks that would allow for equipment to be placed such that all outgoing containers can be scanned, leaving the potential for containers to slip by without being scanned. Transshipment cargo containers—containers moved from one vessel to another—are only available for scanning for a short period of time and may be difficult to access. Similarly, it may be difficult to scan cargo containers that remain on board a vessel as it passes through a foreign seaport. CBP officials told us that currently containers such as these that are designated as high-risk at CSI ports are not scanned unless specific threat information is available regarding the cargo in that particular container. Technological maturity is unknown. Integrated scanning technologies to test the feasibility of scanning 100 percent of U.S.-bound cargo containers are not yet operational at all seaports participating in the pilot program, known as SFI. The SAFE Port Act requires CBP to produce a report regarding the program, which will include an evaluation of the effectiveness of scanning equipment at the SFI ports. However, this report will not be due until April 2008. Moreover, agency officials have stated that the amount of bandwidth necessary to transmit scanning equipment outputs to CBP officers for review exceeds what is currently feasible and that the electronic infrastructure necessary to transmit these outputs may be limited at some foreign seaports. Additionally, there are currently no international standards for the technical capabilities of inspection equipment. Agency officials have stated that CBP is not a standard-setting organization and has limited authority to implement standards for sovereign foreign governments. Resource responsibilities have not been determined. The 9/11 Act does not specify who would pay for additional scanning equipment, personnel, computer systems, or infrastructure necessary to establish 100 percent scanning of U.S.-bound cargo containers at foreign ports. According to the Congressional Budget Office (CBO) in its analysis of estimates for implementing this requirement, this provision would neither require nor prohibit the U.S. federal government from bearing the cost of conducting scans. For the purposes of its analysis, CBO assumed that the cost of acquiring, installing, and maintaining systems necessary to comply with the 100 percent scanning requirement would be borne by foreign ports to maintain trade with the United States. However, foreign government officials we have spoken to expressed concerns regarding the cost of equipment.
They also stated that the process for procuring scanning equipment may take years and can be difficult when trying to comply with changing U.S. requirements. These officials also expressed concern regarding the cost of additional personnel necessary to: (1) operate new scanning equipment; (2) view scanned images and transmit them to the United States; and (3) resolve false alarms. An official from one country with whom we met told us that, while his country does not scan 100 percent of exports, modernizing its customs service to focus more on exports required a 50 percent increase in personnel, and other countries trying to implement the 100 percent scanning requirement would likely have to increase the size of their customs administrations by at least as much. Use and ownership of data have not been determined. The 9/11 Act does not specify who will be responsible for managing the data collected through 100 percent scanning of U.S.-bound containers at foreign seaports. However, the SAFE Port Act specifies that scanning equipment outputs from SFI will be available for review by U.S. government officials either at the foreign seaport or in the United States. It is not clear who would be responsible for collecting, maintaining, disseminating, viewing or analyzing scanning equipment outputs under the new requirement. Other questions to be resolved include ownership of data, how proprietary information would be treated, and how privacy concerns would be addressed. CBP officials have indicated they are aware that challenges exist. They also stated that the SFI will allow the agency to determine whether these challenges can be overcome. According to senior officials from CBP and international organizations we contacted, 100 percent scanning of containers may divert resources, causing containers that are truly high risk to not receive adequate scrutiny due to the sheer volume of scanning outputs that must be analyzed. These officials also expressed concerns that 100 percent scanning of U.S.-bound containers could hinder trade, leading to long lines and burdens on staff responsible for viewing images. However, given that the SFI pilot program has only recently begun, it is too soon to determine how the 100 percent scanning requirement will be implemented and its overall impact on security. We provided a draft of this testimony to DHS agencies and incorporated technical comments as appropriate. Mr. Chairman and members of the committee, this completes my prepared statement. I will be happy to respond to any questions that you or other members of the committee have at this time. For information about this testimony, please contact Stephen L. Caldwell, Director, Homeland Security and Justice Issues, at (202) 512-9610, or caldwells@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Richard Ascarate, Jonathan Bachman, Jason Bair, Fredrick Berry, Christine Broderick, Stockton Butler, Steven Calvo, Frances Cook, Christopher Currie, Anthony DeFrank, Wayne Ekblad, Christine Fossett, Nkenge Gibson, Geoffrey Hamilton, Christopher Hatscher, Valerie Kasindi, Monica Kelly, Ryan Lambert, Nicholas Larson, Daniel Klabunde, Matthew Lee, Gary Malavenda, Robert Rivas, Leslie Sarapu, James Shafer, and April Thompson. Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation of Radiation Detection Equipment. GAO-07-1247T. 
Washington, D.C.: September 18, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1240T. Washington, D.C.: September 18, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1081T. Washington, D.C.: September 6, 2007. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007. Information on Port Security in the Caribbean Basin. GAO-07-804R. Washington, D.C.: June 29, 2007. Department of Homeland Security: Science and Technology Directorate's Expenditure Plan. GAO-07-868. Washington, D.C.: June 22, 2007. Homeland Security: Guidance from Operations Directorate Will Enhance Collaboration among Departmental Operations Centers. GAO-07-683T. Washington, D.C.: June 20, 2007. Department of Homeland Security: Progress and Challenges in Implementing the Department's Acquisition Oversight Plan. GAO-07-900. Washington, D.C.: June 13, 2007. Department of Homeland Security: Ongoing Challenges in Creating an Effective Acquisition Organization. GAO-07-948T. Washington, D.C.: June 7, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-835T. Washington, D.C.: May 15, 2007. Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-833T. Washington, D.C.: May 10, 2007. Maritime Security: Observations on Selected Aspects of the SAFE Port Act. GAO-07-754T. Washington, D.C.: April 26, 2007. Transportation Security: DHS Efforts to Eliminate Redundant Background Check Investigations. GAO-07-756. Washington, D.C.: April 26, 2007. International Trade: Persistent Weaknesses in the In-Bond Cargo System Impede Customs and Border Protection's Ability to Address Revenue, Trade, and Security Concerns. GAO-07-561. Washington, D.C.: April 17, 2007. Transportation Security: TSA Has Made Progress in Implementing the Transportation Worker Identification Credential Program, but Challenges Remain. GAO-07-681T. Washington, D.C.: April 12, 2007. Customs Revenue: Customs and Border Protection Needs to Improve Workforce Planning and Accountability. GAO-07-529. Washington, D.C.: April 12, 2007. Port Risk Management: Additional Federal Guidance Would Aid Ports in Disaster Planning and Recovery. GAO-07-412. Washington, D.C.: March 28, 2007. Transportation Security: DHS Should Address Key Challenges before Implementing the Transportation Worker Identification Credential Program. GAO-06-982. Washington, D.C.: September 29, 2006. Maritime Security: Information-Sharing Efforts Are Improving. GAO-06-933T. Washington, D.C.: July 10, 2006. Cargo Container Inspections: Preliminary Observations on the Status of Efforts to Improve the Automated Targeting System. GAO-06-591T. Washington, D.C.: March 30, 2006. Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making. GAO-05-927. Washington, D.C.: September 9, 2005. Combating Nuclear Smuggling: Efforts to Deploy Radiation Detection Equipment in the United States and in Other Countries. GAO-05-840T. Washington, D.C.: June 21, 2005.
Container Security: A Flexible Staffing Model and Minimum Equipment Requirements Would Improve Overseas Targeting and Inspection Efforts. GAO-05-557. Washington, D.C.: April 26, 2005. Homeland Security: Key Cargo Security Programs Can Be Improved. GAO-05-466T. Washington, D.C.: May 26, 2005. Maritime Security: Enhancements Made, But Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005. Cargo Security: Partnership Program Grants Importers Reduced Scrutiny with Limited Assurance of Improved Security. GAO-05-404. Washington, D.C.: March 11, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 30, 2005. Protection of Chemical and Water Infrastructure: Federal Requirements, Actions of Selected Facilities, and Remaining Challenges. GAO-05-327. Washington, D.C.: March 2005. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005. Port Security: Better Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106. Washington, D.C.: December 2004. Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. Because the safety and economic security of the United States depend in substantial part on the security of its 361 seaports, the United States has a vital national interest in maritime security. The Security and Accountability for Every Port Act (SAFE Port Act) modified existing legislation and created and codified new programs related to maritime security. The Department of Homeland Security (DHS) and its U.S. Coast Guard, Transportation Security Administration, and U.S. Customs and Border Protection have key maritime security responsibilities. This testimony synthesizes the results of GAO's completed work and preliminary observations from GAO's ongoing work pertaining to overall port security, security at individual facilities, and cargo container security. To perform this work, GAO visited domestic and overseas ports; reviewed agency program documents, port security plans, and post-exercise reports; and interviewed officials from the federal, state, local, private, and international sectors.
Federal agencies have improved overall port security efforts by establishing committees to share information with local port stakeholders, taking steps to establish interagency operations centers to monitor port activities, conducting operations such as harbor patrols and vessel escorts, writing port-level plans to prevent and respond to terrorist attacks, testing such plans through exercises, and assessing the security at foreign ports. However, these agencies face resource constraints and other challenges trying to meet the SAFE Port Act's requirements to expand these activities. For example, the Coast Guard faces budget constraints in trying to expand its current command centers and include other agencies at the centers. Similarly, private facilities and federal agencies have taken action to improve the security at approximately 3,000 individual facilities by writing facility-specific security plans, inspecting facilities to make sure they are complying with their plans, and developing special identification cards for workers to prevent terrorists from gaining access to secure areas. Again, federal agencies face challenges trying to meet the act's requirements to expand the scope or speed the implementation of such activities. For example, the Transportation Security Administration missed the act's July 2007 deadline to implement the identification card program at 10 selected ports because of delays in testing equipment and procedures. Federal programs related to the security of cargo containers have also improved as agencies are enhancing systems to identify high-risk cargo, expanding partnerships with other countries to screen containers before they depart for the United States, and working with international organizations to develop a global framework for container security. Federal agencies face challenges implementing container security aspects of the SAFE Port Act and other legislation. For example, Customs and Border Protection must test and implement a new program to screen 100 percent of all incoming containers overseas, a departure from its existing risk-based programs.
Section 884 of the American Jobs Creation Act of 2004 changed the rules for the amount taxpayers could claim as a deduction on their tax returns for donating a qualified vehicle to a charity. The new rules were effective for all vehicle donations made after December 31, 2004. Prior to this change, eligible taxpayers could claim up to the fair market value of the donated vehicle as a deduction on their tax returns. For any vehicles donated to a charity on January 1, 2005, or later with a claimed value that exceeds $500, taxpayers can only claim the lesser of the vehicle's fair market value or gross proceeds of the sale as a deduction on their tax returns unless the charity's intended use of the donated vehicle meets one of the three exceptions to the gross proceeds of the sale rule. If one of the exceptions is met, the taxpayer may be eligible to claim the fair market value of the vehicle as a deduction. The three exceptions are: the charity intends to make a significant intervening use of the vehicle; the charity intends to make a material improvement to the vehicle; or the charity intends to give or sell the vehicle to a needy individual at a price significantly below fair market value in direct furtherance of the charity's charitable purpose. If the charity sells the vehicle for $500 or less and the exceptions do not apply, the taxpayer can deduct the lesser of $500 or the vehicle's fair market value on the date of the contribution. Table 1 summarizes the amount donors could deduct for charitable contributions of vehicles before and after the changes in the rules for such deductions. (A simplified illustration of this calculation appears after the description of the donation process below.) The vehicle donation process generally consists of six steps: (1) solicitation/donor contact, (2) vehicle pickup, (3) vehicle sale, (4) distribution of proceeds, (5) charity provides donor with written acknowledgment, and (6) charity and donor file required forms with IRS. The vehicle donation process is depicted in figure 1. Step 1 – Solicitation/donor contact. The vehicle donation process generally begins with solicitations for donated vehicles through advertisements. Vehicle donations may be solicited directly by charities, third-party agents, or both, depending on the agreement between the charities and third-party agents. Vehicle donations are solicited through advertisements on the radio, in newspapers, on the Internet, on truck banners, on television, and on billboards. Also during this step, donors initiate contact with the charity and/or third-party agent to donate their vehicles. Either a charity or third-party agent may take the initial call from a potential donor, asking the donor questions that may be used to screen vehicles, such as the vehicle's make, year, and condition and whether the donor has the title to the vehicle. Step 2 – Vehicle pickup. After the donor makes the initial call to donate a vehicle, arrangements are made to pick up the vehicle and deliver it to wherever it will be stored until it is sold. Once vehicles are picked up, the charity or third-party agent also obtains the title of the vehicle from the donor. Step 3 – Vehicle sale. Once collected, donated vehicles are most often sold. Charities or third-party agents typically sell donated vehicles through auctions to auto dealers, to the public, or to vehicle salvagers. Step 4 – Distribution of proceeds. After vehicles have been liquidated, the proceeds are distributed. Charities with in-house vehicle donation programs keep proceeds that remain after deducting costs associated with processing the vehicles.
When charities use third-party agents, the financial agreement between the charity and the third-party agent dictates the proceeds that the charity and fund-raiser will receive from the sale. Step 5 – Charity provides donor with written acknowledgment. Charities are required to provide a contemporaneous written acknowledgment to the taxpayer for any contribution of a vehicle with a claimed value that exceeds $500. The charity can either create its own acknowledgment, or it can use Copy B of Form 1098-C (Contributions of Motor Vehicles, Boats, and Airplanes) as the contemporaneous written acknowledgment. For details about the information that must be included in the written acknowledgment, including contributions with a claimed value of $500 or less, see appendix IV. Step 6 – Charity and donor filing requirements. The charity must file a Form 1098-C if a donor contributes a qualified vehicle to a charity with a claimed value of more than $500. Charities are not to file Form 1098-C for contributions of qualified vehicles with a claimed value of $500 or less. Donors must attach Copy B of Form 1098-C, or a copy of the acknowledgment if the charity does not use Copy B for this purpose, to their returns if they are claiming a deduction of more than $500. A donor must also file Form 8283 (Noncash Charitable Contributions) if the deduction he/she is claiming for a donated vehicle is greater than $500 and attach it to the Form 1040 (U.S. Individual Income Tax Return). For more information on the filing requirements for charities and donors, see appendix V. We contacted 58 charities during August 2007 and found that almost all of them still operated their vehicle donation programs. We conducted in-depth interviews with officials from 10 of these charities and learned that changes in the number of donated vehicles did not appear to correspond with changes in the quality of vehicles donated or with changes in overall fund-raising. Also, some charities have developed innovative ideas to increase revenue from vehicle donations. Almost all of the charities contacted still operated their vehicle donation programs as of August 2007. In 2003, we interviewed officials from 65 charities when collecting data for our previous report. We attempted to contact all of these charities and successfully reached 58 of them. Through these contacts, we learned that all but 5 of the 58 charities still operated vehicle donation programs. During these contacts, we asked about changes in the numbers of vehicles being donated before and after the rule changes. Not all of the charities in our screening interviews provided information about the number of donated vehicles; 30 charities gave us this information and 21 of them said that they had seen a decrease, and the rest said that they saw increases or no change. Some of these changes were large, while others were fairly small. We conducted an in-depth interview with an official from one of the five charities that no longer operates a vehicle donation program. According to the official, charity managers decided to discontinue the vehicle donation program before the rules were changed, choosing instead to focus their fund-raising activities on a large fund-raising campaign that lasted for a few years. The official said that by reallocating the resources that had been devoted to the vehicle program, the charity could raise more money by focusing on obtaining large donations from selected donors for its fund-raising campaign.
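The deduction rules described earlier in this report reduce to a comparison of a few dollar amounts. The short Python sketch below is purely illustrative; it is not IRS guidance or software, the function and parameter names are our own, and it omits the substantiation, acknowledgment, and filing requirements that also determine whether a deduction can be claimed. It assumes a vehicle donated after December 31, 2004.

```python
def allowable_deduction(fair_market_value, gross_sale_proceeds, exception_applies):
    """Rough, illustrative post-2004 deduction logic for a donated vehicle.

    fair_market_value   -- donor's good-faith estimate of the vehicle's value
    gross_sale_proceeds -- amount the charity received when it sold the vehicle
    exception_applies   -- True if the charity's intended use meets one of the three
                           exceptions (significant intervening use, material improvement,
                           or gift/below-market sale to a needy individual)
    """
    if exception_applies:
        # When an exception applies, the donor may be eligible to claim fair market value.
        return fair_market_value
    if gross_sale_proceeds <= 500:
        # Vehicle sold for $500 or less: deduct the lesser of $500 or fair market value.
        return min(500, fair_market_value)
    # Otherwise, the deduction is the lesser of fair market value or gross sale proceeds.
    return min(fair_market_value, gross_sale_proceeds)


# Example: a vehicle the donor values at $2,400 that the charity sells for $800.
print(allowable_deduction(2400, 800, exception_applies=False))  # 800
print(allowable_deduction(2400, 800, exception_applies=True))   # 2400
print(allowable_deduction(450, 300, exception_applies=False))   # 450
```

Under the pre-2005 rules, by contrast, the eligible deduction was simply the donor's estimate of the vehicle's fair market value, regardless of what the charity ultimately received when it sold the vehicle.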
At the remaining four charities that no longer had vehicle donation programs, officials told us either that the employees who used to operate the vehicle donation program no longer worked for the charity and no other program officials could speak about the decision to discontinue the program or that the charity has not received any vehicles for the past 2 to 3 years. For the 10 charities covered in our in-depth interviews, 6 reported decreases from 2003 to 2006 in the number of vehicles donated, 3 reported increases and 1 did not provide data, as shown in table 2; however, the latter reported that the number of donated vehicles is about the same since the rule change. One charity’s donations declined by over 37,000 vehicles, while others realized much smaller decreases ranging from about 350 to 600 vehicles when comparing 2003 and 2006 data. However, when comparing years 2005 to 2006, 4 of the 5 charities that reported using one of the exceptions to the gross proceeds of sale rule reported an increase in the number of vehicles donated. It is important to note that not all of the vehicles donated to a charity that can use one of the exceptions to the gross proceeds of sale rule are eligible for the exception. For example, a charity that gives donated vehicles to needy individuals in direct furtherance of its charitable purpose may sell some of the donated vehicles it receives. The vehicles given to needy individuals are eligible for the exception, but the vehicles that are sold to non-needy individuals are not. All of the six officials who reported a decrease in the number of vehicles donated to their charities attributed this decrease, at least in part, to the change in the rules. Some charity officials noted other factors that also may have affected the number of vehicles donated. For example, officials at two charities noted that there is more competition in the marketplace for donated vehicles. An official at another charity noted that vehicle donations often follow the trends in new vehicle sales, and if people are not buying as many new vehicles, they are less likely to donate vehicles. An official at a different charity said that she could not attribute reduced donations fully to the rule changes because the charity had decreased its advertising of the vehicle donation program over the past few years. Charity experiences with the quality of donated vehicles also varied. Three of the 10 charities reported an increase in quality, 3 charities reported a decrease in quality, and 4 charities reported no change in the quality of vehicles donated. One of the charities that uses one of the exceptions to the gross proceeds of sale rule reported that the quality of donated vehicles generally increased and more of these vehicles could be refurbished and given to needy individuals. The official said that in 2002 the charity was only able to refurbish about 75 vehicles to give to clients while the charity gave away about 200 vehicles to clients in 2006. An official at another charity noted that the quality of the vehicles may be related to the demographics of the areas where the charity operates. In one area of the state, the charity tends to receive higher-quality vehicles because many of the area residents are retirees who are financially comfortable and able to donate their old vehicles, which are still in good condition, while donors in less-affluent parts of the state tend to donate vehicles that need to go straight to salvage yards. 
Officials from eight charities said that they will take any vehicle in any shape, although some charities stipulate that the cost to tow the vehicle cannot be more than the value of the vehicle. Some of those charities sell the vehicles to salvage yards for the price of the metal or to salvage yards that pay a flat fee to the charity for the vehicles. Officials we interviewed from 6 of the 10 charities reported a decrease in vehicle donation revenue from 2003 to 2006, 3 reported an increase, and 1 did not provide data but reported that the rule changes had no effect on vehicle donations or revenue. We did not find a consistent pattern when comparing the number of vehicles donated with the revenue from the vehicle donation program or the charity's overall revenue. For example, 3 of the 6 charities that reported decreases in revenue from vehicle donation programs from 2003 to 2006 also reported a decrease in total revenue, while the other 3 reported increases in total revenue. For 1 of these charities, the number of vehicles donated increased, while the revenue from those vehicles and total revenue decreased. In another case, a charity reported a decrease in the number of vehicles donated but increases in both the revenue from the vehicle donation program and total revenue. Some of the charity officials we interviewed said that their organizations changed their fund-raising activities to offset the loss in revenue from their vehicle donation programs. For example, one charity started to increase the number of special events, such as tennis and golf tournaments and fund-raising dinners. In 2006, this charity held over 35 special events, which raised $1.2 million. Another charity reported an increase in grant revenue to offset the loss in vehicle donation revenue but added that grant revenue is often earmarked for specific programs and activities, unlike vehicle donation revenue, which can be used for general program administration. Five of the 10 charities we interviewed reduced services or made other changes in their programs because of the loss in revenue from their vehicle donation programs. For example, 1 charity curtailed some services, such as decreasing hours of operation at some homeless shelters. Previously, this charity operated a 24-hour facility targeted at single men and women, but now the charity only operates the facility in the evenings. This same charity instituted a hiring freeze and has postponed or canceled staff merit pay increases because of the loss in revenue. Another charity reported closing some local offices and reducing its staff. However, 2 charities noted that the total revenue from their vehicle donation programs was a small percentage of their overall budgets. As such, 1 of these charities said that the decline in vehicle donation revenue did not have much of an impact on its ability to provide services. In order to offset decreased revenue from the vehicle donation programs, some charities have changed their vehicle donation business operations. Examples of changes include using minimum bids, selling vehicles online, and selling vehicles directly to the public. One charity started placing a minimum bid amount on the vehicles sold at auction to help secure a higher selling price. The charity takes the chance that the vehicles will not sell and it will have to reclaim them at the end of the auction. This practice helps ensure that the charity will receive the minimum sale amount for each vehicle at some point.
The charity also sends a representative to the auction to oversee the vehicle sales. The charity official said that his organization found that it got better prices when it directly oversaw the auction than when it left the whole process to the auction house. One charity that provides services to needy persons has sold high-end vehicles on online auction sites such as eBay rather than giving them to needy individuals. Since high-end vehicles tend to have higher upkeep costs, the charity was concerned that the needy families would not be able to fix the vehicles if they broke down. Instead, the charity sells these vehicles at an online auction and uses the money in its program. This allows the charity to obtain more revenue than at a wholesale auction, since the general public is bidding on the vehicle, not just wholesale buyers. One of the charities we interviewed said that in response to the rule changes, it began operating a used car lot at the end of 2005. As a result, its revenue from vehicle donations doubled in 2006 over that of 2005, according to the charity official. All of the charities said that administrative burdens have increased; however, some charity officials noted that they were able to accommodate the increase. The charity officials said that they had additional reporting requirements to contend with and that completing and filing the Form 1098-C was time-consuming. For example, an official at one charity stated, "It takes time to fill out the Forms 1098-C and to prepare the acknowledgment that is sent to the donors. We have to keep track of each sale in order to provide the sale information to the donor. The donor cannot claim a deduction until the vehicle is sold." The official also noted that although it was a burden, the charity has been able to handle the increase in paperwork. An official at another charity noted that the money spent on donor mailings has increased because the charity sends out a donor package with explanations of how the vehicle will be used and the acknowledgment for the donation and then must also send the Form 1098-C to the donor. According to the official, the paperwork has quadrupled but the charity can handle it. An official at a different charity stated that although it now has to file the Form 1098-C, the charity has also become more efficient at using technology and can therefore handle the increase in paperwork. Six of the charities we interviewed reported difficulties with obtaining Social Security numbers from donors, while some of the others experienced few or no problems. Two charities noted that if donors do not want to provide their Social Security numbers, then box 7a on the Form 1098-C is checked and the donors cannot claim more than $500 for the vehicle donation. Two charities said that they explain why they need the donors' Social Security numbers in the letters sent to donors. Four charities found that donors were nervous about providing their Social Security numbers, possibly for fear of identity theft. Six of the charities we interviewed noted that IRS's guidance, forms, and publications are generally clear and user-friendly. In a few cases, charity officials were confused about some guidance. For example, one charity did not know the correct timing for sending the acknowledgments to the donors, questioning whether it was 30 days from receipt of the vehicle or 30 days from when they evaluated the vehicle. Five charities noted that donors were asking more questions about the vehicle donation rule changes.
One official noted that for about the first 6 months, some donors did not know that the rules changed but now most donors understand the new rules. Officials at the remaining 5 charities said that the number of questions from donors remained the same or decreased. One official noted that most of the donors were doing their own research before deciding to donate a vehicle. The Acting Commissioner of Internal Revenue was provided a draft of this report for her review and comment. IRS provided technical comments, and we incorporated them as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were David Lewis, Assistant Director; Charlesetta Bailey; Amy Bowser; Laurie Ellington; Michele Fejfar; Robyn Howard; and Shellee Soliday. The Internal Revenue Service (IRS) took several steps to implement the new rules for claiming charitable deductions for vehicles, including providing guidance to both charities and donors. After the enactment of the new rules in October 2004, and before the new rules became effective on January 1, 2005, IRS issued a news release in November 2004 that explained the new rules. This news release explained that the charity must provide donors acknowledgment of their donations, and that if the claimed value of the donated vehicle exceeds $500 and the vehicle is sold by the charity, donors are limited to claiming the gross proceeds of the sale except in certain situations. During 2005 and 2006, IRS issued additional news releases, notices, and publications related to charitable contributions of vehicles including the following: Issued a notice in June 2005 that described the new rules, including that the deduction is limited to the gross proceeds of the sale with some exceptions; what the exceptions are and that when an exception applies, the donor may claim the fair market value of the vehicle; what information must be included in the acknowledgment charities send to donors; and what charities must report to IRS. Issued a news release in December 2005 reminding taxpayers that they must obtain a charity’s written acknowledgment of their vehicle donation and that if the deduction is for more than $500, they must attach the acknowledgment to their tax return. Issued a notice in January 2006 that describes the information reporting requirements for charities. Issued a revised publication for donors in February 2006 that describes how much they can deduct, how their deduction is generally limited to the gross proceeds of the sale unless an exception applies, what the exceptions are, how to determine fair market value if they are entitled to claim fair market value, when an appraisal is required, and what documents must be attached to their returns. 
Issued a revised publication for charities in May 2006 that describes charities' responsibilities in relation to vehicle donations, including what must be included in an acknowledgment and when it must be sent to the donor, guidance about using the exceptions to the gross proceeds of the sale rule, and what information must be reported to IRS and when it must be provided. Issued a notice in November 2006 that provided guidance related to the requirement for appraisals of noncash charitable contributions. An appraisal is required if the taxpayer is claiming more than $5,000 for a donated vehicle. Besides issuing these documents that specifically relate to charitable contributions of vehicles, IRS also included information about vehicle donations in other publications. In addition to news releases, notices, and publications, IRS also created Form 1098-C (Contributions of Motor Vehicles, Boats, and Airplanes) to be used by charities to report each contribution of a qualified vehicle with a claimed value of more than $500. Charities must send a written acknowledgment of vehicle donations to donors, and they can use Form 1098-C for this purpose. The form includes all the information required by law to be included in the written acknowledgment. Form 1098-C has four copies—Copy A, which the charity files with IRS; Copy B, which the charity sends to the donor and the donor attaches to his or her return; Copy C, which the charity sends to the donor and the donor keeps for his or her records; and Copy D, which the charity keeps for its records. (A simplified illustration of these filing requirements appears at the end of this section.) The Treasury Inspector General for Tax Administration (TIGTA) agreed in its September 2007 report that IRS took several steps to implement the new rules and that IRS properly updated tax forms and publications, provided training and information to employees to facilitate the implementation of the requirement, and added a link on the Large and Mid-Sized Business Division's Web site to guidance related to vehicle donations. However, TIGTA also found that in 80 percent of the sample cases reviewed, taxpayers, preparers, or both did not prepare/file the required forms for claiming the deduction. TIGTA recommended that IRS develop a comprehensive outreach plan for taxpayers and preparers on the reporting requirements for vehicle donations. IRS management disagreed with this recommendation because they believe the actions IRS was taking in response to a similar recommendation in an earlier TIGTA report, related to noncash contributions that did not cover vehicle donations, would address the new reporting requirement for vehicle donations. TIGTA disagreed with IRS management's response because the deficient returns included in TIGTA's sample were filed after IRS provided outreach to the public. TIGTA agreed that taking additional actions to publicize reporting requirements for noncash contributions may address the new reporting requirements for vehicle deductions but added that the outreach efforts must specifically emphasize that the documentation requirements for donated vehicles are different from those for other noncash contributions. Currently, IRS's proposed outreach plan does not emphasize vehicle donations; however, according to a Wage and Investment Policy Analyst, IRS intends to include information in this outreach effort that will address vehicle donations.
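The reporting requirements described above, which are detailed further in appendixes IV and V, can also be summarized as a short decision sketch. The Python sketch below is illustrative only; it is not an IRS tool, the function and parameter names are hypothetical, and for simplicity it treats the claimed value and the claimed deduction as the same amount.

```python
def required_paperwork(claimed_value, deduction_limited_to_gross_proceeds=True):
    """Illustrative summary of who files what for a donated vehicle (not an IRS tool)."""
    charity_steps, donor_steps = [], []
    if claimed_value > 500:
        charity_steps.append("File Form 1098-C Copy A with IRS; send Copies B and C to the donor")
        donor_steps.append("Attach Form 1098-C Copy B (or the written acknowledgment) to the return")
        if claimed_value > 5000:
            donor_steps.append("Complete Form 8283 Section B, signed by a charity official, and attach it")
            if not deduction_limited_to_gross_proceeds:
                donor_steps.append("Obtain a written appraisal of the vehicle")
        else:
            donor_steps.append("Complete Form 8283 Section A and attach it to Form 1040")
    else:
        charity_steps.append("No Form 1098-C filing; Copy C may be given to the donor as the acknowledgment")
    return charity_steps, donor_steps


# Example: a donation with a claimed value of $3,200.
charity_steps, donor_steps = required_paperwork(3200)
print(charity_steps)
print(donor_steps)
```

The sketch omits Form 8282 (Donee Information Return), which, as described in appendix V, the charity must file if it disposes of a vehicle with a claimed value over $5,000 within 3 years of the donation.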
Internal Revenue Services’ Tax Exempt and Government Entities Division (TE/GE) is responsible for determining if charities are complying with the vehicle donation tax rules and the Wage and Investment Division (W&I) and the Small Business/Self Employed Division (SB/SE) are responsible for determining if donors are complying with the vehicle donation tax rules. Officials in these divisions said that they currently are not concentrating on charity or donor compliance with the vehicle donation rules in part because of competing priorities for compliance resources. However, if a charity or donor is selected for an examination, one of the items that could be reviewed is compliance with the vehicle donation rules. The Exempt Organizations Examinations Office staff in TE/GE, which determines if charities are complying with tax rules, used input from its Strategic Planning Work Group, which brainstorms ideas to determine if issues should be addressed, and information obtained from Congress, IRS staff, TIGTA reports, and the media to prioritize its work. Using this information and because of limited compliance resources, officials of that office said that they determined that issues other than vehicle donations had a higher priority. In addition, in July 2007, the TE/GE Commissioner testified that IRS is focusing on noncash contributions, but the problems are greatest for noncash contributions for which there is no ready market. Based on our interviews with officials from 10 charities that operate vehicle donation programs, there is a ready market for donated vehicles. W&I officials said that they decided not to focus compliance resources on donated vehicles after conducting correspondence examinations in 2005 of a sample of 204 tax returns that included deductions for donated vehicles and closing 88 percent of the examinations with no changes. The average tax change made to the remaining returns was $382. They concluded that the high rate of no change cases and the low audit results in terms of dollars indicated that vehicle donations were not conducive for correspondence examinations. In addition, according to the W&I Automated Under Reporter (AUR) Program Chief, IRS has not done any studies to determine if there would be a high return on investment from instituting automated document matching related to vehicle donations. An SB/SE Division Program official said that when identifying special compliance initiatives, for example in vehicle donations, IRS considers the amount of possible increased tax revenue that could be realized from an examination in comparison to the resources spent on the examination. Currently, IRS is not developing a special compliance initiative for vehicle donations. However, in response to a TIGTA recommendation, IRS is adding an audit indicator to returns claiming charitable deductions for donated vehicles over a specific dollar threshold that did not have Form 8283 (Noncash Charitable Contributions) attached. Thus, according to the SB/SE official, if a tax return is selected for an examination for other reasons, the audit indicator will let examination staff know that they should also look at the vehicle donation. Besides recommending that IRS add an audit indicator to certain returns with vehicle donations, TIGTA also recommended in its September 2007 report on charitable donations of vehicles that IRS lower the dollar threshold for reviewing returns with unsubstantiated deductions for donated vehicles and correspond with taxpayers to obtain missing documentation. 
IRS management responded that they consider vehicles a small subset of the overall population of noncash charitable contributions, they will not lower the threshold, and they will continue to correspond with taxpayers who do not provide documentation if their noncash contribution is over a specific dollar threshold. They also said that they are exploring alternative ways to address lower-dollar vehicle donation compliance issues. According to an SB/SE senior program analyst involved with exam policy, IRS is revising Form 8453 (U.S. Individual Income Tax Declaration for an IRS e-file Return). This form will be used to transmit supporting paper documents that are required to be submitted to IRS by taxpayers who are filing electronically. The Form 8453 will include a list of paper forms that can be submitted with Form 8453. These will include Form 1098-C and Form 8283. This should remind filers that they must file these forms with IRS. TIGTA disagreed with IRS's policy of treating vehicles as a small subset of the overall population of noncash contributions. In TIGTA's opinion, since Congress specifically provided substantiation levels for vehicles that were different than the requirements for other noncash contributions, to ignore this when administering the tax law is not in keeping with Congress's intent. Our objective was to determine how charities have been affected by the 2005 changes to the amounts donors can claim on their tax returns for donated vehicles. To address this objective, we reviewed the new vehicle donation rules that became effective January 1, 2005, and compared them to the previous vehicle donation rules. In addition, we contacted 58 of the 65 charities we interviewed for our November 2003 report about the vehicle donation process, and determined which of these charities still operated a vehicle donation program as of August 2007 and if they used a contractor to operate the program. In our screening contacts, we also asked about changes in the number of donated vehicles in 2006 or 2007 in comparison to 2002. About half of the charities—30 out of the 58—provided a response to this question, and 21 of them said they had seen decreases and the rest said they had seen increases or no change in numbers of donated vehicles. We selected 10 of the 58 charities that still operated programs and 1 that no longer operated a program and interviewed officials to obtain their views about how the changes affected the number and quality of donated vehicles, vehicle donation revenues, their vehicle donation programs, and administrative burden. To ensure that we obtained the views of officials who operated a wide variety of vehicle donation programs, we selected charities that did and did not use the exceptions to the gross proceeds of sale rule and charities that used or did not use a contractor to operate their vehicle donation programs. The charities we selected also reflected changes in the numbers of donated vehicles from 2002 to 2006 or 2007 in similar proportion to what we found in our screening contacts. The information provided by charity officials is anecdotal and cannot be generalized to other charities that operate vehicle donation programs. We did not independently verify the information provided by the charity officials. In addition, to provide information on the steps IRS took to implement these changes and to ensure charity and donor compliance, we analyzed documents and interviewed officials.
To provide information about implementation, we reviewed the guidance IRS issued to implement the changes, including news releases, notices, publications, and forms and instructions, as well as TIGTA's evaluation of IRS's implementation of the changes in the rules. To provide information about IRS's efforts to ensure compliance, we analyzed TE/GE, W&I, SB/SE, and LMSB documents related to charitable contributions of vehicles and other noncash contributions, including TE/GE's implementing guidelines for its annual work plans for fiscal years 2005 through 2007, a W&I analysis of a sample of taxpayers who claimed a deduction for a vehicle donation to determine if it should conduct correspondence examinations of vehicle donations, SB/SE's analysis to determine if it should lower the threshold for examining returns that claim deductions for donated vehicles, and LMSB's proposed outreach plan for providing information about noncash contributions. We also interviewed officials from those divisions about compliance issues, and interviewed TIGTA officials and reviewed the results of TIGTA's evaluation of IRS's controls over the processing of deductions for donated vehicles reported on individual tax returns. We performed our work from June 2007 through January 2008 in accordance with generally accepted government auditing standards. As shown in table 3, the information charities must include in the written acknowledgment they send to donors varies depending on the value of the donated vehicle. If a charity intends to (1) make a significant intervening use of the vehicle, (2) make a material improvement to the vehicle, or (3) give or sell the vehicle to a needy individual, the charity must include additional information in the acknowledgment beyond the information included in table 3. For a significant intervening use, the letter must include a statement certifying that the charity intends to make a significant intervening use of the donated vehicle, a detailed statement of the intended use, a detailed statement of the duration of that use, and a certification that the vehicle will not be sold before completion of the use. For a material improvement, the letter must include a statement that the charity intends to make a material improvement to the vehicle, a detailed description of the intended material improvement, and a certification that the vehicle will not be sold before completion of the improvement. For giving or selling a vehicle to a needy individual, the letter must include a certification that the charity intends to give or sell the vehicle to a needy individual at a price significantly below fair market value and that the gift or sale is in direct furtherance of the charity's charitable purpose of relieving the poor and distressed or the underprivileged who are in need of a means of transportation. If a donor contributes a qualified vehicle to a charity with a claimed value of more than $500, the charity must file a Form 1098-C (Contributions of Motor Vehicles, Boats, and Airplanes) for that vehicle. This form has four parts: Copy A, which the charity files with IRS; Copy B, which the charity sends to the donor and the donor is to attach to his/her tax return; Copy C, which the charity sends to the donor for his/her records; and Copy D, which the charity retains for its records. Charities can use Copy B as the contemporaneous written acknowledgment that charities must send to donors.
Charities are not to file Form 1098-C for contributions of qualified vehicles with a claimed value of not more than $500; however, they may provide the donor with Copy C as the acknowledgment. Donors must attach Copy B, or a copy of the acknowledgment if the charity does not use Copy B for this purpose, to their returns if they are claiming a deduction of more than $500. In addition, if a charity receives a donated vehicle that has a claimed value greater than $5,000 and sells or otherwise disposes of the vehicle within 3 years after the donation, the charity must file Form 8282 (Donee Information Return) within 125 days of the disposition. The charity must also provide a copy of the filed Form 8282 to the original donor. Besides attaching the acknowledgment, donors must also complete Section A of Form 8283 (Noncash Charitable Contributions) if the deduction they are claiming for a donated vehicle is greater than $500, but not more than $5,000, and attach it to Form 1040 (U.S. Individual Income Tax Return). If the deduction the donor is claiming is greater than $5,000, he/she must complete Section B of the Form 8283, which must include the signature of an authorized official of the charity, and attach it to the return. In addition, if the deduction is over $5,000 and the deduction is not limited to the gross proceeds from the sale of the vehicle, the donor must get a written appraisal of the vehicle. Table 4 lists the donor's recordkeeping and filing requirements.

We profiled 10 charities that operated a vehicle donation program as of August 2007. Tables 5 through 14 highlight information provided to us in interviews with charity officials about their vehicle donation programs.

Effects of rule changes reported by charity: Number and quality of vehicles donated. The total number of donated vehicles has decreased since the rules changed. The quality of vehicles has stayed about the same since before the rules changed. The quality of the vehicles may be related to the demographics of the state. In one area of the state where this charity operates, the charity tends to receive higher-quality vehicles because many of the residents are retirees who are financially comfortable and able to donate their old vehicles, which are still in good condition, while donors in less-affluent parts of the state tend to donate vehicles that need to go straight to salvage yards. The net revenue from the donated vehicles decreased since the rules changed. To increase other revenue-producing activities to offset decreases in revenue from the vehicle donation program, the charity started to increase the number of special events, such as tennis and golf tournaments and fund-raising dinners. In 2006, the charity held over 35 special events, which raised $1.2 million. The charity decreased the number of staff working for the vehicle donation program and streamlined the business operations by consolidating all locations that ran the vehicle donation program in the state into one office. The charity uses a scrap recovery company for the vehicles that need to go to the salvage yard. Administrative burden increased because of more time-consuming paperwork. The IRS guidance and documents have improved since the vehicle donation rules originally changed and they are now clearer and simpler to use. Donors are asking about the same number of questions as before the rules changed. Many vehicle auctions that include vehicles donated to charities are only open to dealers, not the public.
Consequently, the cars are not selling for what they would in a retail market.

Effects of rule changes reported by charity: Number and quality of vehicles donated. The charity did not experience any change in the number or quality of vehicles donated even though there are now more programs with vehicle donations in its geographical area. The rule changes have not affected donors' decisions about whether to donate a vehicle based on what they can claim on their tax returns. The net revenues from the vehicle donation program have not decreased. Business operations. The charity repairs most of the donated vehicles at an auto-repair shop that it uses to train people in automotive repair. It then sells the vehicles to the public for $800 to $2,500. The charity operated in the same manner prior to the rule change. Administrative burden increased slightly, but officials are able to handle the increase. Some donors do not want to provide their Social Security numbers. The charity informs the donors that if they do not provide their Social Security numbers, then they cannot claim more than $500 on their tax returns. It is a burden on the charities to obtain the donors' Social Security numbers. The guidance on the rule changes is better than it was in the past because it is more user-friendly. The charity has not noticed any increase in the number of questions donors ask. Donors do not ask many questions.

Effects of rule changes reported by charity: Number and quality of vehicles donated. The total number of vehicles donated has decreased from 2006 to 2007. Some of the decrease in the number of donated vehicles may be because of the slowing of the economy, which results in fewer people buying new vehicles. Vehicle donations often follow the trend in new vehicle sales. The charity official said that the quality of those vehicles has increased. The increase in net revenue from the vehicle donation program is in part because of the charity's ability to use one of the exceptions to the gross proceeds of sale rule. This charity is a licensed motor vehicle dealer in its state. The charity operated in the same manner prior to the rule changes. The charity spends a lot of money on donor mailings and has experienced an increase in administrative burden because the charity sends out a donor package with explanations of how the vehicle will be used and the acknowledgment for the donation, and then must also send the Form 1098-C to the donor. The paperwork has quadrupled but the charity can handle it, according to the official. Donors are reluctant to provide their Social Security numbers. To help deal with this, the charity explains in a letter the need for the donor's number in order to claim a deduction for a donated vehicle. Donors are asking more questions since the change in the rules than they did before. The charity official was concerned about the lack of oversight of third-party contractors.

Effects of rule changes reported by charity: Number and quality of vehicles donated. The number of vehicles donated decreased. The quality of the vehicles donated has not been an issue since the rule changes. The net revenue from donated vehicles has decreased. Revenue from vehicles decreased more than $3,000,000 in both 2005 and 2006 in comparison to 2004. The decline in total charity revenue in 2005 was greater than the decline in net vehicle donation revenue. This was in part because of a decrease in bequests.
To offset the decrease in revenue from vehicle donations, the charity decreased some services, such as the hours of operation for a homeless shelter from 24 hours per day to the evening hours only. The charity also was unable to expand existing services or start new services. In addition, the charity reduced staff, instituted a hiring freeze, and has not given or has postponed merit pay raises. An increase in grants and bequests has offset some of the loss from vehicle donations; however, this revenue is often earmarked for certain activities or programs and may not be used for general program administration. The charity operates its own auctions, which it also did prior to the rule changes. The number of forms the charity must complete has increased, but the charity has also become more efficient in using technology to handle the increase. Donors are nervous about providing their Social Security numbers for fear of identity theft. The charity informs the donors that if they do not provide their Social Security numbers, then they cannot claim more than $500 on their returns. The officials did not identify anything in the IRS publications that was unclear. The donors do not ask too many questions, mostly doing their own research before donating a vehicle. The charity operates its vehicle donation program in-house and does not share the revenue from the program with a for-profit entity. Thus, all the revenue from the vehicle donation program is used to further the charitable purpose. In the officials' opinion, the donors of vehicles to charities that operate programs in-house and retain at least 80 percent of the net proceeds from the sales should be able to claim the fair market value of their vehicles on their tax returns. IRS could revise its guidance for the interpretation of the furtherance of charitable purposes exception to incorporate this concept. The officials have developed a legislative proposal that incorporates this concept and presented it to their state's congressional delegation.

Effects of rule changes reported by charity: Number and quality of vehicles donated. Although the changes may have had some effects on the number of vehicles donated, the charity has decreased its efforts to advertise this program in the past couple of years. The quality of the vehicles has decreased, and the charity has received more calls from donors of vehicles that need to go to the salvage yard. Because the charity raises most of its revenues through other means, the lost revenue from the vehicle donation program did not have a large impact. No changes have been made to the business operations; however, the charity used to advertise in newspapers and had banners around town. It no longer uses those methods to advertise. Recordkeeping and reporting requirements have increased. IRS publications and guidance were not disseminated in a timely manner. Donors are asking more clarifying questions about claiming a deduction for a donated vehicle, but overall donors are asking about the same number of questions since the rules changed as they did before.

Effects of rule changes reported by charity: Number and quality of vehicles donated. Initially, the number of vehicles decreased; however, the charity expects to receive about 330 vehicles in 2007. The quality of the vehicles has increased since the rules changed.
For example, in 2002 the charity was only able to use about 75 vehicles to give to low-income families or individuals, but in 2006 it was able to give about 200 vehicles to low-income families or individuals. Thus, although the number of donated vehicles increased in 2006, it sold fewer vehicles, resulting in a decrease in net revenue from donated vehicles. The charity did not need to change its revenue-producing activities because of the program it operates in conjunction with the state. Business operations. The charity inspects and repairs donated vehicles, which are given to low-income families and individuals. A department in the state that trains and finds jobs for low-income people pays the charity a flat fee per vehicle; the department's staff refers selected families or individuals to receive a vehicle from the charity for $1 after the vehicle has been repaired. The charity operated in the same manner prior to the rule change. The charity sells high-end vehicles on online auction sites because low-income families or individuals may not be able to maintain these vehicles. Since high-end vehicles tend to have higher upkeep costs, the charity was concerned that the needy families would not be able to fix the vehicles if they broke down. Selling these vehicles online allows the charity to obtain more revenue than at a wholesale auction, since the general public is bidding on the vehicle and not just wholesale buyers. The charity uses the revenue to finance its program. The amount of paperwork has increased, but the charity has been able to handle the increase. The charity has not experienced any reluctance from donors when requesting Social Security numbers. The charity explains the need for the numbers in the letter sent to the donors. Although IRS publications and notices have been generally clear, the officials were confused about the correct timing for sending the acknowledgments to the donors, questioning if it was 30 days from receipt of the vehicle or 30 days from when the charity evaluated the vehicle. Donors are asking more questions; however, the charity is now receiving fewer calls from donors who want to donate vehicles that need to go to the salvage yard.

Effects of rule changes reported by charity: Number and quality of vehicles donated. The quantity and quality of vehicles have decreased. The number decreased because donors are not able to receive fair market value for their donations. Even though the number of vehicles donated in 2007 increased compared to 2006, more of the vehicles need to be sent to salvage yards. Net revenue from vehicle donations decreased. The vehicle donation program contributes only a small part to the charity's overall revenue. Business operations. The charity recently changed its business operations from operating the program in-house to contracting with a third party. Officials decided to use a third-party contractor because it was not cost-effective to operate in-house because of the decrease in donated vehicles. Administrative burden has increased because it takes time to complete the Form 1098-C and to prepare the acknowledgments that are sent to the donors. Furthermore, the charity has to keep track of each sale in order to provide donors with the required information about the sale of the vehicle. However, the charity official noted it was able to handle the increase in paperwork. Donors were asking more questions about the rule changes in the first 6 months after the change. Now, most people understand the rule changes.
Effects of rule changes reported by charity: Number and quality of vehicles donated. The number of vehicles donated decreased. The quality of donated vehicles has remained constant. However, in 2007, the charity was receiving about $130 more per vehicle because of an increase in the value of steel and a decrease in fees associated with the auctioning of vehicles. Revenue from donated vehicles as a percentage of gross income has decreased from almost 34 percent in 2003 to less than 19 percent in 2006. Total charity revenues increased because of changes in other fund-raising activities. The charity increased other revenue-producing activities, such as charity walks, golf tournaments, and other special events, to help offset the revenue losses from decreased vehicle donations. The reduced revenue from vehicle donations has affected some local affiliates and led to reductions in staff and some office closures. The charity changed its marketing for vehicle donations and shifted the focus more to the Internet. More donations are occurring from online donors. There is more competition from other charities; therefore, more money is now being spent on marketing the program. Because of the increase in the price of steel, the charity now sells a higher percentage of vehicles at auction instead of sending some to scrap yards. Administrative burden has increased, specifically with regard to donors not wanting to provide Social Security numbers over the telephone. Based on a survey the charity conducted, donors prefer to provide their Social Security numbers in writing rather than over the telephone. This results in more work for staff. Generally, the IRS publications and notices have been clear. Some affiliates thought the instructions for electronic filing could be made clearer. Generally, donors are asking fewer questions about the donation process than they were right after the rule changes took effect. More vehicles are sent to auction because the price of steel has increased, which helps ensure that the charity receives more revenue per vehicle than if the vehicle was sent to the scrap yard. However, there are also increased fees associated with auctioning vehicles versus sending them to scrap yards, so this decreases the profit ratio.

Effects of rule changes reported by charity: Number and quality of vehicles donated. The charity received fewer donated vehicles. This may be in part because of the rule changes and in part because of an increase in competition for the donated vehicles. The quality of the vehicles has decreased. For example, in 2006 the charity spent about $156,000 on parts in order to repair donated vehicles, which is more than it spent in previous years. This may be because of potential donors selling or trading higher-value vehicles instead of donating them because they can get more money for the vehicles. The revenue from donated vehicles has decreased since the rules changed. The charity is licensed as a used car dealer and was also licensed as a used car dealer prior to the rule changes. The charity sells about 45 percent of donated vehicles at retail prices. Most of the vehicle donation revenues are from the vehicles sold at a used car lot. The charity makes repairs to some of these vehicles before selling them. As a result, some vehicles are eligible for the material improvement exception, and the donors can claim fair market value. The other vehicles are sold to a wholesaler or to salvage yards for the value of the metal.
Administrative burden has increased because of the notification and filing requirements. Some donors have not wanted to provide their Social Security numbers. The IRS publications and guidance are clear. The charity received a lot of questions about the changes to the rules in 2005, but now donors are not asking as many questions. The charity does not have any plans to eliminate the vehicle donation program. Even though it now receives fewer vehicles and lower-quality vehicles, it believes that the program is still worth operating. This is in part because of the flexibility in how the charity can use the revenues from the vehicle donation program. It uses the revenues from the vehicle donation program to pay for services that are not covered by federal grants.

Effects of rule changes reported by charity: Number and quality of vehicles donated. The number of vehicles donated decreased; however, the quality of the individual vehicles has increased. The number of vehicles donated that could be sold to low-income individuals or families has about doubled since the changes in the rules. The charity has doubled revenues from the vehicle donation program since it started operating a used car lot at the end of 2005. The charity sells vehicles to low-income families for low amounts, generally in the range of $900 to $1,100. Recipients of the vehicles must be sponsored by a social service agency and live in certain geographic areas. The charity operated in the same manner prior to the rule change. The charity receives a higher dollar figure per vehicle than many of the other large charities receive because it sets a minimum value on all of the vehicles sent to wholesale auction. At auction, if a vehicle does not receive a bid equal to or greater than the minimum, it is not sold and the charity will try to sell it again at another auction. This ensures that the sale of the vehicle will bring in more revenue. The charity began operating a used car lot at the end of 2005. Administrative burden has increased because of the reporting requirements. About 1 in 20 donors do not want to provide their Social Security numbers. IRS guidance is fairly clear; however, the charity was disappointed by the lack of publicity for the use of the exceptions to the gross proceeds of sale rule. Because of this, the charity had to advertise the exceptions and explain that donors may be able to claim greater deductions if they donate their vehicles to it rather than to another charity. After this, the number of donated vehicles began to increase. The charity receives more questions from donors, mostly relating to how much can be claimed on donors' tax returns.

In 2003, GAO found that many taxpayers' estimates of the value of their vehicles, claimed as tax deductions, were in excess of the charities' subsequent sales of the vehicles. Subsequently, effective January 1, 2005, the rules related to the amount taxpayers can claim as a deduction on their tax returns for vehicles donated to charities changed. Under the new rules, in many cases the amount taxpayers are allowed to claim as a deduction is less than they could have claimed before the changes. Some charities that used vehicle donations as a revenue source said that the changes could lead to fewer donated vehicles and reduced revenues. GAO was asked to determine how charities have been affected by the 2005 changes. GAO discussed the rule changes with Internal Revenue Service (IRS) officials and the impact of the changes with representatives of several charities.
GAO judgmentally selected 10 charities from among the 65 contacted in the course of the 2003 GAO study. The experiences of these charities cannot be generalized to all charities because the selected charities were not drawn from a statistical sample of all charities with vehicle donation programs. The selected charities GAO contacted reported mixed experiences after the rules for claiming a tax deduction for donating a vehicle were changed. Prior to the law change, taxpayers could claim estimated fair market value for any donated vehicle. However, beginning January 1, 2005, taxpayers are generally limited to deducting only the sales price of the vehicle when a donated vehicle is sold by the charity. The 10 charities GAO contacted reported varied experiences in the number of, quality of, and revenue from donated vehicles; some changes in their business operations; and mixed experiences with administering the changes in the rules. Of these 10 charities, when comparing 2003 to 2006, 6 reported decreases in the number of vehicles donated and some of these decreases were substantial. Also, 3 charities reported an increase, and 1 did not provide data. Three reported an increase in quality, 3 a decrease, and 4 no change. Six reported a decrease in vehicle donation revenue from 2003 to 2006, 3 an increase, and 1 did not provide data. GAO did not find a consistent pattern when comparing the number of donated vehicles with the revenue from the vehicle donation program or a charity's overall revenue. In response to the rule changes, some charities changed their fund-raising activities and some decreased services, such as reducing the hours for providing services. Examples of business operations changes include using minimum bids at auctions, selling vehicles online, and selling vehicles directly to the public instead of through wholesalers. Finally, all 10 reported increased administrative burden due to increased reporting requirements, but they were able to accommodate the increase in paperwork.
The Army’s reserve components are the Army Reserve and Army National Guard. The Army Reserve is comprised of units that support combat forces and is restricted to federal missions. The Guard has both combat and support units and federal and state responsibilities. The Guard is to be organized and resourced for federal wartime missions, according to Guard policy. Federal missions range from participating in full-scale military conflicts to operations other than war, backfilling active forces deployed on operational missions, providing training support to the active component, supporting domestic disaster relief and emergency operations under federal control, and providing strategic reserve forces to meet unknown contingencies. The Guard’s state missions typically involve support for state officials and organizations during domestic civil emergencies and natural or man-made disasters. The size of DOD’s forces and budgets has declined with the end of the Cold War and pressures to reduce the deficit. In 1989 the Guard had about 457,000 personnel. By the end of fiscal year 1996, the Guard plans to have 373,000 personnel in 54 separate state and territorial military commands in the 50 states, District of Columbia, Puerto Rico, U.S. Virgin Islands, and Guam. About 161,000 Guard personnel are to be in 42 combat brigades, including 67,000 in 15 enhanced brigades. The remaining 212,000 personnel are in headquarters units and units that support combat. By the end of fiscal year 1999, the Guard plans to be down to 367,000 personnel, with about 187,000 personnel in the combat units, including the 67,000 in the enhanced brigades. The Guard’s 42 combat brigades are organized as follows: 8 divisions comprised of 3 brigades each, 15 enhanced brigades, and 3 separate combat units, consisting of 2 separate brigades and a scout group. In addition to the combat units, the Guard has elements that support combat units, such as engineers, military police, military intelligence, and transportation. The enhanced brigade concept, described in DOD’s 1993 Report on the Bottom-Up Review, became effective on October 1, 1995. The concept provides for 15 separate brigades that are not part of a divisional structure during peacetime and that are required to be ready to deploy at the Army’s highest readiness level within 90 days of mobilization. The enhancements, according to the bottom-up review, are training and resources above those provided to the Guard’s other combat forces. The enhancements are to enable the 15 brigades to achieve peacetime readiness goals so that they can meet their deployment criteria by the end of fiscal year 1998. The President’s budget request for fiscal year 1996 included $5.5 billion for the Guard, which represents about 2.2 percent of DOD’s budget request and 9.3 percent of the Army’s request. About $1.7 billion of the $5.5 billion request is for the Guard combat units. The remaining $3.8 billion is for such organizations as headquarters units and elements that support combat. These other organizations receive most of the funds because they include support elements that are the first to deploy. For fiscal year 2001, the Guard’s budget is projected to be about $6 billion, with about $1.8 billion for combat units. Table 1 further breaks down these budgets. In March 1993, DOD initiated a comprehensive bottom-up review to assess the nation’s defense strategy, force structure, and budgets to counter regional aggression in the post-Cold War environment. 
DOD judged it prudent to maintain the capability to fight and win two nearly simultaneous major regional conflicts. To execute the two-conflict strategy, DOD determined that the Army must maintain 10 divisions in the active forces augmented by 15 reserve enhanced brigades and associated support forces. The bottom-up review report stated that the reserve component must adapt to meet new challenges. Accordingly, this means making smarter use of reserve component forces by adapting them to new requirements, assigning them missions that properly use their strengths, and funding them at a level consistent with their expected missions during a crisis or war. The bottom-up review concluded that the Army’s reserve components should be reduced to 575,000 personnel by 1999—a 201,000 decrease since fiscal year 1989. The review specified that the reserve components’ combat structure would be about 37 brigades, 15 of which would be enhanced. A group of senior officers of the Army, its reserve components, and organizations that represent Army component issues was tasked with providing a recommendation to the Secretary of the Army on the allocation of the 575,000 personnel between the Guard and the Army Reserve. The group allocated 367,000 personnel to the Guard and 208,000 to the Army Reserve. In addition to the 15 enhanced brigades specified in the bottom-up review, the Guard, in concert with the Army, determined that it would retain 8 combat divisions, 3 separate combat units, and numerous support units. The Guard’s eight combat divisions and three separate combat units are not required to accomplish the two-conflict strategy, according to Army war planners and war planning documents that we reviewed. The Army’s war planners at headquarters and at U.S. Forces Command stated that these forces are not needed during or after hostilities cease for one or more major regional conflicts. Moreover, the Joint Chiefs of Staff have not assigned the eight combat divisions or the three separate combat units for use in any major regional conflict currently envisioned in DOD planning scenarios. The missions for these divisions and units, according to the bottom-up review, include (1) providing the basis for rotation when forces are required to remain in place over an extended period after the enemy invasion has been deterred, (2) serving as a deterrent hedge to future adversarial regimes, and (3) supporting civil authorities at home. According to Army officials involved in the review, there was no analysis to determine the appropriate number of forces required to perform these missions. The Guard’s 15 enhanced brigades are the principal reserve component ground combat forces. The bottom-up review report states that one important role for these brigades is to supplement active component divisions, should more ground combat power be needed to deter or fight a second major regional conflict. Although the bottom-up review specified a need for 15 enhanced brigades and the Joint Chiefs of Staff have made all 15 brigades available for war planning purposes, the planners have identified requirements for less than 10 brigades to achieve mission success in the war fight. However, these plans are evolving and the number of brigades required may change. This lesser number of brigades is generally consistent with the required reserve combat forces included in the Army’s current Total Army Analysis process. That process projects the Army’s future support needs based on the future combat force. According to U.S. 
Forces Command planners, the enhanced brigades that are not required to achieve mission success in the war fight are considered to be strategic reserve that can either be used for occupational forces once the enemy has been defeated or for other missions. Other roles would be to replace active forces stationed overseas or engaged in peacekeeping operations should the replaced forces be needed for a regional conflict. The Guard has a wide range of state missions. These missions include the defense of states or other entities from disorder, rebellion, or invasion; emergency and disaster relief; humanitarian assistance; and community support activities. In crisis situations, the governors primarily use the Guard to supplement civil agencies after those agencies have exhausted their resources. According to Guard officials at the state level, the state expects the local authorities to respond first, followed by county, and then state resources. If the crisis exceeds the state’s civil capabilities, the Guard can be called on for added support. For example, needs far exceeded the state’s civil agencies’ capabilities after Hurricane Andrew devastated south Florida. Therefore, the Governor called up almost 50 percent of Florida’s Army and Air Guard personnel for such tasks as providing temporary shelters, removing debris, distributing food and water, and providing security. For situations beyond a state’s capabilities, the Governor can ask the President to declare a federal emergency. When this declaration is made, the Federal Emergency Management Agency becomes the coordinating agency between state and federal agencies. For example, Florida’s immediate assistance needs after Hurricane Andrew exceeded the capacity of the state’s resources, including its Guard forces. As a result, the Governor requested and received a presidential disaster declaration that entitled the state to obtain federal funding and assistance from federal agencies and the active military. The federal government has added several domestic initiatives to the Guard’s federally funded state missions. For example, newly acquired initiatives include drug interdiction and counter-drug activities, drug demand reduction programs, medical assistance in underserved areas, and the Civilian Youth Opportunities program. Although federally funded, the state governors authorize missions like these under the control of authorized Guard officials. Given the concerns for potential hardships to Guard members, their families, and their employers, most state Guard leaders plan to rotate Guard members used in state missions lasting longer than 7 days. For example, in both the Midwest floods of 1993 and Hurricane Andrew in 1992, Guard personnel were rotated, which resulted in the use of a greater number of personnel, but for shorter durations. Guard officials at the state level said that general soldier skills, such as discipline and following a chain of command, are often all that are needed to satisfy state missions. In the specialized skills areas, they said that support skills and equipment such as engineering, transportation, medical support, aviation, and military police are most often needed. In the states we visited, we were told that Guard members were asked to perform a variety of tasks on state active duty. For example, in California, the Guard provided homeless shelters for people displaced by major earthquakes, patrolled the streets of Los Angeles during a riot, and provided support to firefighters during wild fires. 
In Kansas and Utah, Guard members filled sandbags to fight flooding. In the previously mentioned study, which was required by the National Defense Authorization Act for Fiscal Year 1994, RAND reported that the Army and Air Guard in fiscal year 1993 experienced the highest number of state active duty days in over 10 years. The 54 state and territorial Guard entities reported spending over 460,000 duty days on state missions, involving over 34,000 members of the total Guard. This equated to about 6 percent of the total available Army and Air Guard personnel. Almost 50 percent of the Guard's use that year was due to the Midwest floods. As might be expected, Guard usage for state missions varies from state to state and year to year. For example, RAND reported that the Florida Army and Air Guard were on state active duty in 1992 for Hurricane Andrew for over 80 days, with a peak personnel commitment of some 6,200 out of a total strength of about 13,500, or about 46 percent. RAND also reported that New York, with an Army and Air Guard strength of about 20,000, had its highest Guard usage in 6 years in 1994. During that year, the state used about 6,000 Guard workdays, which amounts to about 1 state active duty day per year for about 30 percent of the state's total Army and Air Guard strength. This latter experience is typical of many states during the same period. RAND reported that, nationally, state demands on the Army and Air Guard are not significant. Moreover, the Guard's own data do not show sizable demands on its personnel and resources for state missions. As such, RAND concluded that, even in a peak use year, state missions would not require a large portion of the Guard and should not be used as a basis for sizing the Guard force. It also concluded that the Guard is large enough to handle both state and federal missions, even in the unlikely, but possible, event of simultaneous peak demands.

The Army is studying the redesign of the Guard's combat structure to meet critical shortages in support capabilities. In May 1995, the Army's Vice Chief of Staff chartered a work group to develop alternatives and make recommendations for using the Guard's combat structure to meet critical shortages in support forces. According to the group's charter, the Army has undertaken this effort because it is critically short of support forces, but continues to maintain Guard combat units that are excess to war-fighting requirements. The work group was to review the Army's future unresourced support requirements; review the structure and missions of the Guard combat elements and develop options for changing the structure to meet future Army requirements; conduct a resource feasibility assessment of the options to determine whether the Army possesses or is able to program the resources needed to equip and maintain the redesigned structure; and refine and prioritize the options for presentation to the Army leadership by March 1996. The group's charter established certain parameters, such as (1) the Guard's planned end strength will not change, (2) the redesign efforts will consider the Guard's need to remain responsive to state missions, and (3) the redesign effort is not intended to reduce the number of Guard division headquarters. Previous studies have also recognized the need for changes to the Guard's combat structure. In December 1992, we reported that opportunities existed to break up some Guard divisions and convert some combat units to support units.
In March 1995, we reported that the Army would be challenged to provide sufficient numbers of certain types of support units for two major regional conflicts because it had difficulty providing such units in the single conflict Persian Gulf War. We suggested that an option for augmenting the Army’s support capability is to use existing support capability in the eight Guard divisions that DOD did not include in the combat force for executing the two-conflict strategy. We recommended that the Secretary of the Army (1) identify the specific unresourced support requirements that could be met using Guard divisional support units and the personnel and equipment in these units and (2) work with the Guard to develop a plan for employing this capability. The work group is considering this recommendation as one of the options. In accordance with the National Defense Authorization Act for Fiscal Year 1994, DOD established a Commission on Roles and Missions of the Armed Forces, which looked at, among other things, the better use of reserve forces. The Commission determined that the Army’s combat structure exceeds the requirements for a two major regional conflict scenario and concluded that reserve component forces with lower priority tasks should be eliminated or reorganized to fill shortfalls in higher priority areas. In its report, the Commission cited the example of the Army’s eight Guard divisions that were required for possible war with the former Soviet Union, but are not needed for the current national security strategy. The report noted that the bottom-up review did assign the eight Guard divisions secondary missions such as serving as a deterrent hedge to future adversarial regimes; however, it also said that eight divisions is too large a force for these secondary missions. The Commission’s report also noted that at the same time, the Army estimated that it is short 60,000 support troops for a two regional conflict strategy. The Army’s most recent Total Army Analysis process also projects a shortage of 60,000 support troops, primarily in transportation and quartermaster units. The Commission report also stated that, even after the support shortfalls were filled, there would still be excess combat spaces in the total Army and recommended eliminating these spaces from the active or reserve components. The end of the Cold War and budgetary pressures have provided both the opportunity and the incentive to reassess defense needs. Because the Guard’s combat forces exceed projected war requirements and the Army’s analysis indicates a shortage of support forces, we believe it is appropriate for the Army to study the conversion of some Guard combat forces to support roles. Therefore, we recommend that the Secretary of Defense, in conjunction with the Secretary of the Army and the Director, Army National Guard, validate the size and structure of all of the Guard’s combat forces and that the Secretary of the Army prepare and execute a plan to bring the size and structure of those combat forces in line with validated requirements. If the Army study suggests that some Guard combat forces should be converted to support roles, we recommend that the Secretary of the Army follow through with the conversion because it would satisfy shortages in its support forces and further provide the types of forces that state governors have traditionally needed. Moreover, to the extent that there are Guard forces that exceed validated requirements, the Secretary of Defense should consider eliminating them. 
DOD agreed with our findings and recommendations. It stated that before its review is finalized, all shortfalls will be validated against requirements set forth in the national military strategy. It also stated that until ongoing studies are completed, it is premature to restructure or eliminate Army National Guard units. DOD's comments are shown in appendix I.

To determine the federal and state roles and missions of the Guard's combat units, we interviewed cognizant officials and obtained and analyzed documents from DOD, the Army, the Army National Guard, and RAND in Washington, D.C.; U.S. Army Forces Command, Fort McPherson, Georgia; and State Area Commands and combat units in Alabama, California, Kansas, South Carolina, Utah, Virginia, and Washington. To determine the efforts by DOD and the Army to redesign the Guard combat divisions, we interviewed cognizant officials and obtained and analyzed documents from DOD, the Army, the Army National Guard, U.S. Army Forces Command, and the U.S. Army Training and Doctrine Command's Force Development Directorate, Fort Leavenworth, Kansas. We conducted this review from February to November 1995 in accordance with generally accepted government auditing standards.

We are providing copies of this report to appropriate House and Senate committees; the Secretaries of Defense and the Army; the Director of the Army National Guard; and the Director, Office of Management and Budget. We will also provide copies to other interested parties upon request. Please contact me at (202) 512-3504 if you or your staff have any questions concerning this report. The major contributors to this report were Robert Pelletier, Leo Sullivan, Lee Purdy, and Ann Borseth.

Pursuant to a congressional request, GAO reviewed the: (1) roles and missions of the Army National Guard's combat units; and (2) efforts by the Department of Defense (DOD) and the Army to redesign the Guard's combat divisions.
GAO found that: (1) despite reductions, the Army National Guard's combat units may still be too large for projected war requirements; (2) the Guard's eight remaining combat divisions and three separate combat units are not needed to meet the two-conflict strategy or any probable conflict scenarios; (3) DOD considers the excess Guard forces to be a strategic reserve that could be used as occupational and rotational forces, a deterrent against aggressive regimes, and support for civilian authorities, but it did not present any analytical support for the continued force levels; (4) DOD has not finalized plans for 15 enhanced Guard combat brigades, and fewer than 15 may be needed; (5) state missions often require more support skills and equipment than Guard combat forces, which usually supplement other state resources in emergencies; (6) over the last decade, states have needed only a fraction of the Guard's personnel to meet their emergency requirements; (7) the Army is studying ways that the Guard could meet critical shortages in its support capabilities; and (8) DOD believes that since Guard forces exceed combat needs, reserve components with lower priority tasks should be eliminated or reorganized to meet higher priorities.
Responsibility for helping to prepare members of the community for all hazards is shared by federal, state, local, and tribal entities, and nongovernmental organizations. At the federal level, FEMA is responsible for developing national strategies, policies, and guidelines related to emergency preparedness, response, and recovery. To achieve the goals of a national strategy, however, requires a close relationship with nonfederal partners, based on the premise that resilient communities—those that can quickly recover from a disaster—begin with prepared individuals and depend on the leadership and engagement of local government and other community members. According to DHS, emergency management agencies at the jurisdiction level are to develop preparedness plans for their localities that are consistent with plans at the state and federal levels. States submit requests for federal Homeland Security funding for state, local, and regional projects, including projects related to community preparedness. FEMA is required under the Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Act) to establish a National Preparedness System to ensure that the nation has the ability to prepare for and respond to disasters of all types, whether natural or man-made, including terrorist attacks. The Community Preparedness Division is responsible for leading activities related to community preparedness, including management of the Citizen Corps program. According to fiscal year 2008 Homeland Security Grant Program guidance, the program is to bring together community and government leaders, including first responders, nonprofit organizations, and other community stakeholders as a Citizen Corps Council to collaborate in involving community members in emergency preparedness, planning, mitigation, response, and recovery. Councils and partner programs register online to be included in the national program registries. The Community Preparedness Division also supports the efforts of non-DHS federal “partner programs,” such as the Department of Health and Human Services’ Medical Reserve Corps, which promote preparedness and the use of volunteers to support first responders. The CERT program’s mission is to educate and train people in basic disaster preparedness and response skills, such as fire safety, light search and rescue, and disaster medical operations, using a nationally developed, standardized training curriculum. Trained individuals can be recruited to participate on neighborhood, business, or government teams to assist first responders. According to FEMA officials, training is conducted by local government, typically the fire or police department, which also organizes and supports teams of the trained volunteers in neighborhoods, the workplace, and high schools. The mission of the Fire Corps program is to increase the capacity of fire and emergency medical service departments through the use of volunteers in nonoperational roles and activities, including administrative, public outreach, fire safety, and emergency preparedness education. FEMA is also responsible for a related program, the Ready Campaign, which works in partnership with the Ad Council, an organization that creates public service announcements (PSA), with the goals of raising public awareness about the need for emergency preparedness, motivating individuals to take steps toward preparedness, and ultimately increasing the level of national preparedness. 
The program makes preparedness information available to the public through its English and Spanish Web sites (www.ready.gov and www.listo.gov), through printed material that can be ordered from the program or via toll-free phone lines, and through PSAs. The Ready Campaign message calls for individuals, families, and businesses to (1) get emergency supply kits, (2) make emergency plans, and (3) stay informed about emergencies and appropriate responses to those emergencies. FEMA faces challenges in measuring the performance of local community preparedness efforts because it lacks accurate information on those efforts. FEMA is also confronted with challenges in measuring performance for the Ready Campaign because the Ready Campaign is not positioned to control the placement of its preparedness messages or measure whether its message is changing the behavior of individuals. According to FEMA officials, FEMA promotes citizen preparedness and volunteerism by encouraging collaboration and the creation of community Citizen Corps, CERT, and Fire Corps programs. FEMA includes the number of Citizen Corps Councils, CERTs, and Fire Corps established across the country as its principal performance measure for community preparedness. However, FEMA faces challenges ensuring that the information needed to measure the number of established, active units is accurate. In our past work we reported on the importance of ensuring that program data are of sufficient quality to document performance and support decision making. FEMA programs report the number of local units registered nationwide as a principal performance measure, but FEMA does not verify that the registration data for Citizen Corps Councils, CERT, or Fire Corps volunteer organizations are accurate. Our work showed that the number of active units reported may differ from the number that actually exists. For example, as of September 2009 we found the following. Citizen Corps reported having 2,409 registered Citizen Corps Councils nationwide that encompass jurisdictions where approximately 79 percent of the U.S. population resides. However, of the 17 organizations registered as councils that we contacted during our site visits, 12 were active and 5 were not active as councils. The CERT program reported having 3,354 registered CERTs. Of the 12 registered CERTs we visited, 11 reported that they were actively engaged in CERT activities, such as drills and emergency preparedness outreach, or had assisted in an emergency or disaster. The 12th registered CERT was no longer active. State officials in two of the four states we visited also said that the data on the number of registered programs might not be accurate. A state official responsible for the Citizen Corps Council and CERT programs in one state estimated that as little as 20 percent of the registered councils were active, and the state subsequently removed more than half of its 40 councils from the national Web site. Officials in another state said that the database is not accurate and they have begun to send e-mails to or call local councils to verify the accuracy of registrations in their state. These officials said that they plan to follow up with those councils that do not respond, but they were uncertain what they planned to do if the councils were no longer active. These results raise questions about the accuracy of FEMA’s data on the number of councils across the nation, and the accuracy of FEMA’s measure that registered councils cover 79 percent of the population nationwide. 
Although changes in the number of active local programs can be expected based on factors including changes in government leadership, voluntary participation by civic leaders, and financial support, a FEMA official responsible for the Citizen Corps program acknowledged that the current program registration lists need to be verified to determine whether they are accurate. The official said that FEMA has plans for improving the accuracy of the data as part of a new online registration process for Citizen Corps Councils and CERTs in 2010, which would involve reregistering local programs with the goal of reactivating inactive programs, although it is likely that some inactive programs would be removed from FEMA's registries. However, it is possible that registration data could continue to be inaccurate because, according to a FEMA official, the Citizen Corps program does not have the authority to require all local units to update information, particularly councils or CERTs that do not receive federal funding. Furthermore, FEMA officials explained that the Homeland Security Grant Program guidance designates state officials as responsible for approving initial council and CERT registrations and ensuring that the data are updated as needed, and said that under the new registration process, state officials will continue to be responsible for ensuring that data are updated as needed. A Citizen Corps official told us that the Community Preparedness Division does not monitor whether states are regularly updating local unit registration information because the Division does not have the staff or processes in place to monitor states' efforts and would instead look to regional staff to work with state officials. The official said that FEMA is considering the possibility of providing contract support to states that request assistance in contacting local programs as part of the reregistration effort. A key FEMA official told us that FEMA recently drafted a new strategic approach and is considering developing and using outcome measures that are focused on the achievements of Citizen Corps programs as well as the number of programs, as is currently measured. Outcome measures are important because a registered program being active is only a first step in measuring whether local programs are meeting intended program goals. However, our review of the draft showed that it does not state what actions FEMA intends to take to ensure that registrations are accurate and remain up-to-date. Therefore, FEMA does not have reasonable assurance that its data about the number of registrations for local Citizen Corps programs are accurate, which may affect its ability to measure the results of those programs. By developing an approach to ensure the accuracy of local Citizen Corps program data, FEMA managers and others would be better positioned to understand why Citizen Corps programs that no longer exist were disbanded, to identify possible strategies for reconstituting or creating new programs, and to establish a foundation for developing outcome measures that gauge whether local programs are achieving goals associated with enhancing community preparedness.

Currently, the Ready Campaign measures its performance using output measures such as materials distributed or PSAs shown. For example, according to a DHS official, in fiscal year 2008 the Ready Campaign had more than 99 million "hits" on its Web site, more than 12 million pieces of Ready Campaign literature requested, and 43,660 calls to the toll-free numbers.
The Ready Campaign relies on these measures because it faces two different challenges in determining whether its efforts are influencing individuals to be more prepared. First, the Ready Campaign is not positioned to control when or where its preparedness message is viewed. Second, the Ready Campaign is not positioned to measure whether its message is changing the behavior of individuals. With regard to the Ready Campaign's ability to control the distribution of its message, our past work has shown that it is important for agencies to measure their performance based on clear and reliable data that are linked to program goals, but also recognizes that agencies whose programs rely on others to deliver services, like the Ready Campaign, may need to use substitute measures—such as counts of Web site hits and the number of television announcements—which are not linked to outcomes. According to FEMA's Acting Director for the Ready Campaign, the program budget of $2.5 million for 2010 limits the extent to which the program can produce advertisements and purchase commercial space for their placement. The PSAs developed by the Ad Council cannot be used for purchased media placement; rather, the Ready Campaign relies on donations of various sources of media. As a result, the Ready Campaign does not control what, when, or where Ready Campaign materials are placed when the media are donated. For example, what PSA is shown and the slots (e.g., a specific channel at a specific time) that are donated by television, radio, and other media companies are not under the Ready Campaign's control, and these are not always prime viewing or listening spots. On the basis of a review of Ad Council data, we found that the Ready Campaign's PSAs in 2008 were aired about 5 percent or less of the time by English language and Spanish language television stations during prime time (8:00 p.m. to 10:59 p.m.), and about 25 percent of the PSAs were aired from 1:00 a.m. to 4:00 a.m. Similarly, about 47 percent of English language radio and about 27 percent of Spanish language radio spots were aired from midnight to 6:00 a.m. FEMA officials said that because new material is more appealing to PSA directors, they expect better placement with the new PSAs released in September 2009. In November 2009, a FEMA official told us that the new PSAs had been released, but information was not yet available to show whether the new material had received better placement. Just as the Ready Campaign has no control over the time PSAs are aired, it does not control the type of media (e.g., radio, television) donated. Based on Ad Council data on the dollar value of media donated to show Ready Campaign materials (the value of the donated media is generally based on what it would cost the Ready Campaign if the media space were purchased), much of the value from donated media is based on space donated in the phone book yellow pages. Figure 1 shows the value of various types of media donated to the Ready Campaign to distribute its message during 2008. The Ready Campaign also faces a challenge determining the extent to which it contributes to individuals taking action to become more prepared—the program's goal. Measuring the Ready Campaign's progress toward its goal is problematic because it can be difficult to isolate the specific effect of exposure to Ready Campaign materials on an individual's level of emergency preparedness.
Research indicates that there may be a number of factors that are involved in an individual taking action to become prepared, such as his or her beliefs concerning vulnerability to disaster, geographic location, or income. One factor in establishing whether the Ready Campaign is changing behavior is determining the extent to which the Ready Campaign's message has been received by the general population. The Ad Council conducts an annual survey to determine public awareness of the Ready Campaign, among other things. For example, the Ad Council's 2008 survey found the following: When asked if they had heard of a Web site called Ready.gov that provides information about steps to take to prepare in the event of a natural disaster or terrorist attack, 21 percent of those surveyed said that they were aware of the Ready.gov Web site. When asked a similar question about television, radio, and print PSAs, 37 percent of those surveyed said that they had seen or heard at least one Ready Campaign PSA.

Another factor is isolating the Ready Campaign's message from other preparedness messages that individuals might have received. The Ad Council's 2008 survey found that 30 percent of those surveyed identified the American Red Cross as the primary source of emergency preparedness information; 11 percent identified the Ad Council. While the Ad Council survey may give a general indication as to the population's familiarity with the Ready Campaign, it does not provide a measure of preparedness actions taken based on the Ready Campaign's promotion; that is, a clear link from the program to achieving program goals. The Ad Council reported that those who were aware of the Ready Campaign's advertising were significantly more likely than those who had not seen it to say that they had taken steps to prepare for disaster, but acknowledged that the Ready Campaign could not claim full credit for the differences. Further, as previous Citizen Corps surveys showed, the degree to which individuals are prepared may be less than indicated because preparedness drops substantially when more detailed questions about specific supplies are asked.

While DHS's and FEMA's strategic plans have incorporated efforts to promote community preparedness, FEMA has not developed a strategy encompassing how Citizen Corps, its partner programs, and the Ready Campaign are to operate within the context of the National Preparedness System. An objective in DHS's Strategic Plan for fiscal years 2008 through 2013 to "ensure preparedness" envisions empowering Americans to take individual and community actions before and after disasters strike. Similarly, FEMA's Strategic Plan for fiscal years 2008 through 2013 envisions a strategy to "Lead the Nation's efforts for greater personal and community responsibility for preparedness through public education and awareness, and community engagement and planning, including outreach to vulnerable populations." FEMA's Strategic Plan delegates to the agency's components the responsibility for developing their own strategic plans, which are to include goals, objectives, and strategies, but does not establish a time frame for completion of the component plans. FEMA's Strategic Plan states that the components' strategic plans are to focus on identifying outcomes and measuring performance.
NPD has not clearly articulated goals for FEMA's community preparedness programs or developed a strategy to show how Citizen Corps, its partner programs, and the Ready Campaign are to achieve those goals within the context of the National Preparedness System. In our past work, we reported that desirable characteristics of an effective national strategy include articulating the strategy's purpose and goals, followed by subordinate objectives and specific activities to achieve results, and defining organizational roles, responsibilities, and coordination, including a discussion of the resources needed to reach strategy goals. In April 2009, we reported that NPD had not developed a strategic plan that defined program roles and responsibilities, integration and coordination processes, and goals and performance measures for its programs. We reported that instead of a strategic plan, NPD officials stated that they used an annual operating plan and Post-Katrina Act provisions to guide NPD's efforts. The operating plan identifies NPD goals and NPD subcomponents responsible for carrying out segments of the operating plan, including eight objectives identified for the division under NPD's goal to "enhance the preparedness of individuals, families, and special needs populations through awareness planning and training." NPD's objectives for meeting this goal did not describe desired outcomes.

In late September 2009, NPD provided us a spreadsheet that was linked to the NPD operating plan and that outlined more detailed information on NPD's goals and objectives, such as supporting objectives, the responsible NPD division, and projected completion dates. However, the spreadsheet lacked details about key issues and did not include all of the elements of an effective national strategy. For example, one of NPD's operating plan objectives—called a supporting goal in FEMA's spreadsheet—for the Community Preparedness Division is to increase "the number of functions that CERTs will be able to perform effectively during emergency response," but neither the plan nor the spreadsheet provides details, such as the functions CERTs currently perform, what additional functions they could perform, and what it means to be effective. The spreadsheet elaborates on this supporting goal with a "supporting objective" to "develop 12 new CERT supplemental training modules that promote advanced individual and team skills" and a completion date of September 30, 2009. FEMA officials said that 6 of the 12 modules were completed as of September 30, 2009, and that the spreadsheet should have identified the effort as ongoing because developing the planning modules was to be completed over a 4-year period ending in 2011. The operating plan, spreadsheet, and FEMA officials provided no time frame for when the training is expected to be implemented at the local level to increase the function of individual CERTs, nor did they discuss performance measures and targets for gauging changes in the effectiveness of CERTs, or how local training will be coordinated or delivered. NPD's operating plan and spreadsheet also did not include other key elements of an effective national strategy, such as how NPD will measure progress in meeting defined goals and objectives and the potential costs and types of investments needed to implement community preparedness programs. As a result, NPD is unable to provide a picture of priorities or how adjustments might be made in view of resource constraints.
In our April 2009 report, we recommended that NPD take a more strategic approach to implementing the National Preparedness System, to include the development of a strategic plan that contains such key elements as goals, objectives, and how progress in achieving them will be measured. DHS concurred with our recommendation and stated that it is making progress in this area and in fully implementing the recommendation. NPD officials stated in September 2009 that DHS, FEMA, and NPD, in coordination with national security staff, were discussing the development of a preparedness and implementation strategy within the context of Homeland Security Presidential Directive 8 (National Preparedness) (HSPD-8). They said that community and individual preparedness were key elements of those discussions. At that time, NPD officials did not state when the strategy would be completed; thus, it is not clear to what extent the strategy will integrate Citizen Corps, its partner programs, and the Ready Campaign. NPD officials stated that work is under way on revising the target capabilities, which are to include specific outcomes, measures, and resources for the Community Preparedness and Participation capability. They said that they expect to issue a draft for public comment in the second quarter of fiscal year 2010. Also, in testimony before the Subcommittee on Emergency Communications, Preparedness and Response, Committee on Homeland Security, on October 1, 2009, the NPD Deputy Administrator said that, in recognition of the preliminary observations raised in our testimony, NPD is reformulating the NPD operating plan as a strategic plan. He said that once complete, the strategic plan is intended to integrate Community Preparedness, specifically the efforts of Citizen Corps, its partner programs and affiliates, and the Ready Campaign. However, he said he was not prepared to provide a time frame as to when the strategic plan would be completed. The NPD Deputy Administrator agreed to consult with the Subcommittee staff and other stakeholders as NPD develops the draft strategic plan.

The FEMA official leading the development of the NPD strategic plan told us that NPD had begun to develop a strategic plan, but it had not developed a timeline with milestone dates for completing it because NPD is waiting to coordinate the plan's development with the revision of HSPD-8 and the Quadrennial Homeland Security Review. He said NPD would be better able to establish a timeline and milestones for completing the NPD strategic plan once these other documents were revised, but he was uncertain about when these documents would be completed. He also stated that NPD had developed a draft strategic approach for community preparedness in response to a request by the Chairman of the Subcommittee on Emergency Communications, Preparedness and Response, Committee on Homeland Security, during the October 1, 2009, hearing. He said that NPD intends to use this strategic approach as a vehicle for discussing community preparedness within the context of NPD's overall strategy. He told us that, as with the draft strategic plan, NPD had not established a timeline with milestone dates for completing the Community Preparedness strategy. On December 2, 2009, FEMA provided a copy of the draft community preparedness strategic approach that it prepared for the Subcommittee. FEMA's draft represents an important first step because it partially satisfies the elements of an effective national strategy.
Specifically, the draft strategic approach broadly discusses why FEMA produced it, the process by which it was developed, and FEMA's overall community preparedness vision. The draft also outlines goals and subordinate goals and discusses the outcomes FEMA expects in achieving them. However, the draft strategic approach lacks key elements of an effective national strategy because, among other things, it does not discuss how progress will be measured in achieving these goals; the roles and responsibilities of the organizations responsible for implementing the strategy, and mechanisms for coordinating their efforts; and the cost of implementation, including the source and types of resources needed and where those investments and resources should be targeted. FEMA's draft also did not identify a timeline and milestones for completing the strategy.

The Ready Campaign is also working to develop its strategic direction. According to the FEMA Director of External Affairs, the Ready Campaign's strategy is being revised to reflect the transition of the program from DHS's Office of Public Affairs to FEMA's Office of External Affairs and the new FEMA Director's approach to preparedness. Program officials said that the Ready Campaign will have increased access to staff and resources and is to be guided by a FEMA-wide strategic plan for external communications. As of September 2009, the plan was still being developed and no date had been set for completion. The Ready Campaign Director said in November 2009 that the plan was not expected to be completed before the end of the year, but was not aware of a timeline and milestones for its completion. The Director also said that the Ready Campaign was included in the draft community preparedness strategy.

We recognize that HSPD-8 and the Quadrennial Homeland Security Review are instrumental in articulating the overall national preparedness strategy and FEMA's strategic approach, and that NPD's plan and community preparedness strategies, including the Ready Campaign, are components of efforts to revise these initiatives. Standard practices for project management established by the Project Management Institute state that managing a project involves, among other things, developing a timeline with milestone dates to identify points throughout the project to reassess efforts under way and determine whether project changes are necessary. By developing plans with timelines and milestones for completing the NPD and community preparedness strategies, FEMA will be better positioned to provide a more complete picture of NPD's approach for developing and completing these documents. The plans also would provide FEMA managers and other decision makers with insights into (1) NPD's overall progress in completing these strategies, (2) a basis for determining what, if any, additional actions need to be taken, and (3) the extent to which these strategies can be used as building blocks for the national preparedness strategy and FEMA's strategic approach.

Hurricane Katrina was one of the most devastating natural disasters in our nation's history and will have lasting effects for years to come. By their nature, catastrophic events involve casualties, damage, or disruption that will likely overwhelm state and local responders. Americans who are prepared as individuals for disasters, and as trained volunteers, can help to mitigate the impact of disasters in their local communities, yet previous FEMA surveys indicate that many Americans are still not prepared.
The majority of those responding to the surveys said they plan to rely on assistance from first responders during a major disaster.

While FEMA identifies community preparedness as an important part of its national preparedness strategy, FEMA lacks accurate performance information on its community preparedness programs that would enable it to determine whether these programs are operating in the communities in which they have been established. We recognize that FEMA's Citizen Corps Program and partner programs have relatively small budgets and staff, and that program officials are aware of inaccuracies in the data and are considering options to improve information on local programs, such as reregistering existing programs. However, it is unclear whether these measures will be enough to provide FEMA the assurance it needs that local programs that are registered continue to operate. By having accurate data, FEMA managers and other decision makers would be better positioned to measure progress in establishing and maintaining these programs nationwide and in local communities. This would also provide FEMA managers the basis for exploring (1) why programs that no longer exist were disbanded and (2) possible strategies for reconstituting local programs or developing new ones. Accurate data would also provide a foundation for developing outcome measures that gauge whether local programs are achieving goals associated with enhancing community preparedness.

Challenges in measuring the performance of these programs stem in part from FEMA lacking an overall strategy for achieving community preparedness or defining how these efforts align with the larger National Preparedness System, particularly how Citizen Corps, its partner programs, and the Ready Campaign fit within the strategy. Defining program roles, responsibilities, and coordination mechanisms; identifying performance measures to gauge results; and ensuring the resources needed to achieve program goals would be part of an effective strategy. FEMA has agreed such a strategy is needed and has started to develop strategies for NPD and community preparedness, including the Ready Campaign, but has no time frames or milestone dates for developing and completing them. By having a plan with time frames and milestone dates for completing the NPD strategic plan and its community preparedness strategy, FEMA managers and other decision makers would be better equipped to track NPD's progress. Moreover, they would have a basis to determine what, if any, additional actions are needed to enhance NPD's overall preparedness strategy and community preparedness, as well as insights into the extent to which these plans can be used as building blocks for the national preparedness strategy and FEMA's strategic approach.

To better ensure that national community preparedness efforts are effective and completed in a timely fashion, we recommend that the Administrator of FEMA take the following two actions: (1) examine the feasibility of developing various approaches for ensuring the accuracy of registration data for local Citizen Corps Councils and partner programs, and (2) develop plans, including timelines and milestone dates, for completing and implementing NPD's strategic plan and its Community Preparedness Strategic Approach, including details on how Citizen Corps, partner programs, and the Ready Campaign are to operate within the context of the National Preparedness System.

We requested comments on a draft of this report from the Secretary of Homeland Security.
The department declined to provide official written comments to include in our report. However, in an e-mail received January 19, 2010, the DHS liaison stated that DHS concurred with our recommendations. FEMA provided written technical comments, which were incorporated into the report as appropriate.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-8777 or by e-mail at jenkinswo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.

William O. Jenkins, Jr.
Director, Homeland Security and Justice Issues

The Department of Homeland Security (DHS) provides support for local community preparedness activities through Homeland Security grants, specifically the Citizen Corps grant program, but community preparedness activities are also eligible for support under other Homeland Security grants. Citizen Corps grants are awarded to states based on a formula of 0.75 percent of the total year's grant allocation to each state (including the District of Columbia and Commonwealth of Puerto Rico) and 0.25 percent of the total allocation for each U.S. Territory, with the balance of funding distributed on a population basis. For other DHS homeland security grants, states prepare a request for funding, which can include support for the state's community preparedness efforts, as allowed under the guidance for a particular grant. For example, the 2009 Homeland Security Grant Program guidance lists "conducting public education and outreach campaigns, including promoting individual, family and business emergency preparedness" as an allowable cost for State Homeland Security Grants. Grant funding can be used to support Citizen Corps, Citizen Corps partner programs, or other state community preparedness priorities. The Federal Emergency Management Agency's (FEMA) grant reporting database does not categorize grants in a way that allows identification of the amount of funding going to a particular community preparedness program, such as a Community Emergency Response Team (CERT) or Fire Corps. Table 1 summarizes the approximately $269 million in DHS grants that were identified by grantees as supporting community preparedness projects from 2004 through 2008. Our selection of projects for inclusion in this summary relied on DHS data on grantees who identified their project as one of three predefined project types that are, according to FEMA officials, relevant for community preparedness, or whose projects were funded with a Citizen Corps Program grant. Not all grantees may have used these project-type descriptions, so the amount below is an approximation. We worked with grant officials to identify the most appropriate grant selection criteria. To determine the reliability of these DHS grant data, we reviewed pertinent DHS documents, such as the Grant Reporting Tool User's Manual, and interviewed DHS officials about their process for compiling these data.
We determined that the grant data we used were sufficiently reliable for purposes of this report.

In addition to the contact named above, John Mortin, Assistant Director, and Monica Kelly, Analyst-in-Charge, managed this assignment. Carla Brown, Qahira El'Amin, Lara Kaskie, Amanda Miller, Cristina Ruggiero-Mendoza, and Janet Temko made significant contributions to the report.

Individuals can reduce their need for first responder assistance by preparing for a disaster. By law, the Federal Emergency Management Agency (FEMA) in the Department of Homeland Security (DHS) is to develop a National Preparedness System (NPS) that includes community preparedness programs. These programs account for less than 0.5 percent of FEMA's budget. They include the Citizen Corps Program (CCP) and partner programs, e.g., Fire Corps, which provide volunteers to assist first responders. FEMA's Ready Campaign promotes preparedness through mass media. GAO was asked to review federal efforts to promote community preparedness. Specifically, GAO was asked to address (1) challenges, if any, FEMA faces in measuring the performance of CCP, its partner programs, and the Ready Campaign, and (2) actions, if any, FEMA has taken to develop a strategy to encompass how these programs are to operate within the context of the NPS. GAO analyzed documents on preparedness plans and strategies and compared reported performance data with observations during 12 site visits, selected primarily on the basis of major disasters. While not projectable, the results add insight.

FEMA faces challenges measuring performance for CCP, its partner programs, and the Ready Campaign because (1) it relies on states to verify data for local program units and (2) it is unable to control the distribution of the Ready Campaign messages or measure whether the messages are changing the behavior of individuals. GAO's past work showed the importance of ensuring that program data are of sufficient quality to document performance and support decision making. FEMA includes the number of local volunteer organizations registered nationwide as its principal performance measure for community preparedness, but does not verify that registration data are accurate. For example, 5 of the 17 registered Citizen Corps councils GAO contacted were not active as councils. FEMA relies on state officials to verify the accuracy of the data and does not have staff or processes for this purpose. FEMA officials agreed that the data are inaccurate and have plans to improve the registration process, but this process is not designed to ensure accurate data because states will continue to be responsible for verifying the accuracy of data. FEMA counts requests for literature, Web site hits, and the number of television and radio announcements made to gauge performance of the Ready Campaign, but it does not control when information is accessed or viewed. Also, changes in behavior can be the result of a variety of factors, including campaigns sponsored by other organizations. GAO's past work stated that agencies should measure performance based on accurate, clear, and reliable data that are clearly linked to program goals, but also recognized that programs like the Ready Campaign may need to rely on substitute measures, such as the Web site hits it currently counts.
GAO recognizes that FEMA is challenged in measuring the performance of CCP, partner programs, and the Ready Campaign, but examining the feasibility of approaches to verify data on CCP and its partner programs could position FEMA to begin to (1) explore why programs that no longer exist were disbanded and (2) develop possible strategies for reconstituting local programs or developing new ones.

FEMA's challenges in measuring the performance of community preparedness programs are compounded because it has not developed a strategy to show how its community preparedness programs and the Ready Campaign are to operate within the context of the NPS. In April and October 2009, GAO reported that FEMA's National Preparedness Directorate (NPD), responsible for community preparedness, had not developed a strategic plan; rather, it used an operating plan, which lacked key elements of an effective national strategy, such as how to gauge progress. GAO recommended that NPD develop a strategic plan that contains these key elements. FEMA agreed and reported that it is taking actions to strengthen strategic planning. While officials said an NPD strategic plan and a community preparedness strategy are being developed, NPD has not developed timelines with milestone dates for completing these strategies. By doing so, consistent with standard management practices for implementing programs, FEMA would be better positioned to show progress and provide insights into how these plans can be used as building blocks for the national preparedness strategy.
The Social Security Act of 1935 required most workers in commerce and industry, then about 60 percent of the workforce, to be covered. Amendments to the act in 1950, 1954, and 1956 allowed states, generally acting for their employees, to voluntarily elect Social Security coverage through agreements with SSA. The amendments also permitted states and localities that elected coverage to withdraw from the program after meeting certain conditions.

Policymakers have addressed the issue of extending mandatory Social Security coverage for state and local government employees on several occasions. In response to financial problems the Social Security system faced in the early 1970s, for example, the 1977 Social Security amendments directed that a study be made of the desirability and feasibility of extending mandatory coverage to employees at all levels of government, including state and local governments. The Secretary of the Department of Health, Education, and Welfare—now the Departments of Health and Human Services and Education—established the Universal Social Security Coverage Study Group to develop options for mandatory coverage and analyze the fiscal effects of each option. Recognizing the diversity of state and local systems, the study group selected representative plans for analysis. Two data sources were developed and analyzed. First, the Actuarial Education and Research Fund, sponsored by six professional actuarial organizations, established a task force of plan actuaries to study 25 representative large and small noncovered retirement systems. Second, the Urban Institute, under a grant from several government agencies, used an actuarial firm to obtain data on 22 of the largest 50 noncovered employee retirement systems. The study group report, issued in 1980, provided information on the costs and benefits of various options but did not draw conclusions about their relative desirability.

In 1983, the Congress removed authority for states and localities that had voluntarily elected Social Security coverage to withdraw from the program, which effectively made coverage mandatory for many state and local employees. Additionally, in 1990, the Congress mandated coverage for state and local employees not covered by public pension plans. SSA estimates that 96 percent of the workforce, including 70 percent of the state and local government workforce, is now covered by Social Security.

During 1997, Social Security had $457.7 billion in revenues and $369.1 billion in expenditures. About 89 percent of Social Security's revenues came from payroll taxes. The Social Security payroll tax is 6.2 percent of pay each for employers and employees, up to an established maximum. Maximum earnings subject to Social Security payroll taxes were $65,400 in 1997 and are $68,400 in 1998.

Social Security provides retirement, disability, and survivor benefits to insured workers and their families. Insured workers are eligible for full retirement benefits at age 65 and reduced benefits at age 62. The retirement age was increased by the 1983 Social Security amendments. Beginning with those born in 1938, the age at which full benefits are payable will increase in gradual steps from age 65 to age 67. Benefit amounts are based on a worker's age and career earnings, are fully indexed for inflation, and, as shown in table 1, replace a relatively higher proportion of the final year's wages for low earners.
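To make the payroll tax mechanics noted above concrete, the following minimal sketch (illustrative only; the report itself contains no code) applies the 6.2-percent rate for employers and employees up to the 1998 taxable maximum of $68,400. The example earnings amounts are hypothetical.

```python
# Illustrative sketch of the Social Security (OASDI) payroll tax described above.
OASDI_RATE = 0.062          # 6.2 percent each for employer and employee
WAGE_BASE_1998 = 68_400     # maximum earnings subject to the tax in 1998

def oasdi_tax(annual_earnings: float, wage_base: float = WAGE_BASE_1998) -> dict:
    """Return the employee share, employer share, and combined payroll tax."""
    taxable = min(annual_earnings, wage_base)
    employee = OASDI_RATE * taxable
    return {"employee": employee, "employer": employee, "combined": 2 * employee}

print(oasdi_tax(30_000))    # employee and employer each pay $1,860
print(oasdi_tax(100_000))   # taxed only on the first $68,400 of earnings
```

Because earnings above the taxable maximum are not taxed, the combined liability is the same for any worker earning $68,400 or more in 1998.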
Social Security provides additional benefits for eligible family members, including spouses aged 62 or older—or younger spouses if a child meeting certain requirements is in their care—and children up to age 18—or older if they are disabled. The amount of a spouse's or child's benefit is one-half the insured worker's age-65 benefit amount. A spouse's benefit is reduced if taken earlier than age 65, unless the spouse has a child in his or her care.

SSA estimates that about 5 million state and local government employees, excluding students and election workers, occupy positions not covered by Social Security. SSA also estimates that the noncovered employees have annual salaries totaling about $132.5 billion. Seven states—California, Colorado, Illinois, Louisiana, Massachusetts, Ohio, and Texas—account for over 75 percent of the noncovered payroll. Based on a 1995 survey of public pension plans, the Public Pension Coordinating Council (PPCC) estimates that police, firefighters, and teachers are more likely to occupy noncovered positions than other employees are.

According to a 1994 Bureau of Labor Statistics (BLS) survey, most full-time state and local employees participate in defined benefit pension plans. Table 2 shows membership and contribution rates for nine defined benefit state and local pension plans that we studied as part of the review. For the most part, active members in the nine plans occupy positions that are not covered by Social Security. Defined benefit plans promise a specific level of benefits to their members when they retire. Minimum retirement age and benefits vary; however, the BLS and PPCC surveys indicate that many public employees can retire with full benefits at age 55 or earlier with 30 years of service. The surveys also indicate that plan members typically have a benefit formula that calculates retirement income on the basis of specified benefit rates for each year of service and the members' average salary over a specified time period—usually the final 3 years. For example, the benefit rates for members of the Colorado Public Employees' Retirement Association are 2.5 percent of highest average salary per year over a 3-year period for the first 20 years of service and 1.5 percent of highest average salary per year for each additional year of service. Full retirement benefits are available at any age with 35 years of service, at age 55 with 30 years of service, at age 60 with 20 years of service, or at age 65 with 5 years of service. Therefore, plan members who retire at age 55 with 30 years of service receive annual retirement income amounting to 65 percent of their highest average salary. Reduced retirement benefits are available, for example, at age 55 with 20 years of service.

In addition to retirement income benefits, most public pension plans provide other benefits, such as disability or survivor benefits. For example, BLS reported that of defined benefit plan members, 91 percent were provided with disability benefits, all had a survivor annuity option, and 62 percent received some cost-of-living increases after retirement.

Public pension plan coverage for part-time, seasonal, and temporary employees varies. In Ohio, for example, part-time and temporary state employees participate in a defined benefit plan. In California, the 16,000 part-time, seasonal, and temporary state employees have a defined contribution plan.
Plan benefits are based on plan contributions, which consist of 7.5 percent of employees' gross pay deducted from their pay, and on returns on plan investments.

SSA estimates that extending mandatory Social Security coverage to all newly hired state and local employees would reduce the trust funds' 75-year actuarial deficit by about 10 percent. The surplus payroll tax revenues associated with mandatory coverage and interest on that surplus would extend the trust funds' solvency by about 2 years. Extending mandatory coverage to newly hired employees would also increase program participation and, in the long run, simplify program administration. Table 3 shows SSA's analysis of the present discounted value of revenues and expenditures with and without mandatory coverage over the 75-year period beginning January 1, 1998. The analysis indicates that extending mandatory coverage to all state and local employees hired beginning January 1, 2000, would reduce the program's long-term actuarial deficit by 10 percent, from about 2.19 percent of payroll to 1.97 percent of payroll. Figure 1 shows that SSA's analysis indicates that extending mandatory coverage to new state and local employees would extend the trust funds' solvency by about 2 years, from 2032 to 2034.

As with most other elements of the reform proposals put forward by the 1994-1996 Social Security Advisory Council, extending mandatory coverage to newly hired state and local employees would contribute to the resolution of—but not fully resolve—the trust funds' solvency problem. A combination of adjustments will be needed to extend the program's solvency over the entire 75-year period. SSA's analysis indicates that revenues resulting from an extension of mandatory coverage, including payroll taxes and interest on surplus revenues, would substantially exceed additional expenditures throughout the 75-year period. SSA assumes that payroll tax collections for new employees would accelerate early in the 75-year period, while benefits for those employees would not accelerate until later in the period. For example, annual revenues from payroll taxes collected from the newly covered employees and their employers are expected to exceed expenditures for benefits to those employees until 2050. In that year, however, revenues resulting from an extension of mandatory coverage, including interest on cumulative surplus revenues, are projected to exceed expenditures on those employees by over 300 percent.

While Social Security's solvency problems triggered the analysis of the effect of mandatory coverage on program revenues and expenditures, the inclusion of such coverage in a comprehensive reform package would likely be grounded in other considerations as well, such as broadening Social Security's coverage and simplifying program administration. For example, an effective Social Security program helps to reduce public costs for relief and assistance, which, in turn, means lower general taxes. There is also an element of unfairness in a situation where practically all contribute to Social Security, while a few benefit both directly and indirectly but are excused from contributing to the program. According to SSA, one important way that noncovered employees benefit from, without contributing to, Social Security is that their parents, grandparents, or other relatives receive Social Security's retirement, disability, or survivor benefits.
Social Security is designed as a national intergenerational transfer program where the taxes of current workers fund the benefits of current beneficiaries. SSA stated that those not contributing to the program still receive the benefits of this transfer. Extending mandatory Social Security coverage to all newly hired state and local employees would also simplify program administration by eliminating, over time, the need to administer and enforce special rules for noncovered state and local employees. For example, SSA’s Office of Research, Evaluation, and Statistics estimates that 95 percent of state and local employees occupying noncovered positions become entitled to Social Security as either workers or dependents. Additionally, the Office of the Chief Actuary estimates that 50 to 60 percent of state and local employees in noncovered positions will be fully insured by age 62 from other, covered employment. The Congress established the Windfall Elimination Provision (WEP) and Government Pension Offset (GPO) to reduce the unfair advantage that workers eligible for pension benefits on the basis of noncovered employment may have when they apply for Social Security benefits. The earnings history for workers with noncovered earnings may appear to qualify them for increased Social Security benefits as low-income wage earners—or for additional benefits for a nonworking spouse—when in fact they have had substantial income from noncovered employment. With a few exceptions, WEP and GPO require SSA to use revised formulas to calculate benefits for workers with noncovered employment. In April 1998, we reported that SSA is often unable to determine whether applicants should be subject to WEP or GPO and this has led to overpayments. We estimated total overpayments to be between $160 million and $355 million over the period 1978 to 1995. In response, SSA plans to perform additional computer matches with the Office of Personnel Management and the Internal Revenue Service (IRS) to obtain noncovered pension data and ensure WEP and GPO are correctly applied. Mandatory coverage would reduce required WEP and GPO adjustments to benefits by gradually reducing the number of employees in noncovered employment. Eventually, all state and local employees—with the exception of a few categories of workers, such as students and election workers—would be in covered employment, and adjustments would be unnecessary. In 1995, SSA asked its Office of the Inspector General to review state and local government employers’ compliance with Social Security coverage provisions. In December 1996, the Inspector General reported that Social Security provisions related to coverage of state and local employees are complex and difficult to administer. The report stated that few resources were devoted to training state and local officials and ensuring that administration and enforcement roles and responsibilities are clearly defined. The report concluded that there is a significant risk of sizeable noncompliance with state and local coverage provisions. In response, SSA and IRS have initiated an effort to educate employers and ensure compliance with legal requirements for withholding Social Security payroll taxes. Extending coverage to all newly hired state and local government employees would eventually eliminate this problem. 
SSA stated that the time needed to fully phase in mandatory coverage could be 20 to 30 years, if it followed estimates of the time needed to phase in Medicare coverage, which was mandated for newly hired state and local employees starting in 1986. SSA also stated that mandatory Social Security coverage for new hires would possibly create another tier in the payroll reporting process, resulting in additional compliance issues in the near term. Additionally, payroll practitioners would need to account for Social Security covered and noncovered government employment—along with Medicare covered and noncovered employment—and, as a result, they would face additional reporting burdens in the near term as they extended Social Security coverage to new employees.

If Social Security coverage becomes mandatory, all newly hired state and local employees would be provided with the minimum income protection afforded by Social Security. Also, they and their employers would pay Social Security's combined 12.4-percent payroll tax. Each state and locality with noncovered employees would then decide how to respond to the increase in benefits and costs. Possible responses range from the government's absorbing the added costs and leaving current pension plans unchanged to entirely eliminating state and local pension plan benefits for newly hired employees. Based on our discussions with state and local representatives, however, noncovered employers would likely adjust their pension plans to reflect Social Security's benefits and costs. To illustrate the implications of mandatory coverage for public employers and employees, we examined three possible responses:

States and localities could maintain similar total retirement benefits for current and newly hired employees. For example, employees who retire before age 62 would be paid supplemental retirement benefits until they become eligible for Social Security benefits. This response would likely result in an increase in total retirement costs and some additional family and other benefits for many newly hired employees.

States and localities could examine other pension plans that are already coordinated with Social Security and provide newly hired employees with similar benefits. For example, employees who retire before age 62 would receive, on average, a smaller initial retirement benefit than current noncovered employees. This response would also likely result in an increase in total retirement costs and some additional family and other benefits for newly hired employees.

States and localities could maintain level retirement costs. This response would likely require a reduction in pension benefits from the government's plans for many newly hired employees, but the new employees would also have Social Security benefits.

According to pension plan representatives, the changes to current pension plans in response to mandatory coverage could result in reduced contributions to those plans, which could affect their long-term financing.

States and localities with noncovered employees could decide to provide newly hired employees with pension benefits at retirement, which, when combined with Social Security benefits, approximate the pension benefits of current employees. Studies indicate that such a decision would likely result in an increase in retirement costs. The amount of increase would vary depending on a number of factors; however, studies indicate the increase could be about 7 percent of new-employee payroll.
The 1980 Universal Social Security Coverage Study Group report estimated that total retirement costs, including Social Security payroll taxes and pension plan contributions, would need to increase an average of 5 to 10 percent of payroll to maintain level benefits for current and newly hired employees. However, the estimated increase included the 2.9 percent of payroll Medicare tax that was mandated for all new state and local employees in 1986—6 years after the study was completed. Deducting the Medicare tax reduces the estimate of additional costs to between 2 and 7 percent of payroll. The 1980 study group assumed that most newly hired employees would have salary replacement percentages in their first year of retirement that would be comparable to the salary replacement percentages provided to current employees. For example, employees retiring before age 62 would receive a temporary supplemental pension benefit to more closely maintain the benefits of the current plan. Since Social Security benefits are weighted in favor of families and lower income employees—and because Social Security benefits are fully indexed for inflation, while many pension plans provide limited or no cost-of-living protection—total lifetime benefits for some new employees would be greater than those provided to current employees. More recent studies by pension plan actuaries in Colorado, Illinois, and Ohio also indicate the cost increase would be in the same range. For example, a December 1997 study for a plan in Ohio indicated that providing retirement benefits for new employees that, when added to Social Security benefits, approximate retirement benefits for current employees would require an increase in contributions of 6 to 7 percent of new-employee payroll. A 1997 study for a pension plan in Illinois indicated the increased payments necessary to maintain similar total retirement benefits for current and new employees would be about 6.5 percent of new-employee payroll. Since it would be limited to new employees, the cost increase would be phased in over several years. For example, the cost increase would be about 0.25 percent of total payroll starting the first year, 2.83 percent of total payroll in 10 years, and 6.54 percent of total payroll after all current employees have been replaced. The 1980 study group report stated that the causes of the cost increase cannot be ascribed directly to specific Social Security or pension plan provisions. According to the study, however, among the most important factors contributing to the cost increase are Social Security’s strengthening of cost-of-living protection, provision of substantial additional benefits to some families, and reduction in pension benefit forfeitures occurring when employees move between jobs. The study stated that another contributing factor would be the need for pension plans to provide supplemental benefits to employees, especially police and firefighters, who retire before they begin receiving Social Security benefits at age 62. The study also found that the magnitude of the cost increase would depend on the pension plan’s current benefits. Cost increases would be less for plans that already provide benefits similar to those provided by Social Security because those plans would be able to eliminate duplicate benefits. 
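The Illinois figures above imply a simple relationship: the added cost as a share of total payroll is roughly the full 6.54-percent increase scaled by the share of payroll earned by employees hired after coverage becomes mandatory. The sketch below is a minimal illustration of that relationship; the year-1 and year-10 payroll shares are our own back-solved approximations for illustration, not figures from the study.

```python
# Illustrative sketch (assumed payroll shares, not the Illinois actuaries' figures):
# the cost increase phases in as post-mandate hires become a larger share of payroll.
FULL_INCREASE = 0.0654  # 6.54 percent of payroll once all current employees are replaced

def added_cost_share(new_hire_payroll_share: float) -> float:
    """Added retirement cost as a fraction of total payroll."""
    return FULL_INCREASE * new_hire_payroll_share

# Hypothetical shares of payroll earned by post-mandate hires in years 1 and 10.
for year, share in [(1, 0.04), (10, 0.43)]:
    print(f"Year {year}: about {added_cost_share(share):.2%} of total payroll")
```

With these assumed shares, the sketch yields roughly 0.26 percent and 2.81 percent of total payroll, close to the 0.25 and 2.83 percent the study reported for years 1 and 10.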
Maintaining level benefits for noncovered and newly hired employees would require states and localities, in redesigning plans for the newly hired employees, to adopt benefit formulas that explicitly integrate pension and Social Security benefits. For example, affected states and localities could adopt a benefit formula that offsets a portion of the member's pension benefit with a specified percentage of the member's Social Security benefit. This approach is more common in the private sector—where a 1995 BLS survey of large and medium establishments found that about 51 percent of full-time employees had benefits integrated with Social Security—than in the public sector, where a survey found that only about 4 percent of full-time employees had pension benefits integrated with Social Security. In the public sector, pension plans for covered employees generally recognize Social Security benefits implicitly by providing their members with lower benefit rates than are provided to noncovered employees.

SSA estimates that about 70 percent of the state and local workforce is already covered by Social Security. The 1980 study group examined the impact on retirement costs if states and localities with noncovered employees provide newly hired employees with pension benefits that are similar to the benefits provided to employees who are already covered by Social Security. The study group concluded that implementing such formulas would increase overall retirement costs by 6 to 14 percent of payroll—or about 3 to 11 percent of payroll after deducting the Medicare tax. The study also concluded that for most pension plans, the present value of lifetime benefits for new employees covered by Social Security would be greater than the value of benefits of current noncovered employees. As shown in table 4, our analysis of 1995 PPCC data also indicates that total retirement costs for states and localities covered by Social Security are higher than the costs for noncovered states and localities.

PPCC data also indicate that many employees, especially police and firefighters, retire before age 62, when they would first be eligible for Social Security retirement benefits. The data indicate, for example, that police and firefighters in noncovered plans retired, on average, at age 54. The average retirement age of other employees in noncovered plans was age 60. In covered plans, the average retirement age for police and firefighters and other employees was somewhat higher, at ages 55 and 62, respectively. Analyses indicate that, initially, the percentage of salary that is replaced by retirement income is smaller for covered employees who retire before they are eligible for Social Security benefits than for noncovered employees. Our analysis of PPCC data indicates, for example, that public pension plans replace about 65 percent of the final average salary of members who retired with 30 years of service and were not covered by Social Security. For members who retired with 30 years of service and were covered by both a pension plan and Social Security, the PPCC data indicate that pension plans replace only about 53 percent of their members' final average salary. After Social Security benefits begin, however, covered employees generally have higher salary replacement rates. For example, the average salary replacement rates in 1994 were higher for covered state and local employees than for noncovered employees after they reached age 62, at all salary levels between $15,000 and $65,000. (See table 5.)
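To make the offset-style integration described above concrete, the following minimal sketch uses entirely hypothetical values (a 50-percent offset rate, a $40,000 final average salary, and a $12,000 Social Security benefit); none of these figures is drawn from the plans or surveys discussed in this report.

```python
# Illustrative sketch of an explicit offset formula; all values are hypothetical.
def integrated_pension(gross_pension: float, social_security: float,
                       offset_rate: float = 0.5) -> float:
    """Pension payable after offsetting part of the member's Social Security benefit."""
    return max(gross_pension - offset_rate * social_security, 0.0)

final_average_salary = 40_000
gross_pension = 0.65 * final_average_salary   # 65 percent replacement before integration
social_security = 12_000                      # hypothetical age-62 Social Security benefit

# 26,000 - (0.5 * 12,000) = 20,000 payable from the pension plan;
# combined with Social Security, total retirement income is 32,000.
print(integrated_pension(gross_pension, social_security))
```

Under a formula of this kind, the amount payable from the pension plan falls while the member's combined replacement rate rises once Social Security benefits begin, broadly consistent with the pattern discussed above.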
We did not compare the expected value of total lifetime benefits for covered and noncovered employees because amounts would vary depending on the benefits offered by each plan. The extent to which the experience of states and localities with covered employees can be generalized to those with noncovered employees is limited. According to the 1980 study group report, most public pension plans that coordinated with Social Security did so in the 1950s and 1960s when Social Security benefits and payroll taxes were much smaller. As Social Security benefits grew, pension plan benefits remained basically unchanged. The study stated that, starting in the 1970s, however, rising pension costs caused several large state systems to consider reducing their relatively liberal pension benefits. In the 1980s, for example, California created an alternative set of reduced benefits for general employees to, among other things, reduce the state’s retirement costs. Initially, general employees were permitted to select between the higher costs and benefits of the original plan and the lower costs and benefits of the revised plan. Subsequently, however, newly hired general employees were limited to the reduced benefits. Regardless, the circumstances surrounding the experiences of states with covered employees make it difficult to predict what changes would occur from further extension of coverage. Several employer, employee, and pension plan representatives with whom we spoke stated that spending increases necessary to maintain level retirement income and other benefits would be difficult to achieve. State and pension plan officials noted that spending for retirement benefits must compete for funds with spending for education, law enforcement, and other areas that cannot be readily reduced. For example, Ohio officials noted that the state is having difficulty finding the additional funds for education needed to comply with court ordered changes in school financing. A representative of local government officials in Ohio stated that payroll represents 75 to 80 percent of county budgets, and there is little chance that voters would approve revenue increases needed to maintain level retirement benefits. He stated the more likely options for responding to increased retirement costs were to decrease the number of employees or reduce benefits under state and local pension plans. If states and localities decide to maintain level spending for retirement, they might need to reduce pension benefits under public pension plans for many employees. For example, a June 1997 actuarial evaluation of an Ohio pension plan examined the impact on benefits of mandating Social Security coverage for all employees, assuming no increase in total retirement costs. The study concluded that level spending could be maintained if service retirement benefits were reduced (for example, salary replacement rates for employees retiring with 30 years of service would be reduced from 60.3 percent to 44.1 percent); retiree health benefits were eliminated for both current and future employees; and the funding period of the plan’s unfunded accrued liability was extended from 27 years to 40 years. The study also stated that additional benefit reductions might be needed to maintain level spending if additional investment income was not available to subsidize pension benefits for newly hired employees. States and localities typically use a “reserve funding” approach to finance their pension plans. 
Under this approach, employers—and frequently employees—make systematic contributions toward funding the benefits earned by active employees. These contributions, together with investment income, are intended to accumulate sufficient assets to cover promised benefits by the time employees retire. However, many public pension plans have unfunded liabilities. The nine plans that we examined, for example, have unfunded accrued liabilities ranging from less than 1 percent to over 30 percent of total liabilities. Unfunded liabilities occur for a number of reasons. For example, public plans generally use actuarial methods and assumptions to calculate required contribution rates. Unfunded liabilities can occur if a plan’s actuarial assumptions do not accurately predict reality. Additionally, retroactive increases in plan benefits can create unfunded liabilities. Unlike private pension plans, the unfunded liabilities of public pension plans are not regulated by the federal government. States or localities determine how and when unfunded liabilities will be financed. Mandatory coverage and the resulting pension plan modifications would likely result in reduced contributions to public pension plans. This would occur because pension plan contributions are directly tied to benefit levels and plan contributions would be reduced to the extent plan benefits are reduced and replaced by Social Security benefits. The impact of reduced contributions on plan finances would depend on the actuarial method and assumptions used by each plan, the adequacy of current plan funding, and other factors. For example, some plan representatives are concerned that efforts to provide adequate retirement income benefits for newly hired employees would affect employers’ willingness or ability to continue amortizing their current plans’ unfunded accrued liabilities at current rates. Actuaries also believe that reducing contributions to current pension plans could adversely affect the liquidity of some plans. In 1997, for example, an Arizona state legislative committee considered closing the state’s defined benefit pension plan to new members and implementing a defined contribution plan. Arizona state employees are already covered by Social Security; however, states and localities faced with mandatory coverage might consider making a similar change to their pension plans. A March 1997 analysis of the proposed change stated that as the number of employees covered by the plan decreased, the amount of contributions flowing into the plan would also decrease. At the same time, the number of members approaching retirement age was increasing and benefit payments were expected to increase. As a result, external cash flow would become increasingly negative over time. The analysis estimated that about 10 years after the plan was closed to new members, benefit payments would exceed contributions by over $1 billion each year. In another 10 years, the annual shortfall would increase to $2 billion. The analysis stated that the large negative external cash flow would require that greater proportions of investment income be used to meet benefit payment requirements. In turn, this would require the pension plan to hold larger proportions of plan assets in cash or lower yielding short-term assets. Once this change in asset allocation occurs, the plan would find it increasingly difficult to achieve the investment returns assumed in current actuarial analyses and employer costs would increase. 
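The cash-flow dynamic the Arizona analysis describes can be illustrated with a simple projection. The sketch below is purely illustrative; the starting amounts, the decay rate for contributions, and the growth rate for benefit payments are our assumptions, not figures from the Arizona study.

```python
# Illustrative sketch (all dollar figures and rates are assumptions): why a plan
# closed to new members develops a growing negative external cash flow as
# contributions fall and benefit payments rise.
def external_cash_flow(years: int, contributions: float, benefits: float,
                       contrib_decay: float = 0.93, benefit_growth: float = 1.06):
    """Yearly (contributions - benefits), in billions, for a plan closed to new members."""
    flows = []
    for _ in range(years):
        flows.append(contributions - benefits)
        contributions *= contrib_decay   # covered payroll shrinks as members leave
        benefits *= benefit_growth       # more members drawing benefits each year
    return flows

# Starting from hypothetical $1.0 billion in contributions and $0.9 billion in benefits.
for year, flow in enumerate(external_cash_flow(21, 1.0, 0.9)):
    if year in (0, 10, 20):
        print(f"Year {year}: external cash flow {flow:+.2f} billion")
```

With these assumed rates, the annual shortfall passes $1 billion within about a decade and keeps growing thereafter, mirroring the general pattern the analysis projected.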
Mandatory coverage presents several legal and administrative issues, and states and localities with noncovered employees would require several years to design, legislate, and implement changes to current pension plans. Although mandating Social Security coverage for state and local employees could elicit a constitutional challenge, mandatory coverage is likely to be upheld under current U.S. Supreme Court decisions. Several employer, employee, and plan representatives with whom we spoke stated that they believe mandatory Social Security coverage would be unconstitutional and should be challenged in court. However, recent Supreme Court cases have affirmed the authority of the federal government to enact taxes that affect the states and to impose federal requirements governing the states' relations with their employees. A plan representative suggested that the Supreme Court might now come to a different conclusion. He pointed out that a case upholding federal authority to apply minimum wage and overtime requirements to the states was a 5 to 4 decision and that until then, the Supreme Court had clearly said that applying such requirements to the states was unconstitutional. States and localities also point to several recent Supreme Court decisions that they see as sympathetic to the concept of state sovereignty. However, the facts of these cases are generally distinguishable from the situation that would be presented by mandatory Social Security coverage. Unless the Supreme Court were to reverse itself, which it seldom does, mandatory Social Security coverage of state and local employees is likely to be upheld. Current decisions indicate that mandating such coverage is within the authority of the federal government.

The states would require some time to adjust to a mandatory coverage requirement. The federal government required approximately 3 years to enact legislation to implement a new federal employee pension plan after Social Security coverage was mandated for federal employees. The 1980 study group estimated that 4 years would be required for states and localities to redesign pension formulas, legislate changes, adjust budgets, and disseminate information to employers and employees. Our discussions with employer, employee, and pension plan representatives also indicate that up to 4 years would be needed to implement a mandatory coverage decision. They indicated, for example, that developing revised benefit formulas for each affected pension plan would require complex and time-consuming negotiations among state legislatures, state and local budget and personnel offices, and employee representatives. Additionally, constitutional provisions or statutes in some states may prevent employers from reducing benefits for employees once they are hired. Those states would need to immediately enact legislation that would establish a demarcation between current and future employees until decisions were made concerning benefit formulas for new employees who would be covered by Social Security. According to the National Conference of State Legislatures, the legislatures of seven states, including Texas, meet biennially. Therefore, the initial legislation could require 2 years in those states.

In deciding whether to extend mandatory Social Security coverage to state and local employees, policymakers will need to weigh numerous factors. On one hand, the Social Security program would benefit from the decision.
The solvency of the trust fund would be extended for 2 years, and the long-term actuarial deficit would be reduced by about 10 percent. Mandatory coverage would also address the fairness issue raised by the advisory council and simplify program administration. However, the implications of mandatory coverage for public employers, employees, and pension plans are mixed. To the extent that employers provide total retirement income benefits to newly hired employees that are similar to current employees, retirement costs would increase. While the increased retirement costs would be phased in over several years, employers and employees would also incur additional near-term costs to develop, legislate, and implement changes to current pension plans. At the same time, Social Security would provide future employees with benefits that are not available, or are available to a lesser extent, under current state and local pension plans. SSA stated that the report generally provides a balanced presentation of the issues to be weighed when considering mandating coverage. SSA provided additional technical comments, which we have incorporated as appropriate. SSA’s comment letter is reprinted in appendix II. We are sending copies of this report to the Commissioners of the Social Security Administration and the Internal Revenue Service and to other interested parties. Copies will also be made available to others on request. If you or your staff have any questions concerning this report, please call me on (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix III. To examine the implications of a decision to extend mandatory coverage to newly hired state and local employees for the Social Security program, we reviewed documents provided by SSA and IRS and held discussions with their staff. We examined SSA estimates concerning the increase in taxable payroll and Social Security revenues and expenditures attributed to extending mandatory coverage to newly hired state and local employees and discussed data sources with SSA officials. We did not assess the validity of SSA’s assumptions. SSA estimates used the intermediate assumptions reported by Social Security’s Board of Trustees in 1998. To examine the implications of mandatory coverage for state and local government employers, employees, and their pension plans, we reviewed the 1980 study by the Universal Social Security Coverage Study Group, which was prepared for the Secretary of Health, Education, and Welfare at that time and transmitted to the Congress in March 1980. We discussed study results with the study’s Deputy Director for Research and examined supporting documents for the study. We also held discussions and reviewed documentation of state and local government employer, employee, or pension plan representatives in the seven states that account for over 75 percent of the noncovered payroll. We examined financial reports for nine state and local retirement systems: the California State Teachers’ Retirement System, the Public Employees’ Retirement Association of Colorado, the Teachers’ Retirement System of the State of Illinois, the Louisiana State Employees’ Retirement System, the Massachusetts State Retirement System, the Massachusetts Teachers’ Contributory Retirement System, the State Teachers Retirement System of Ohio, the Public Employees’ Retirement System of Ohio, and the Teacher Retirement System of Texas. 
We also identified a number of states that have changed, or have considered changing, plan benefits in ways that are similar to those that might be made by states and localities with noncovered employees in response to mandatory Social Security coverage. We discussed the potential impact on plan finances of changing plan benefits with pension plan representatives in those states and examined study reports provided by them. For example, we contacted representatives of pension plans in Arizona, Kansas, Montana, South Dakota, Vermont, Washington, and West Virginia that have implemented or considered implementing defined contribution plans to replace some or all of the benefits provided by their defined benefit pension plans. Additionally, we reviewed survey reports addressing pension benefits, costs, investment practices, or actuarial valuation methods and assumptions prepared by BLS, PPCC, and the Society of Actuaries. We discussed the implications of mandatory coverage for public pension plans with actuaries at the Office of Personnel Management, the Pension Benefit Guaranty Corporation, the American Academy of Actuaries, and in private practice. To analyze differences between public pension costs and benefits for covered and noncovered state and local employees, we used PPCC survey data. We used the 1995 survey, which covered 1994, because the 1997 survey, which covered 1996, did not include some of the required data. Despite some limitations, the PPCC data are the best available. The data cover 310 pension systems, representing 457 plans and covering 80 percent of the 13.6 million active members in fiscal year 1994. The survey questionnaire was mailed to 800 systems, which were selected from member associations. Because the sample was not random, the results cannot be generalized to all public pension plans, and confidence intervals cannot be calculated. Nevertheless, the survey describes the costs and benefits of a substantial majority of public pension plan members. For our analysis of PPCC data, we classified pension plans as (1) Social Security covered if 99 percent or more of the members participated in the Social Security program or (2) Social Security noncovered if 1 percent or less of the members participated in the program. We did not adjust cost and contribution rate data to standardize actuarial cost methods and assumptions. State and local governments may have legitimate reasons for choosing various cost methods, and we did not evaluate their choice. To identify potential legal or other problems with implementing mandatory coverage, we reviewed relevant articles and current case law. We conducted our work between September 1997 and May 1998 in accordance with generally accepted government auditing standards. Francis P. Mulvey, Assistant Director, (202) 512-3592; John M. Schaefer, Evaluator-in-Charge; Hans Bredfeldt, Evaluator. 
Pursuant to a congressional request, GAO examined the implications of extending mandatory social security coverage to all newly hired state and local employees, focusing on: (1) the implications of mandatory coverage for the Social Security Program and for public employers, employees, and pension plans; and (2) potential legal or administrative problems associated with implementing mandatory coverage. GAO noted that: (1) the Social Security Administration (SSA) estimates that extending mandatory social security coverage to all newly hired state and local government employees would reduce the program's long-term actuarial deficit by about 10 percent and would extend the trust funds' solvency by about 2 years; (2) in addition to helping to some extent resolve the solvency problem, mandatory coverage would broaden participation in an important national program and simplify program administration; (3) the impact on public employers, employees, and pension plans would depend on how state and local governments with noncovered employees responded to the additional costs and benefits associated with social security coverage; (4) social security retirement benefits are fully protected from inflation and are weighted in favor of families and low-income employees; (5) many public pension plans, on the other hand, permit employees to retire earlier and provide a higher retirement income benefit than social security; (6) those states and localities that decide to maintain benefit levels for new employees consistent with the earlier retirement age and enhanced retirement income benefit would experience increased costs; (7) however, those employees would also have the additional family and other protection provided by social security; (8) alternatively, states and localities that choose to maintain level retirement spending might need to reduce some retirement benefits for newly hired employees; (9) several employer, employee, and plan representatives stated that mandating social security coverage for all new state and local government employees would raise constitutional issues and would be challenged in court; (10) however, GAO believes that mandatory coverage is likely to be upheld under current Supreme Court decisions; (11) mandatory coverage would also present administrative issues for implementing state and local governments; and (12) up to 4 years could be required for states and localities to develop, legislate, and implement pension plans that are coordinated with social security. 
The JTRS program was initiated to exploit advancements in software-defined radio technology and provide battlefield commanders with superior information capabilities. Since its initiation in 1997, the program has experienced cost and schedule overruns and performance shortfalls, due primarily to immature technologies, unstable requirements, and aggressive schedules. In an effort to address these problems, the program was restructured in March of this year. However, due to JTRS’ lengthy development path, DOD has had to continue buying other tactical radios—currently estimated to cost $11 billion—to support its communication needs. Survivability and lethality in warfare are increasingly dependent on smaller, highly mobile, joint forces that rely on superior information and communication capabilities. The single-function hardware design of DOD’s existing radio systems lacks the functionality and flexibility necessary to achieve and maintain information superiority or to support the rapid mobility and interoperability required by today’s armed forces. To support new operational or mission requirements, DOD determined that the large number and diversity of legacy radios in use would require wholesale replacement or expensive modifications. Software-defined radios such as JTRS primarily use software rather than hardware to control how the radio works, and, because they are programmable, such radios offer significant flexibility to meet a wide variety of needs. Rather than developing radios that are built to different standards and operate on different fixed frequencies, as was the case in the past, JTRS is to be a single, interoperable family of radios based on a common set of standards and applications. The radios are expected not only to satisfy the requirements common to the military’s three operational domains—air, sea, and ground—but also to communicate directly with many of DOD’s existing tactical radios. To facilitate interoperability, JTRS will develop a set of waveforms (software radio applications) designed with the same operating characteristics as many of DOD’s existing radios. Depending on operational needs, different waveforms could be loaded onto a JTRS radio and used to communicate with a variety of other radios. In addition to supporting interoperability, JTRS is to contribute to DOD’s goal of network centric warfare operations by introducing new wideband networking waveforms that dramatically increase the amount of data and the speed at which the data can be transmitted. As such, the waveforms would facilitate the use of maps, imagery, and video to support the decision-making of tactical commanders at all echelons. Table 1 compares the frequency band, nominal channel bandwidth, and data rates of selected legacy waveforms and new wideband waveforms. In addition to providing new wideband waveforms, individual JTRS radios would have the capability to support multiple services (e.g., voice, data, and video) and operate on multiple channels simultaneously. For example, a four-channel JTRS radio set intended for a ground vehicle could be programmed to have channels dedicated to SINCGARS, Have Quick, the Wideband Networking Waveform, and the Soldier Radio Waveform. All four channels could be operating simultaneously. Data could also be transferred from one channel (or network) to another through a “gateway” device implemented with hardware and software. Figure 1 depicts the JTRS operational overview. Developing JTRS is a significant challenge. 
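The multichannel, software-loaded design described above, in which waveforms are software applications loaded onto a radio's channels and a gateway passes data between otherwise separate networks, can be sketched conceptually. The example below is a toy illustration only; it is not JTRS software, and the class names, waveform data rates, and gateway behavior are assumptions made for the sketch rather than details drawn from the program.

```python
# Conceptual sketch of a software-defined radio: waveforms are software
# modules loaded onto channels, and a gateway relays data between channels.
# Names, data rates, and behavior are illustrative, not the JTRS design.

class Waveform:
    def __init__(self, name, max_data_rate_kbps):
        self.name = name
        self.max_data_rate_kbps = max_data_rate_kbps

    def transmit(self, message):
        return f"[{self.name}] {message}"

class SoftwareDefinedRadio:
    def __init__(self, num_channels):
        self.channels = [None] * num_channels

    def load_waveform(self, channel, waveform):
        # Loading software reconfigures the channel; no hardware change needed.
        self.channels[channel] = waveform

    def send(self, channel, message):
        waveform = self.channels[channel]
        if waveform is None:
            raise ValueError(f"No waveform loaded on channel {channel}")
        return waveform.transmit(message)

    def gateway(self, src_channel, dst_channel, message):
        # A gateway function passes data received on one channel (network)
        # out over another channel running a different waveform.
        received = self.send(src_channel, message)
        return self.send(dst_channel, received)

# A hypothetical four-channel ground-vehicle radio set.
radio = SoftwareDefinedRadio(num_channels=4)
radio.load_waveform(0, Waveform("SINCGARS", 16))
radio.load_waveform(1, Waveform("Wideband Networking Waveform", 5000))

# Relay a report received on the legacy net onto the wideband network.
print(radio.gateway(0, 1, "contact report"))
```

The point of the sketch is the design choice it reflects: because the waveform is software, reprogramming a channel replaces what previously required different hardware.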
Because JTRS is intended to operate on the battlefield where there is no fixed infrastructure of cell towers, fiber optic lines, and other network components such as routers and switches, the radios must be powerful enough to transmit and relay information wirelessly over long distances, maintain network linkages and quality of service while on the move, and ensure that communications and the network itself are secure. Development of the individual waveforms and their ability to function effectively on different JTRS sets are critical to the success of JTRS. The Wideband Networking Waveform, for example, will require complex software development and include over 1.6 million lines of software code. To ensure the waveforms perform as intended, they will go through a rigorous certification process—one that involves testing the functionality, portability, interoperability, and security aspects of the waveforms when operating on production-representative JTRS radios. To manage JTRS’ development, DOD established a Joint Program Office and service-led program/product offices in the late 1990s. Table 2 summarizes the general management structure of JTRS until 2005, at which time it was changed. Achieving JTRS’ technical requirements has proven to be a significant challenge. In 2001, an independent assessment of the program identified numerous concerns, including the program’s aggressive acquisition approach and schedule, unstable requirements, and an ambiguous management decision chain. In our reviews, we have found similar problems. For example, the JTRS Cluster 1 program—which includes development of the Wideband Networking Waveform, the waveform intended to serve as the main conduit of information among Army tactical units—began development with an aggressive schedule, immature technologies, and a lack of clearly defined and stable requirements. As a result, the program struggled to mature and integrate key technologies and was forced to make major design changes. These factors contributed to significant cost and schedule problems that led DOD to stop key development work and propose restructuring the program. Meeting requirements for JTRS Cluster 5 radios proved even more challenging, given the radios’ smaller size, weight, and power needs. Several programmatic changes and a contract award bid protest also slowed progress of the Cluster 5 program. Subsequent to our reporting that the JTRS program lacked a strong joint management structure for resolving requirements and funding differences among the services, Congress directed DOD to develop a plan for managing JTRS’s development under a single joint program office. Under DOD’s plan, all JTRS programs were realigned under the authority of a single JTRS Joint Program Executive Officer, established within the Navy’s Space and Naval Warfare Systems Command. The JPEO’s assessment of the JTRS program revealed that the program evolved from a legacy radio replacement program to a network centric radio program without a re-baselining of program impacts; requirements significantly changed and never stabilized; the complexity of information assurance/security problems was not anticipated; and the program was executing at high technical, schedule, and cost risk. To get the program on track, JPEO was directed by DOD to develop a proposal for JTRS that addressed the services’ priority requirements, was technically doable, and could be executed within a reasonable budget. 
In November 2005, JPEO presented three program options reflecting different sets of capabilities and development costs (see table 3). Each option included a specific mix of form factors and waveforms for each of the services. For example, under option 3, the Marine Corps would use 4-channel vehicle radios, capable of operating two or three legacy waveforms and up to three new waveforms, as well as 2-channel manpack radios operating one or two legacy waveforms and one or two new waveforms. The Army would also use 4-channel vehicle radios, but with the capability to operate three or four legacy waveforms and two or three new waveforms, as well as 1-channel and 2-channel radios operating the Soldier Radio Waveform for its sensors and weapons systems. The Air Force would use a 4-channel Multifunctional Information Distribution System terminal form factor operating two legacy waveforms and one new waveform. The Navy would use the same form factor, operating one legacy and one new waveform, as well as a ship form factor operating one new waveform. DOD selected option 3, which establishes a priority for developing a networking capability mainly through the introduction of transformational wideband waveforms. Since future JTRS capabilities are still planned, option 3 also reflects an incremental approach to developing full JTRS capabilities. The initial option 3 increment is referred to as JTRS Increment 1. To implement JTRS Increment 1, JPEO established a new organizational structure for JTRS that includes three domains and a program for “special radios.” Table 4 summarizes the general management structure of JTRS after the program restructuring. It is likely that the first users of operational JTRS radios will be Navy F/A-18 aircraft equipped with Multifunctional Information Distribution System-JTRS radios and Special Operations Forces using the JTRS Enhanced Multi-Band Intra-Team Radio. Initially, the radios will operate legacy waveforms only. From the start of JTRS development through the end of this year, DOD estimates that $11 billion has been required to buy other radio systems. Of this total amount, $1.3 billion has been used in fiscal years 2005 and 2006 to procure SINCGARS radios to meet urgent operational needs in Iraq and Afghanistan. In addition, because of delays in the development of JTRS, several users depending on JTRS have had to make adjustments and procure interim radios to meet operational needs. For example, the Army is in the process of procuring radios for several of its existing helicopter platforms and a new development effort to outfit and equip individual ground soldiers. Since JTRS development will require at least several more years, it is likely that the estimated $11 billion investment in legacy radios will continue to grow. Table 5 shows the annual procurement amounts for radio systems other than JTRS from 1998 through 2006. The proposed JTRS restructuring approach appears to address past concerns with the program that GAO and others have documented in recent years. While still meeting the needs of key users such as Future Combat Systems, the revised approach is expected to develop and field capabilities in increments rather than attempting to develop and field the capabilities all at once. Costly and non-transformational requirements will be deferred to later increments. In addition, through the establishment of the JPEO and other structural changes, JTRS program management has been strengthened and has become more centralized. 
The centralized management structure should help the program control development costs and improve oversight through the coordination of standards, system engineering, and development of the radios and waveforms. A new governance structure is also expected to help ensure appropriate oversight and establish clear lines of accountability while, according to JPEO officials, the establishment of an information repository is expected to help facilitate the delivery of waveform and operating system software to the radios’ hardware developers. These efforts, if carried out, should help the restructured JTRS program address previous cost, schedule and performance problems. Table 6 summarizes the significant changes to the JTRS program as a result of the restructuring. A central feature of the JTRS program restructuring is its evolutionary acquisition approach. The program plans to develop capabilities in increments rather than attempt to field a complete capability all at once, which was the previous approach. Specifically, the program plans to defer or reduce costly and non-transformational requirements to later increments. At the same time, the approach prioritizes the development of networking capabilities, primarily through the development of three networking waveforms, and the ability to interoperate with key legacy radios. These capabilities are critical to key dependent users such as the Army’s Future Combat Systems and the implementation of DOD’s vision of network centric operations warfare. Program officials noted several requirements that were reduced or deferred from the previous program to later increments: Reduced number of waveforms: The number of waveforms to be delivered for the first increment has been reduced from 32 to 11. The waveforms deferred to later increments are all non-networking legacy waveforms. Reducing the number of waveforms allows the program to focus the initial JTRS increment on developing and testing the critical networking waveforms as well as some of the more commonly-used legacy waveforms. In addition, the smaller number of waveforms reduces porting efforts. Reduced number of radio variants: The number of variants to be delivered for the first increment has been reduced from 26 to 13. For example, only 9 of the 15 small form radios will be developed for the first increment. Reducing the number of variants provides relief in the hardware design and platform integration work, allowing the program to focus the initial JTRS increment on developing the variants most critical to key dependent users such as the Future Combat Systems. Reduced number of waveform combinations per radio variant: The original intent of JTRS was that most waveforms would operate on most radio variants. However, DOD determined that porting 32 different waveforms onto 26 different variants would have been an immense and costly undertaking. In addition, operating numerous waveforms simultaneously on a JTRS radio would have substantially increased power demands. By reducing the number of waveform combinations per variant, program officials expect to both reduce porting efforts and more easily meet size, weight, and power requirements on some variants. In addition, program officials expect that reducing the number of waveforms operating on each radio will help to mitigate interference. Interim solutions for network interoperability: To achieve DOD’s desired networking capabilities, the waveforms must be able to interoperate reliably and securely with each other. 
The optimal solution is to have this functionality performed inside JTRS radios as it reduces the overall footprint of the communication network. However, technologies and radio designs are not mature enough at this point to develop an interoperability capability that would function inside individual JTRS radios. Thus, for the initial increment, interoperability between the waveforms may be facilitated by developing gateway devices that reside outside of the JTRS radio. This should help developers mitigate integration challenges. While lesser capabilities will be delivered in the first increment, the program could still significantly enhance current communications and networking capabilities through the development of the networking waveforms and the ability to interoperate with the more commonly used legacy radios, such as SINCGARS. The incremental approach should also make the program more achievable by allowing more time to develop and test key technologies. Figure 2 shows the impact of the expanded schedule on Increment 1 product milestones. Despite the lengthened schedule, the program schedules are still intended to address the needs of key users depending on JTRS such as Future Combat Systems. With the creation of the JPEO, DOD has established a stronger, more centralized joint management structure. Under the new management structure, all JTRS domains—Ground; Airborne, Maritime, and Fixed Site; Special Radios, and Network Enterprise—report directly to the JPEO while the JPEO reports directly to the Under Secretary of Defense for Acquisition, Technology, and Logistics. As such, the JPEO controls development funding and has full directive authority over standards, systems engineering, and developing radios and waveforms. Such a comprehensive authority did not exist in the previous structure where authority was fragmented and the domains reported to individual service executives. Also, to facilitate more effective management, the JPEO has realigned various components of JTRS. Cluster 1 and Cluster 5 are now combined under the new Ground domain while the waveform development was placed under the newly established Network Enterprise domain. Meanwhile, DOD removed the Army helicopter requirements from the JTRS Cluster 1 program and transferred them to the Airborne, Maritime, and Fixed Site domain. With the new strengthened management structure, the program is in a better position to manage requirements growth and control costs. A key feature of the new management structure is the new governance model which aims to streamline decision making and empower the oversight capacity of the JPEO. We previously reported that the existing management structure had been unable to get the services to reach agreement over new and changing requirements expeditiously. Under the prior structure, key decisions were made by consensus, which made it difficult to resolve interservice differences involving requirements and funding. This resulted in a lengthy decision-making process. Under the revised governance structure, stakeholder disagreements are elevated to and decided by a JTRS Executive Council and later by a JTRS Board of Directors, if necessary. The purpose is to make the acquisition process timelier, provide appropriate oversight, and establish clear lines of accountability. 
Another key feature of the new management structure, according to JPEO, is ensuring that greater information sharing takes place among JTRS components to better facilitate the delivery of waveform and operating system software to the radios’ hardware developers. JTRS components depend on several software developers to deliver the required waveforms to their programs for integration onto their particular radios. This involves transfers of complex contractor-owned software code. Because waveforms are such an integral part of the radio’s functionality, delaying their integration onto the radio’s hardware could have a ripple effect on a radio’s overall development. To mitigate this risk, the JPEO has established an information repository where waveform developers would place their waveform software code for the purposes of information sharing. Operating system software code—critical to ensuring the development of a common software architecture—would also be placed in the repository. DOD intends to hold government-purpose rights to all of the software specifications in the repository, so that no single contractor will have complete control over JTRS software development. While the information repository is new and its usefulness is yet to be determined, it remains to be seen whether and to what extent the contractors will be willing to share their software code. If successful, the JPEO expects that information sharing will not only make software code available to hardware developers in a more timely manner but will also contribute to technology innovation as developers attempt to enhance existing software code. In addition, because the software will be shared with many different vendors, the JPEO expects to enhance competition among hardware developers. Furthermore, it is hoped that, as developers reuse the same software code in the information repository, waveforms will become more standardized, cutting down on development and integration costs. While the restructuring appears to place JTRS in a better position to succeed, several management and technical challenges remain. The JPEO must first finalize the details of the restructuring, including completing formal acquisition strategies, independent cost estimates, and test and evaluation plans. DOD also needs to revise the Concept of Operations so that it effectively describes how JTRS networking capabilities will be used. Completing and obtaining DOD’s approval of these activities is needed to ensure the program is executable. Over the longer term, the program faces key management and technical challenges that must be overcome. For example, although the new joint management structure for JTRS is a significant improvement over the previous fragmented program management structure, joint development efforts in DOD have often been hampered by an inability to obtain and sustain commitments and support from the military services and other stakeholders. Regarding technical challenges, developing waveforms and porting them to radio hardware is a complex and lengthy undertaking. The proposed interim technical solutions enabling network interoperability have also yet to be developed. In addition, operating in a networked environment open to a large number of potential users has generated an unprecedented need for information assurance. This need has resulted in a lengthy, technically challenging, and still evolving certification process from the National Security Agency. 
Moreover, integrating the radio’s hardware onto diverse platforms and meeting respective size, weight, and power limitations has been a long-term challenge and remains so. According to program officials, efforts to complete the restructuring have taken time, and delays have occurred in gaining approval to go forward. As such, important details of the restructuring have yet to be finalized. This includes completing acquisition strategies, independent cost estimates, and test plans, and obtaining final approval of an amended operational requirements document. These activities are currently in the process of being completed. However, until each of these activities is completed and DOD ensures that requirements are firm, acquisition strategies are knowledge-based, cost estimates are realistic, and test plans provide insight into the achievement of the networking capability priorities, there will be uncertainty as to whether the JTRS program, as restructured, is executable. Operational Requirements Document: An Operational Requirements Document contains the requirements and operational parameters for a system. The most recent JTRS Operational Requirements Document was approved in April 2003. To reflect the restructured approach of achieving JTRS requirements incrementally, it was necessary to develop an amendment to the April 2003 Operational Requirements Document. The process to develop the amendment has been led by the Joint Staff and involved input from the requirements community, the services, and other stakeholders; the Joint Requirements Oversight Council has provided oversight of the process. Through the process of developing the Operational Requirements Document amendment, some "gaps" in requirements have been identified by some stakeholders. In particular, the proposed amendment to the Operational Requirements Document includes a requirement for certain JTRS sets to be able to interface with a new satellite system called the Mobile User Objective System. Some stakeholders, however, have identified a need for the manpack and handheld radios to also have this capability. According to agency officials, if the capabilities are deferred to later increments, then the Mobile User Objective System will have to consider options other than JTRS to meet its terminal requirements. Also, according to agency officials, the amendment process is nearly complete. The amendment to the JTRS Operational Requirements Document is awaiting final approval from the Vice Chairman of the Joint Chiefs of Staff. Acquisition strategies: Individual acquisition strategies need to be developed for each JTRS component. An acquisition strategy outlines the business and technical management approach to achieve program objectives within the constraints imposed by resources. A well-developed, knowledge-based strategy minimizes the time and cost required to satisfy approved capability needs and maximizes affordability throughout the program lifecycle. Until the acquisition strategies are complete, there is less assurance that a well-developed and executable approach is in place. This could affect program cost estimates and fielding plans. Furthermore, an acquisition strategy serves as the basis for other important activities such as testing plans and contract negotiations. As such, any delay in the acquisition strategy could have a ripple effect and delay these activities. 
Test and evaluation plans: Plans for the overall structure and objectives of the test and evaluation program are also under development and need to be completed. Given the radio’s unprecedented performance capabilities and technical complexity, it is critical that a well-developed test and evaluation plan be developed. Not only is the testing of individual radio components important, but testing the network with sufficient scale is critical to demonstrating transformational capabilities. At this point, it is not clear how DOD plans to test the entire JTRS network including interoperability between all the networking waveforms. The Director of Operational Test and Evaluation recommended that the Army develop a test and evaluation strategy that supports an evaluation of network maturity as part of Future Combat Systems’ production. In addition to the activities that JPEO needs to accomplish to finalize the restructuring, DOD also needs to complete its revisions to the Concept of Operations and determine transition and fielding plans for JTRS. According to JPEO officials, when JTRS development was first initiated, DOD envisioned replacing virtually all legacy radios with JTRS sets. Since then, there has been an evolution of thinking in DOD about networked operations. Although the Concept of Operations for JTRS has gone through several iterations, according to JPEO the current version does not effectively provide a joint vision of how JTRS networking capabilities will be used. How JTRS radios will be used may also be affected by the large increase recently in fielding thousands of newer versions of legacy radios. The recent fielding of so many new legacy radios to the current force may call into question the affordability of replacing them prematurely with JTRS sets. If sufficient detail is not provided by the Concept of Operations, then JTRS development efforts may be inadequate and operational goals may be unfulfilled. Moreover, if migration and fielding plans are not driven by an effective Concept of Operations, the production costs and quantities for JTRS may need to be adjusted. DOD has historically had difficulty managing joint programs primarily because of inter-service differences involving funding and requirements. To succeed, the new JTRS management structure will have to meet those challenges. According to DOD officials, obtaining the necessary resources to execute JTRS development will be one such challenge. The proposed funding arrangement is for the services to individually request and secure development funding that then gets rolled into a centralized account under JPEO control. As currently proposed, each service will fund equal shares unless there are service-unique development efforts, which would be funded by the proponent services. The services will also be required to fund the integration of the radio into their respective platforms. Some agency officials expressed concern whether the services would have the budget capacity to fund integration once the radio sets were available for installation. Stakeholders also need to come to agreement on requirements by obtaining final approval of the amended Operational Requirements Document. If requirements are not thoroughly vetted through the various stakeholders and agreed upon, there is greater risk of future requirements growth or decreased stakeholder support for the program. Regarding JPEO’s new governance model, the decision-making model is untested. 
The JPEO expects the system development decision for the JTRS Airborne, Maritime, and Fixed Site product line to be made through the new governance structure. While the program has reported making progress in maturing technologies and stabilizing system designs, several technical challenges must still be overcome to achieve program success. The development of waveforms—particularly the networking waveforms—remains a technically challenging and lengthy effort. This effort involves complex software development and integration work by contractors as well as oversight by the government through a series of rigorous tests and certifications from various authorities, including the JTRS Technology Laboratory, National Security Agency, and the Joint Interoperability Test Command. If waveforms are not available as planned, key dependent users, particularly the Future Combat Systems, could experience schedule delays or performance impacts. The JTRS program began with the assumption that the Wideband Networking Waveform would meet the networking waveform needs for all the services. However, the program underestimated the complexity of meeting the Wideband Networking Waveform requirements and the services’ needs within the size, weight, and power constraints of the various user platforms. As a result, DOD began developing two additional networking waveforms to address specialized capabilities. The Soldier Radio Waveform is being designed for radios with severe size, weight, and power constraints such as the handheld, manpack, and small form radios. The Joint Airborne Network-Tactical Edge waveform is being designed to better enable time-critical airborne operations. The networking waveforms are the core of the JTRS networking capability, and their availability is crucial to the program’s success. The three networking waveforms are in various stages of development: Wideband Networking Waveform: The Wideband Networking Waveform—designed for JTRS ground vehicle radios—is the farthest along in development of the three networking waveforms. Nevertheless, while initial functionality has been demonstrated through a contractor demonstration held in the summer of 2005, some technical challenges remain. The demonstration showed that ground mobile radios operated in a network with the Wideband Networking Waveform and were able to connect to the network as well as reconnect when the network was disrupted. However, the Wideband Networking Waveform also experienced various performance problems, including limited data throughput, latency, and start-up time. Program officials believe these performance problems have largely been corrected. Nonetheless, the demonstrated network linked only 4 users, far fewer than the required 250. In addition, program officials noted that meeting the Wideband Networking Waveform requirement for voice communications over a mobile ad hoc network remains challenging. Soldier Radio Waveform: The Soldier Radio Waveform is a low-power, short-range networking waveform optimized for radios with severe size, weight, and power constraints such as dismounted soldier radios and small form radios. Currently, the waveform is transitioning from a science and technology program. Program officials expect to award a sole-source contract in fiscal year 2007 for further development of the waveform. 
While the Soldier Radio Waveform has demonstrated some functionality, program officials noted that it will take significant effort to transition the waveform from a science and technology project to meet full operational requirements. In particular, program officials are concerned about the waveform’s insufficient security architecture and how this may affect porting it onto a JTRS radio. Given these concerns, the waveform’s development schedule may be ambitious. Future Combat Systems is the driver of near-term Soldier Radio Waveform requirements. The success of the first spin-out of Future Combat Systems is dependent on the delivery of the certified waveform ported to selected JTRS small form radios. Joint Airborne Network—Tactical Edge: The Joint Airborne Network—Tactical Edge is an extremely low-latency networking waveform optimized for airborne platforms. Like the Soldier Radio Waveform, the Joint Airborne Network—Tactical Edge is transitioning from a science and technology project, and program officials expect to award a sole-source contract in fiscal year 2006. For Increment 1, the waveform will initially operate on a Multifunctional Information Distribution System—JTRS radio and will have limited capabilities. Program officials expect that it will be upgraded to full networking functionality in subsequent increments. After waveforms are developed, they must be ported to radio hardware. According to agency officials, porting waveforms onto JTRS radios has been more technically challenging than originally expected. The intent of JTRS is that waveforms be highly portable, meaning that they can be transported and adapted to a variety of radio platforms at a cost lower than that of redeveloping the waveform for a radio set with different hardware components. When waveforms are developed, the software code is designed to operate on a particular radio’s hardware architecture. When the same waveform is transported to different hardware, changes to the software code may be necessary to ensure proper integration of the waveform onto the new hardware. The more costly the integration effort is, the less portable the waveform. Although the JTRS Software Communications Architecture specifies design rules for waveform software to enhance portability across different hardware, the limited experience of porting waveforms thus far has shown significantly higher costs and longer schedules than anticipated. The JPEO noted that government direction and oversight as well as coordination between waveform, operating environment, and hardware developers need improvement. Officials are also concerned about the porting of the networking waveforms being developed in science and technology programs to meet the full requirements for the Soldier Radio Waveform and the Joint Airborne Network-Tactical Edge waveform. To make this happen, the waveforms will need to become compliant with the JTRS Software Communications Architecture, incorporate network management functions, and develop required security capabilities. Efforts to rework software to effectively transfer the waveforms, therefore, could result in cost and schedule problems. The proposed interim technical solutions enabling network interoperability have yet to be developed. To achieve DOD’s desired networking capabilities, waveforms must be able to communicate and interoperate with each other. 
However, technologies and radio designs are not mature enough at this point to develop an interoperability capability that would function inside individual JTRS radios. As a result, the program plans to meet network interoperability requirements for the initial increment through the use of gateways. A gateway is a separate node within a network equipped to interoperate with another network that uses different protocols. As such, key functions facilitating interoperability between waveforms may be performed outside of the JTRS radio rather than inside. At this point, the JPEO is assessing different options to achieve the gateway function and anticipates that development will start in 2007. The JPEO expects that the development of the gateway will result in a separate acquisition decision but is uncertain as to whether it will be acquired through the forthcoming Airborne, Maritime, Fixed Site system development contract or through a separate contract. In addition, the JPEO is uncertain as to whether the gateway will be employed as a separate piece of hardware or whether it will leverage an existing radio in the network. According to JPEO officials, employing the gateway as a separate piece of hardware could result in additional size, weight, and power risks for some platforms. JPEO officials also noted that without a fully functioning gateway capability, users operating in separate networks will not be able to communicate directly with one another. For example, a ground soldier operating on a Soldier Radio Waveform with a handheld radio would not be able to call directly for fire support from an aircraft operating on the Joint Airborne Network—Tactical Edge Waveform with a Multifunctional Information Distribution System-JTRS radio. Integrating the radio’s hardware onto diverse platforms and meeting their respective size, weight and power limitations remains a challenge. To realize full networking capabilities, the radios require significant amounts of memory and processing power, which add to the size, weight, and power consumption of the radio. The added size and weight are the result of efforts to ensure electronic parts in the radio are not overheated. While progress has been made in meeting the size, weight, and power requirements for the ground mobile radios, developers still face some challenges. The JPEO has already delivered 30 partially functioning prototype radios—built on production assembly lines—to the Future Combat Systems program. However, until the ground mobile radios demonstrate greater Wideband Networking Waveform functionality—a key source of power consumption—using a fully functioning prototype, size, weight, and power concerns remain. The delivery of new power amplifiers that are currently being developed as part of a science and technology program by the Army’s Communications—Electronics Research, Development and Engineering Center could help address these concerns. According to center officials, the power amplifiers are approaching maturity and have demonstrated significantly higher power output and improved efficiency over the current amplifier used on the ground mobile radios. The JPEO expects to begin receiving the new power amplifiers this September. Meeting the requirements of the handheld, manpack, and small form radios continues to be the most challenging of all JTRS components because of their smaller size, weight, and power constraints. 
Program officials expect that the requirements relief provided through the restructuring should help to address size, weight, and power requirements. For example, the restructuring reduces the number of waveforms required to operate on each radio, which is expected to reduce power demands, thereby reducing the size and weight demands. In addition, like the ground mobile radios, the JTRS small form radios are also expected to benefit from the delivery of new wideband power amplifiers. However, these technologies are still maturing. Moreover, the handheld, manpack, and small form radio designs are not stable. The JTRS requirement to operate applications at multiple levels of security in a networked environment has resulted in significant information assurance challenges. Developers not only have to be concerned with traditional radio security issues but also must be prepared to implement the features required for computer and network security. One challenge is that military software defined radio technology capable of processing data at multiple security levels is immature. In addition, the requirement to operate in an open networked environment allows greater access to external networks increasing the number of potential users and the likelihood of threats to the network. These challenges will require the development of new technologies, obtaining certification through a rigorous process by the National Security Agency, and accommodating an expected growth in security requirements. The complexities and uncertainties involved with JTRS security certification were illustrated when the National Security Agency determined that the design for the Cluster 1 radio was not sufficient to meet newly identified operational requirements from the Office of the Secretary of Defense to operate in a networked environment. This resulted in the need for additional security requirements and significant hardware design changes to the radio’s security architecture that ultimately resulted in significant cost increases. National Security Agency officials noted that one of the key lessons learned from the Cluster 1 experience was that security requirements need to be considered early in the development of the radio. As such, the JPEO has taken steps to better coordinate with the National Security Agency to meet security requirements. Specifically, the National Security Agency currently has a representative in each JTRS domain and participates in management reviews, design reviews, vendor technical exchanges, and weekly conference calls. The National Security Agency is also expected to be a member of the JTRS Executive Council and advisory member of the JTRS Board of Directors in the new JTRS governance structure. Both National Security Agency and JPEO officials noted that coordination and cooperation between the agencies has significantly improved since the JPEO was established. In addition, National Security Agency officials do not expect the other JTRS radios will encounter the same design problems experienced by the Cluster 1 radio as contractors now have a greater understanding of security requirements. Further, the restructured schedules for Ground domain radios appear to be sufficiently aligned to receive National Security Agency certification in time to meet the needs of Future Combat Systems. Nevertheless, because of the complex software encryption and networking requirements, security will continue to be a challenge for all JTRS components. 
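One way to see why multiple levels of security are so demanding in a networked radio environment is to consider the basic rule such a system must enforce: data classified at one level must never flow to a channel or user cleared only for a lower level. The sketch below illustrates that rule in the simplest possible terms; it is a conceptual example, not the JTRS or National Security Agency design, and the levels, function names, and messages are hypothetical.

```python
# Conceptual illustration of a multiple-levels-of-security rule for a
# networked radio: data must never flow to a destination whose clearance
# is below the data's classification. Not the actual JTRS/NSA design.

LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

def can_forward(data_level, destination_clearance):
    """Allow forwarding only if the destination is cleared for the data."""
    return LEVELS[destination_clearance] >= LEVELS[data_level]

def forward(message, data_level, destination, destination_clearance):
    if not can_forward(data_level, destination_clearance):
        return f"BLOCKED: {data_level} data not releasable to {destination}"
    return f"DELIVERED to {destination}: {message}"

print(forward("position report", "SECRET", "coalition node", "UNCLASSIFIED"))
print(forward("position report", "SECRET", "command post", "TOP SECRET"))
```

In an open, networked environment the radio must apply this kind of check to every potential path among a large and changing set of users, which is part of why the certification burden is so much greater than for traditional point-to-point radios.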
JTRS radios will require considerable radio spectrum for effective operations especially when using the new networking waveforms that could operate within several different bands of radio spectrum. However, obtaining sufficient radio spectrum allocations is problematic because the program must compete with other military and civilian users. Radio spectrum in general is becoming more saturated and demand for spectrum is increasing. Efforts are underway by the JPEO to work through the required DOD spectrum certification processes; however, certification of software defined radios remains a challenge because, according to spectrum management officials, these processes were designed around hardware-based radios and may not fully support the certification of cutting edge technologies such as JTRS. DOD has recognized the shortcomings of the existing processes and has taken initial steps to address them. Most recently, DOD has worked with the National Telecommunications and Information Administration to stand up a permanent software defined radio working group that would study how to proceed. U.S. military forces’ communications and networking systems currently lack the interoperability and capacity DOD believes are needed to access and share real-time information, identify and react quickly to threats, and operate effectively as a joint force. JTRS is critical to providing the capabilities to support DOD’s future vision of net-centric warfighting. Yet, since its inception, the JTRS development effort has struggled due to unrealistic cost, schedule, and performance expectations. As a consequence, DOD and the military services have had to make adjustments and acquire interim communications solutions to meet their near-term communications requirements. The restructuring approach developed by JPEO and approved by DOD holds promise for delivering much needed communications capability to the warfighters. However, given the program’s troubled development history, putting the approach into action will be a challenge and require strong and continuous oversight. Key details of the JTRS restructuring— including assurance that there are stable operational requirements, knowledge-based acquisition strategies for each domain’s product lines, and effective test plans that reflect the priority of developing networking capabilities—must be finalized and approved by DOD. In addition, significant programmatic and technical risks—including further technology maturation, certification of waveforms and radios, and implementation of the new JTRS governance model—must still be overcome. Furthermore, detailed migration and fielding plans that are consistent with a well-developed concept of operations are needed to ensure an affordable and operationally effective use of JTRS radios in the future. Any manifestations of these risks will likely increase program costs, delay fielding, or reduce planned capabilities. To the extent JTRS delivers less capability than planned, future warfighting concepts may have to be altered as well as the design of weapons systems such as Future Combat Systems that are dependent on JTRS. 
To enhance the likelihood of success of the JTRS program, we recommend that the Secretary of Defense (1) before approving the detailed program plans for each JTRS domain, ensure that they reflect stable and well-defined requirements; knowledge-based acquisition strategies; clear and meaningful test plans that address the need to test not only individual JTRS components but also the overall networking capabilities of JTRS; and funding commitments necessary to execute the program; and (2) develop JTRS migration and fielding plans that are consistent with a well-developed concept of operations for using JTRS networking capabilities and effectively balance recent investments in acquiring legacy radios with future needs. In its letter commenting on the draft of our report, DOD agreed with our recommendations. DOD’s letter is reprinted in appendix II. DOD noted that the report recommendations are consistent with the measures taken by the department to restructure the JTRS program, develop JTRS radios in an incremental manner, and effectively balance recent investments in legacy radios with future needs. While we acknowledge that DOD has taken measures to put the JTRS program in a better position to move forward, we continue to believe that additional measures, as outlined in our recommendations, are needed to ensure that the program will be successfully executed and achieve its intended objectives. DOD also provided detailed comments, which we incorporated where appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have jurisdiction and oversight responsibilities for DOD. We will also send copies to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director, Office of Management and Budget. Copies will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841, or Assistant Director John Oppenheim at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. To assess whether recent actions taken by DOD put the JTRS program in a better position to succeed, we obtained briefings on restructuring assessments, plans, and decisions, analyzed documents describing Increment 1 requirements, and interviewed program and product officials from the Joint Program Executive Office, San Diego, California. To obtain the perspective of organizations that provide policy guidance, oversight, and technology support for the JTRS program, we interviewed officials from the Office of the Under Secretary of Defense, Acquisition, Technology, and Logistics, Arlington, Virginia; Office of the Under Secretary of Defense, Comptroller, Arlington, Virginia; Assistant Secretary of Defense, Networks and Information Integration, Arlington, Virginia; Office of the Director, Operational Test and Evaluation, Arlington, Virginia; Assistant Secretary of the Army for Acquisition, Logistics, and Technology, Arlington, Virginia; and the Army’s Communications-Electronics Research, Development and Engineering Center, Fort Monmouth, New Jersey. 
To identify the risks that might continue to undermine the successful fielding of JTRS, we obtained and analyzed briefings from the JTRS Domain, Program, and Product Managers, as well as the JTRS Technical Director, San Diego, California. We also reviewed Selected Acquisition Reports, budget requests, acquisition decision memorandums, and the JTRS governance structure. We interviewed officials from the National Security Agency, Fort Meade, Maryland; Joint Interoperability Test Command, Fort Huachuca, Arizona; Defense Contract Management Agency, Anaheim, California; Project Manager for Future Combat Systems Network Systems Integration, Fort Monmouth, New Jersey; and, JTRS contractors in Arlington, Virginia and Anaheim, California. Our review was conducted from August 2005 through August 2006 in accordance with generally accepted government auditing standards. In addition to the contact above, Katherine Bittinger, Ridge Bowman, Karen Sloan, Amy Sweet, James Tallon, Tristan To, Hai Tran, and Paul Williams made key contributions to this report. | In 1997, the Department of Defense (DOD) initiated the Joint Tactical Radio System (JTRS) program, a key element of its effort to transform military operations to be network centric. Using emerging software-defined radio technology, the JTRS program plans to develop and procure hundreds of thousands of radios that give warfighters the capability to access maps and other visual data, communicate via voice and video, and obtain information directly from battlefield sensors. The JTRS program has encountered a number of problems, resulting in significant delays and cost increases. The program is currently estimated to total about $37 billion. Given the criticality of JTRS to DOD's force transformation, Congress directed GAO to continue its ongoing review of the JTRS program. This report (1) assesses whether a recent restructuring puts the program in a better position to succeed and (2) identifies any risks that challenge the successful fielding of JTRS. The proposed JTRS restructuring--a plan DOD approved in March 2006--appears to address and reduce program risks that GAO and others have documented in recent years. While still meeting key requirements, including those related to DOD's network centric transformation effort, the revised approach is expected to develop and field capabilities in increments rather than attempting to develop and field the capabilities all at once. Costly and non-transformational requirements will be deferred to later increments. Deferring these requirements will allow more time to mature critical technologies, integrate components, and test the radio system before committing to production. JTRS program management has also been strengthened through the establishment of a Joint Program Executive Office (JPEO). The more centralized management structure should help the program improve oversight and coordination of standards, system engineering, and development of the radios. The real test will be in execution, and, for that, several management and technical challenges remain. First, JPEO must finalize the details of the restructuring, including formal acquisition strategies, independent cost estimates, and test and evaluation plans. DOD also needs to develop migration and fielding plans for how JTRS networking capabilities will be used. Completing and obtaining DOD's approval of these activities is needed to ensure the JTRS program is executable. 
There are also a number of longer-term technical challenges that the JTRS program must address. For example, the proposed interim solutions for enabling network interoperability among different JTRS variants have yet to be developed. In addition, integrating the radio's hardware onto diverse platforms and meeting respective size, weight, and power limitations has also been a longstanding challenge that must be overcome. Furthermore, operating in a networked environment open to a large number of potential users has generated an unprecedented need for information assurance. This need has resulted in a lengthy, technically challenging, and still evolving certification process from the National Security Agency. At the same time, the program must address the need to obtain and sustain commitments and support from the military services and other stakeholders--a challenge that has often hampered joint development efforts in the past. The extent to which DOD overcomes these challenges will determine the extent to which the program manages cost, schedule, and performance risks and supports JTRS-dependent military operations. |
Aviation safety is a priority goal for FAA. That priority is reflected in the Administration’s budget for fiscal year 2008, which requests $1.9 billion to promote aviation safety and efficiency. To the credit of FAA and the aviation industry, U.S. commercial aviation has had an extraordinary safety record in recent years. In 1997, FAA established a goal to reduce the commercial fatal accident rate by 80 percent in 10 years and for many years the agency has made incremental progress toward that goal. However, increased air traffic, leading to congestion and delays, is straining the efficiency and potentially the safety of the nation’s airspace system. Moreover, while commercial aviation safety trends have been positive over the last several years, FAA did not meet its performance target for commercial aviation accidents last year and does not expect to meet its target for 2007. If air traffic triples as expected over the next two decades and the accident rate of recent years is unchanged, there would be nine fatal commercial aviation accidents each year, on average. To maintain a safe and efficient airspace system, especially if substantial growth in the industry materializes, it will be important for FAA to have well-established, efficient, and effective processes in place to provide an early warning of hazards that can lead to accidents. It will also need a skilled workforce to implement these processes. FAA is moving to a system safety approach to oversight and has established risk-based, data- driven safety programs to oversee the industry and a workforce that includes approximately 4,500 safety inspectors and engineers to implement those programs, about 15,420 air traffic controllers, and nearly 7,200 technicians responsible for maintaining FAA’s air traffic control equipment and facilities. In addition, FAA leverages its inspector and engineer workforce through its “designee” programs, in which about 13,400 private individuals and over 200 organizations have been delegated to act on the agency’s behalf. Our recent work has identified data limitations and human resource challenges facing the agency that affect its ability to implement these programs and oversee aviation safety. FAA’s ability to identify and respond to trends and early warnings of safety problems and to manage risk is limited by incomplete and inaccurate data. While FAA has developed risk-based processes for monitoring and inspecting the aviation industry, in some cases, the implementation of those processes is hampered by the lack of reliable and complete data, which are important for identifying and mitigating safety risks. In other cases, FAA does not fully utilize the data it collects by evaluating or analyzing it for nationwide safety trends. For example, FAA does not collect actual flight activity data for general aviation operators and air taxis. Instead, the agency uses an annual survey to query a sample of registered aircraft owners about the activity of their aircraft during the previous year. The National Transportation Safety Board (NTSB) noted a number of problems with these data, such as historically low response rates, and concluded that FAA’s data do not accurately portray changes in general aviation activity. As a result, FAA lacks information to monitor the rate of general aviation accidents, which decreased from 1,715 in 2002 to about 1,500 in 2006. (See fig. 1.) 
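As a simplified illustration of why activity data matter (general aviation accident rates are conventionally expressed per 100,000 flight hours, and the flight-hour denominator is precisely the activity measure that FAA's survey is meant to supply), the rate FAA would need to monitor takes the form

\[
\text{accident rate} \approx \frac{\text{number of accidents}}{\text{flight hours flown}} \times 100{,}000.
\]

Without a reliable denominator, the roughly 12 to 13 percent decline in accident counts between 2002 and 2006 cannot be distinguished from a comparable decline in flying activity, since a proportional drop in flight hours would leave the rate unchanged. 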
Therefore, the agency cannot meaningfully evaluate changes in the number of general aviation accidents or determine the effect of its general aviation safety initiatives. NTSB made a number of recommendations to FAA to improve the accuracy of the survey data, such as improving the currency of aircraft owner contact information. As another example, FAA does not collect basic data to measure changes in the air ambulance industry, such as flight hours or number of trips flown. From 1998 through 2005, the air ambulance industry averaged 11 accidents per year, peaking at 18 accidents in 2003. (See fig. 2.) Without data about the number of flights or flight hours, FAA and the air ambulance industry are unable to identify whether the increased number of accidents has resulted in an increased accident rate, or whether it is a reflection of growth in the industry. Data describing the safety trends of the industry are essential to understanding the impact of FAA efforts to improve air ambulance safety. In addition, while FAA receives important data, including self-reporting of safety violations, through its partnership programs with industry, the agency does not evaluate this information for nationwide trends. According to FAA officials, the Aviation Safety Action Program, Aviation Safety Reporting Program, and Voluntary Disclosure Reporting Program allow the agency to be aware of many more safety incidents than are discovered during inspections and surveillance. Although FAA tracks the actions taken to resolve the individual safety violations that it learns about through these programs, it does not evaluate such information in the aggregate to identify trends in violations and their potential cause in order to improve safety. We recommended that FAA develop a continuous evaluative process for its industry partnership programs, and use it to create measurable performance goals for the programs and track performance towards those goals. FAA has not taken these actions, but has begun to address other data issues. FAA recognizes the critical nature of the issues associated with its data. To address its data limitations, FAA is in the early stages of planning the Aviation Safety Information Analysis and Sharing system—a comprehensive new data system that is expected to provide the agency with access to a vast amount of safety data that reside with entities such as NTSB and industry partners including airlines and repair stations. Working with the National Aeronautics and Space Administration (NASA), FAA began planning for the new system in 2006. Because this activity is in the early planning stages, our concerns about FAA’s data remain relevant. The fiscal year 2008 budget for FAA proposes $32 million for safety databases and computer systems. As FAA prioritizes the activities that it undertakes with such funds, it will be important to continue addressing these critical data limitations. Changes to FAA’s oversight programs, such as the planned rapid expansion of the Air Transportation Oversight System (ATOS), from 16 air carriers in 2005 to approximately 115 air carriers by the end of 2007, will pose workload challenges for FAA’s safety inspector workforce of about 3,600. As FAA moves air carriers under the ATOS program, it will also move inspectors to the program. As of January 2007, the 51 air carriers in ATOS were overseen by 829 safety inspectors. Unlike other FAA inspection programs, ATOS inspectors are dedicated to an air carrier and generally cannot be used to inspect other entities. 
Inspectors who are not part of ATOS, on the other hand, have duties in addition to inspecting air carriers—such as overseeing repair stations, designees, and aviation schools, and investigating accidents. In prior work, we found that about 75 percent of the non-ATOS inspectors had responsibility for more than 3 entities and about half had responsibility for more than 15. In addition, we found that ATOS requires more inspectors per airline than the traditional inspection approach. As inspectors are transitioned to ATOS, the remaining inspectors will have to add those other entities to their workload. With the expansion of ATOS that will continue into fiscal year 2008, it will be important to monitor the magnitude of the shift in resources and the effect it may have on FAA’s overall capability to oversee the industry. Part of the challenge that FAA faces with regard to safety inspectors is improving its process for determining staffing needs. This is especially important as oversight activities and workload shifts with the expansion of ATOS and other program changes, yet FAA lacks staffing standards for safety inspectors. The National Academy of Sciences, under a congressional mandate, recently completed a study for FAA that analyzed FAA’s staffing processes for safety inspectors. The study identified a number of issues that FAA must address when developing a staffing model for safety inspectors. For instance, the study included concerns that the current staffing process does not focus resources in the areas of greatest need and the match between individual inspectors’ technical knowledge and the facilities and operations they oversee is not always optimal. The study recommended a process for FAA to follow to develop a staffing model and identified key factors—such as changes in aircraft and systems, changes in FAA oversight practices including a shift to a system safety approach through programs like ATOS and increasing the use of designees, and new knowledge and skill demands—that should be considered in developing the model. In response to the Academy’s recommendations, FAA expects to develop a staffing model, but the agency does not have a specific timeframe for initiating this effort. With nearly $1 billion of the fiscal year 2008 budget request for FAA covering personnel compensation and benefits for aviation safety and operations, these workload and staffing challenges are critical to address. During the coming decade, FAA will need to hire and train thousands of air traffic controllers to replace those who will retire and leave for other reasons. FAA estimated it will lose 10,291 controllers, or about 70 percent of the controller workforce, during fiscal years 2006 through 2015, primarily due to retirements. To replace these controllers and accommodate increases in air traffic while accounting for expected productivity increases, FAA plans to hire a total of 11,800 new controllers from fiscal year 2006 through 2015. In fiscal year 2006, FAA hired 1,116 controllers. The Administration’s budget for fiscal year 2008 proposes about $4.4 billion for salaries and benefits for the air traffic organization account, which includes FAA’s large air traffic controller workforce. The fiscal year 2008 proposal includes FAA’s plans to hire 1,420 air traffic controllers, which would bring the total number of air traffic controllers to about 15,000. Figure 3 shows the estimated losses each year as well as the number of planned hires. Recent events may exacerbate the hiring situation. 
Data indicate that controllers are retiring at a faster rate than FAA anticipated. FAA projected 341 retirements for fiscal year 2005; 465 controllers actually retired—36 percent more than FAA’s estimate. Similarly, in fiscal year 2006, 25 percent more controllers retired than FAA projected. To meet its hiring target of 930 controllers in fiscal year 2006, FAA shifted about 200 of its planned hires from fiscal year 2007 to fiscal year 2006 by speeding up the initial screening and training process. According to FAA, it is on track to hire between 1,300 and 1,400 controllers in fiscal year 2007. To keep on track, FAA has recently expanded its hiring sources, which had focused on individuals with prior FAA or Department of Defense (DOD) air traffic control experience and graduates from FAA’s collegiate training initiative program, to include the general public. This strategy is needed, according to FAA officials, because DOD has recently become less of a hiring source for controllers due to military incentives for retaining controllers and higher salaries than FAA’s entry-level salary. It is also important for FAA to ensure that air traffic control facilities have adequate staffing based on their unique traffic demands and the accuracy of FAA’s retirement forecast. Historically, FAA has computed staffing standards, which are the number of controllers needed on a systemwide basis, but distribution of these totals to the facility level was a negotiated process. The staffing standards did not take into account the significant differences in complexity and workload among FAA’s 300 terminal and enroute control facilities, which can lead to staffing imbalances. FAA has begun developing and implementing new staffing standards that use an algorithm that incorporates traffic levels and complexity of traffic at the facility level to determine the number of air traffic controllers needed, according to an FAA official. As FAA further refines its process for determining controller staffing needs, the ultimate objective is to assess the traffic level and complexity on a sector-by-sector basis to develop more accurate controller staffing requirements. This process is in the early stages of implementation and it is too early to assess the outcome. Such staffing standards for air traffic controllers as well as safety inspectors are important to ensure that FAA deploys its resources for fiscal year 2008 and later years in a cost-effective and risk-based manner. FAA has made significant progress in implementing management processes that use leading practices of private sector businesses, but further work remains to fully address past problems. Historically, those problems included chronic cost and schedule difficulties associated with operating and modernizing the nation’s air traffic control system as well as weaknesses in FAA’s financial management. In 1995, we declared FAA’s air traffic control modernization program a high-risk initiative because of its cost, complexity, and systemic management and acquisition problems. In 1999, we also placed FAA on the high-risk list for financial management, noting weaknesses that rendered the agency vulnerable to fraud, waste, and abuse by undermining its ability to manage operations and limiting the reliability of financial information provided to the Congress. FAA has made significant progress in both areas and we removed FAA’s financial management from our high risk list in 2005. 
However, additional work is needed in managing its acquisitions and finances and is crucial to developing a sustainable capability for delivering priority systems on budget and on time. In addition, FAA, in partnership with other federal agencies, is embarking on the development of NextGen—one of the federal government’s most complex and comprehensive undertakings in recent times. FAA faces challenges associated with moving forward from planning to implementing NextGen. FAA has taken actions to operate in a more business-like manner and enable the agency to more economically and efficiently manage the $14.1 billion requested for its fiscal year 2008 budget. Since we designated FAA financial management as high-risk in 1999, FAA has made significant improvements, including implementing a new financial management system called Delphi and developing a cost accounting system. Additionally, FAA received unqualified opinions from auditors on its annual financial statements for fiscal years 2001 through 2005, in spite of material internal control weaknesses that the auditors identified. This progress led us to remove FAA financial management from our high risk list in 2005. Nonetheless, external auditors issued a qualified opinion on FAA’s fiscal year 2006 financial statements for the first time since 2000 and repeated a material internal control weakness that was reported in 2005. The opinion and internal control report stemmed from FAA’s inability to support the accuracy and completeness of the construction-in-progress account, reported in the financial statements as $4.7 billion. Difficulties with this account, which includes costs for projects such as radars, runway guidance systems, and aviation safety and security systems, have been a longstanding concern. FAA has begun work to address this problem. However, it will be important for FAA to develop a systematic solution to this problem, so that it does not recur. FAA’s efforts towards improved financial management also include establishing a cost control and cost reduction program. According to agency officials, each line of business—such as FAA’s Air Traffic Organization (ATO), which is responsible for managing and modernizing the air traffic control system—is annually required to propose at least one cost control initiative, and the Administrator tracks and reviews progress on these initiatives monthly. According to FAA, these initiatives have yielded a total of $99.1 million in cost savings and $81.9 million in cost avoidance for fiscal years 2005 and 2006. Additional cost control efforts include outsourcing flight service stations, which FAA estimates will save $2.2 billion over 10 years, and restructuring its administrative service areas from 9 separate offices to 3, which FAA estimates will save up to $460 million over 10 years. We have ongoing work that is assessing FAA’s cost control strategy and identifying additional cost savings opportunities that may exist. For example, we have previously reported the need for FAA to pursue further cost control options, such as exploring additional opportunities for consolidating facilities and contracting out more of its services. FAA has taken steps to improve its software acquisition and investment management processes and for the last 3 years has reported meeting its cost and schedule targets for the acquisition of major systems, including air traffic control systems. 
These improvements are particularly important since FAA plans to spend about $9.4 billion from fiscal year 2007 through fiscal year 2011 to upgrade and replace air traffic control systems. To better manage its information technology investments, including its software-intensive air traffic control systems, and address problems we have identified, FAA has changed its acquisition management guidance to require review of all investments—new systems as well as systems in service. In addition, FAA has established a cost estimating methodology for its investments. FAA has also developed and applied a process improvement model to assess the maturity of its software and systems capabilities, resulting in, among other things, enhanced productivity and greater ability to predict schedules and resources. Further, FAA has made progress in expanding its enterprise architecture—a comprehensive guide to its plans for acquiring new systems—to include the initial requirements for NextGen. However, making further improvements and institutionalizing them throughout the agency will continue to be a challenge for FAA. For example, FAA's acquisition management guidance does not clearly indicate whether the reviews of in-service systems include reevaluations of projects' alignment with strategic goals and objectives, as we recommended. In addition, the agency has yet to implement its cost estimating methodology. Furthermore, FAA has not established a policy to require use of its process improvement model on all major acquisitions for the national airspace system. Additionally, as FAA begins to detail the scope and system requirements of NextGen, it will be important to adapt and expand the enterprise architecture for the national airspace system to guide these future plans. Until the agency fully addresses these residual issues, it will continue to risk program management problems affecting cost, schedule, and performance. With a multibillion-dollar acquisition budget, addressing these issues is as critical as ever. Institutionalizing these financial, acquisition, and information technology improvements will be a challenge for FAA, especially in view of the imminent departure of the Chief Operating Officer later this month and the departure of the Administrator, who will reach the end of her 5-year term this September. We have reported that the experiences of successful transformations and change management initiatives in large public and private organizations suggest that it can take 5 to 7 years or more until such initiatives are fully implemented and cultures are transformed in a sustainable manner. Such changes require focused, full-time attention from senior leadership and a dedicated team. Work to determine the capabilities and requirements that will be needed for NextGen and to produce a comprehensive vision for that system is nearing completion; however, given the staggering complexity of this ambitious effort to modernize and transform the air traffic control system over the next two decades, it will not be easy to move from planning to implementation. To plan NextGen, Congress authorized the creation of the Joint Planning and Development Office (JPDO) in 2003. JPDO is housed within FAA, and the Administration's fiscal year 2008 budget includes $14.3 million to support JPDO. To carry out its planning function, JPDO is required to operate in conjunction with multiple government agencies. 
JPDO’s approach requires unprecedented collaboration and consensus among many stakeholders—federal and nonfederal—about necessary system capabilities, equipment, procedures, and regulations. Recently, JPDO has made progress in developing key planning documents, including a cost estimate for NextGen. However, as efforts move forward to implement NextGen, it will be important to identify the source and funding for completion of intermediate technology development and determine how FAA can best manage the complex implementation and integration of NextGen technologies. Without a timely transition to NextGen capabilities, JPDO officials estimate a future gap between the demand for air transportation and available capacity that could cost the U.S. economy billions of dollars annually. FAA and the other JPDO partners have been working to refine the vision for NextGen and achieve a general consensus on that vision. The bulk of JPDO’s planning has been to develop three critical documents—a concept of operations, enterprise architecture, and operational improvement roadmaps. Once these key documents are completed in the next few months, it will be important to synchronize them with partner agency planning documents, including FAA’s implementation plan for NextGen— the Operational Evolution Partnership (OEP)—and to continue to use the documents to drive agency budget decisions. The OEP is intended as a comprehensive description of how the agency will implement NextGen, including the required technologies, procedures, and resources. JPDO is continuing to work with the Office of Management and Budget (OMB) to develop a unified, cross-agency program for NextGen funding requests. Given the criticality of NextGen, another important planning document— possibly the most important for Congress—is a comprehensive estimate of the costs to JPDO partner agencies, particularly FAA, for the required research, development, systems acquisitions, and systems integration. Such an estimate does not yet exist. As we reported in November 2006, a limited, preliminary cost estimate concluded that FAA’s budget under a NextGen scenario would average about $15 billion per year through 2025, or about $1 billion more annually (in today’s dollars) than FAA’s fiscal year 2006 appropriation. A JPDO official told us they have submitted a limited NextGen cost estimate to OMB with the 2008 budget request. As of February 9, 2007, JPDO had not publicly released its cost estimate for NextGen. According to the Department of Transportation, the Administration’s budget for fiscal year 2008 includes $175 million to support key FAA investments in NextGen. According to JPDO officials, their current estimate focuses only on the near-term capital needs for FAA’s ATO portfolio. To develop what they believed would be a more accurate cost estimate, JPDO also focused on the funding necessary to achieve only the capabilities of the NextGen system around 2016, rather than the long-term 2025 capabilities. JPDO then laid out the major systems and investments required by ATO to achieve the mid-term vision and the related costs for ATO. While JPDO’s new estimate will be a step toward understanding the costs of NextGen, this estimate is still incomplete. Much work remains to develop a comprehensive cost estimate for NextGen that includes the costs to the rest of FAA (beyond ATO), the other JPDO partner agencies, and industry. 
A JPDO official told us the agency is working to develop a comprehensive estimate and plans to have one ready to submit with the 2009 budget request. This comprehensive estimate is intended to describe the business case for NextGen and detail the investments that will be required by all the JPDO partner agencies to achieve the NextGen vision by 2025. The successful implementation of NextGen will depend, in part, on resolving the uncertainty over which entities will fund and conduct the research and development necessary to achieve some key NextGen capabilities and to support the operational roadmaps. In the past, a significant portion of aeronautics research and development, including intermediate technology development, has been performed by NASA. However, our analysis of NASA’s aeronautics research budget and proposed funding shows a 30 percent decline, in constant 2005 dollars, from fiscal year 2005 to fiscal year 2011. To its credit, NASA plans to focus its research on the needs of NextGen. However, NASA is also moving toward a focus on fundamental research and away from developmental work and demonstration projects. FAA has determined that research gaps now exist as a result of both NASA’s cuts to aeronautical research funding and the expanded requirements for NextGen coming from JPDO. These gaps are in the activities of applied research and development—activities that will be required to implement new policies, demonstrate new capabilities, set parameters for certification of new systems, and develop technologies for transfer to industry. It will be important for both FAA and JPDO to find ways, in the near term, to keep the necessary research and development on track to support implementation of NextGen by 2025. In 2006, officials from FAA and JPDO initiated an assessment of NextGen research and development requirements. Their goal was to identify specific research initiatives that were not currently funded, but which they said must be initiated no later than fiscal year 2009 to comply with the operational roadmaps. The preliminary findings from this assessment led to increased budget requests for FAA to help lessen the research and development gaps. However, JPDO officials noted that a research and development gap remains, with items in the research and development pipeline that need funding to take them from concept to development. Other options for addressing the gap are for JPDO and FAA to further explore ways to leverage the research being conducted in other agencies or to partner with industry or academia. For example, JPDO and FAA have already identified research within DOD on alternative fuels that, with a modest investment, could be leveraged to include civil aviation. Currently, it is unknown how all of the significant research and development activities inherent in the transition to NextGen will be conducted or funded. Another issue with regard to NextGen implementation will be FAA’s ability to manage the systems acquisitions and integration needed to implement a system as broad and complex as NextGen. In the past, a lack of expertise contributed to weaknesses in FAA’s management of air traffic control modernization efforts. Industry experts with whom we have spoken continue to question whether FAA will have the technical expertise needed to implement NextGen. 
In November, we recommended that FAA examine its strengths and weaknesses with regard to the technical expertise and contract management expertise that will be required to define, implement, and integrate the numerous complex programs inherent in the transition to NextGen. In response to our recommendation, FAA is considering convening a blue ribbon panel to study this issue and make recommendations to the agency about how to best proceed with its management and oversight of the implementation of NextGen. We believe that such a panel could help FAA begin to address this challenge. As it modernizes the national airspace system to meet the nation’s future air transportation needs, FAA must not only transform the air traffic control system, but also work with airport operators to provide increased capacity at airports to safely handle the projected growth in the demand for air travel. This latter responsibility will include overseeing airports’ efforts to adapt their infrastructure to accommodate the introduction of very light jets, and in the case of the largest airports, the new large Airbus A380. Airports are an integral part of the nation’s transportation system and maintaining their safety and efficiency is an important FAA responsibility. To this end, FAA administers the Airport Improvement Program (AIP), which provides federal funds for development projects at the entire range of the nation’s 3,400 airports—from small general aviation airports to the very largest that handle several million passengers per year. The Administration has proposed cuts in AIP funding and is considering possible changes to the AIP allocation formula as well as increasing the cap on passenger facility charges for airport development projects. Any change in the level or allocation of these funds could have implications for funding airport capital projects. Not only AIP grants but also portions of other FAA programs receive funds from the Airport and Airway Trust Fund, which is largely financed by excise taxes on ticket purchases by airline passengers and aviation fuel. Since these taxes are scheduled to expire at the end of September 2007, ensuring that there is no lapse in revenue to the trust fund will require Congressional action. Without a continued flow of funds to the trust fund, FAA’s ability to carry out AIP and other programs during fiscal year 2008 may be in jeopardy. FAA estimates the total cost for planned airport projects that are eligible for AIP funding, including runways, taxiways, and noise mitigation and reduction efforts, will be about $42 billion for fiscal years 2007 through 2011. This estimate is little changed from the agency’s last estimate in 2004 for the period 2005 to 2009. FAA’s current estimate indicates that over half of the planned development will occur at large and medium hub airports. The Airports Council International—North America (ACI-NA) also provides estimates of planned airport development. ACI-NA includes both AIP-eligible projects and ineligible projects and, as a result, has higher estimates. Historically, airports have received funding for capital development from a variety of sources. As we reported in 2003, the single largest source of financing for airports is tax-exempt bonds, followed by AIP grants and passenger facility charges. Tax exempt bonds are currently supported by airport revenue and, in some cases, by passenger facility charges. Access to these funding sources varies according to airports’ size and funding capabilities. 
Large and medium hub airports depend primarily on tax-exempt bonds, while smaller airports rely principally on AIP grants. Passenger facility charges are a particularly important source of capital for large and medium hub airports because they have the majority of commercial service passengers. The Administration has proposed changing the federal role in financing airport development in its fiscal year 2008 budget proposal, which also includes a reauthorization proposal for FAA that will be submitted later this month. Funding for AIP grants would be reduced and the allocation formula changed. The Administration's reauthorization proposal is expected to provide details on these proposed changes. It is, therefore, currently unclear how a number of issues will be addressed. The reauthorization proposal may clarify the impact on smaller airports, which received about two-thirds of AIP grants in fiscal year 2004. As noted earlier in my statement, smaller airports rely primarily on AIP grants for capital funding. In recent years, statutory changes in the distribution of AIP grants have increased the share to smaller airports. However, under the fiscal year 2008 budget proposal, funding changes would especially impact smaller airports if the current allocation formulas are unchanged in the forthcoming reauthorization proposal. First, primary airport entitlements under AIP would be cut in half from the fiscal year 2006 level. In turn, the small airport fund, which is funded from AIP entitlement amounts that large and medium hub airports must turn back if they impose passenger facility charges, would also be reduced by half. Second, state entitlements for non-primary commercial service and general aviation airports would be reduced from 20 percent to 18.5 percent of total AIP obligations. Finally, discretionary set-aside grants for reliever airports would be eliminated under the fiscal year 2008 budget proposal. Table 1 shows the effect on the amounts available for various types of AIP grants at different funding levels, including the $2.75 billion requested in the Administration's budget and the actual funding level for fiscal year 2006. To help offset any reductions in AIP grants, FAA is also considering allowing airports to collect more revenue from passenger facility charges, which large airports generally prefer. Airlines, however, have been generally opposed to an increase in these charges because they have little control over how passenger facility charges are spent and because they believe these charges reduce passenger demand for air travel. Nonetheless, if airports were to increase charges, additional airport revenue could be generated. Increasing the cap on passenger facility charges would primarily benefit larger airports because these charges are a function of passenger traffic. However, as already noted, under AIP, large airports that collect passenger facility charges must forfeit a certain percentage of their AIP formula funds. These forfeited funds are subsequently divided between the small airport fund, which is to receive 87.5 percent, and the discretionary fund, which is to receive 12.5 percent. Thus, under current law, smaller airports would benefit indirectly from any increases in passenger facility charges, which would help offset reductions in AIP funding. With the excise taxes that fund the Airport and Airway Trust Fund scheduled to expire at the end of fiscal year 2007, Congress will need to act if there is to be no lapse in revenue to the trust fund to fund FAA. 
If the taxes are neither reauthorized by that time nor replaced by other revenue sources for the trust fund, the only revenues to the trust fund will be interest earned on the fund’s cash balance. FAA estimates that two previous lapses in 1996-1997 resulted in the trust fund not receiving about $5 billion in revenue. As of the end of fiscal year 2006, the trust fund’s uncommitted balance— surplus revenues in the trust fund against which no commitments, in the form of budget authority, have been made—was less than $2 billion. The Administration’s budget proposal projects that the uncommitted balance will be about $2 billion at the end of fiscal year 2007. If today’s level of monthly tax revenue continues, a 2- to 3-month lapse in fiscal year 2008 could reduce the revenue to the trust fund enough to cause the uncommitted balance to fall to zero in fiscal year 2008. Most of FAA’s funding comes from the trust fund—the fiscal year 2008 budget request for FAA proposes about 80 percent of the agency’s funding from the trust fund with the remainder from the general fund. If the trust fund balance falls to zero, continuation of FAA’s programs—including efforts to address some of the safety and management challenges that I have discussed—would depend on providing additional general revenues. For further information on this testimony, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or dillinghamg@gao.gov. Individuals making key contributions to this testimony include Paul Aussendorf, Jay Cherlow, Jessica Evans, Colin Fallon, Carol Henn, Ed Laughlin, Ed Menoche, Faye Morrison, Colleen Phillips, Taylor Reeves, Richard Scott, Teresa Spisak, and Larry Thomas. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | FAA operates one of the safest air transportation systems in the world. It is, however, a system under strain. The skies over America are becoming more crowded every day. FAA faces the daunting task of safely integrating a growing influx of passengers and aircraft into the system and simultaneously leading the transition to the Next Generation Air Transportation System (NextGen)--a complicated effort to modernize the system. FAA's broad responsibilities to maintain and modernize the nation's air transportation system must be met in an uncertain budgetary and long-term fiscal environment. GAO's concerns about financing the nation's transportation system, including aviation, led GAO to designate this issue as high-risk. To ensure continued safety within the national airspace system, FAA is using risk-based, data-driven safety programs to oversee the industry; however, the agency faces data and human resource challenges that affect its ability to fully implement these programs. GAO has previously recommended that FAA improve the accuracy and completeness of the safety data and analysis of that data needed to monitor safety trends, fully implement its safety programs, and assess their effectiveness to determine if they are focused on the greatest safety risk. FAA has made progress in this area but more remains to be done. 
FAA's ability to oversee the aviation industry will be further affected by its ability to hire, train, and deploy its primary workforce of safety inspectors, engineers, and air traffic controllers. The expansion of FAA's oversight program for air carriers will result in workload shifts for its inspectors that will make it important for FAA to improve its staffing process. In addition, the agency estimates that it will lose about 70 percent of the air traffic controller workforce over the next 10 years, primarily due to retirements. FAA has made significant progress in implementing management processes and systems that use leading practices of private sector businesses; however, further work remains to institutionalize these efforts. For example, new and improved acquisition processes and oversight have contributed to FAA meeting its acquisition cost and schedule goals for the last three years. Additional work remains, though--FAA received a qualified opinion on its most recent financial audit as a result of lack of support for the accuracy of about $4.7 billion for equipment. Moreover, GAO has previously recommended that FAA should undertake additional efforts to consolidate its facilities and outsource some of its services to further cut costs. Some key challenges for the transition to NextGen include completing the design and cost estimates for NextGen and proposing how that cost will be funded. FAA will also need to assess its capacity to handle the technical and contract management expertise that will be required to oversee the implementation of NextGen. FAA estimates that the total cost for planned airport development that is eligible for funding from the Airport Improvement Program (AIP) will be about $42 billion for 2007 through 2011. FAA's budget request for fiscal year 2008 proposes significant cuts in AIP. These cuts, along with changes to the way AIP is allocated among airports and possible increases in the cap on passenger ticket charges for airport projects, could have implications for the amount of funding available for planned airport development, especially at small airports. Additionally, the taxes that fund the Airport and Airway Trust Fund are scheduled to expire at the end of fiscal year 2007. Until Congress reauthorizes those taxes, FAA's ability to carry out programs related to airport development as well as some other programs throughout the agency may be in jeopardy, compounding the safety and management challenges facing FAA. |
Through a number of legislative actions, Congress has indicated its desire that agencies create telework programs to accomplish a number of positive outcomes. These actions have included recognizing the need for program leadership within the agencies; encouraging agencies to think broadly in setting eligibility requirements; requiring that employees be allowed, if eligible, to participate in telework, and requiring tracking and reporting of program results. Some legislative actions have provided for funding to assist agencies in implementing programs, while other appropriations acts withheld appropriated funds until the covered agencies certified that telecommuting opportunities were made available to 100 percent of each agency’s eligible workforce. The Telework Enhancement Act of 2007, S. 1000, continues the efforts of Congress to achieve greater participation. The most significant congressional action related to telework was the enactment of Section 359 of Public Law No. 106-346 in October 2000, which provides the current mandate for telework in the executive branch of the federal government by requiring each executive agency to establish a policy under which eligible employees may participate in telework to the maximum extent possible without diminishing employee performance. The conference report language further explained that an eligible employee is any satisfactorily performing employee of the agency whose job may typically be performed at least one day per week by teleworking. In addition, the conference report required the Office of Personnel Management (OPM) to evaluate the effectiveness of the program and report to Congress. The legislative framework has provided both the General Services Administration (GSA) and OPM with lead roles for the governmentwide telework initiative, to provide services and resources to support and encourage telework, including providing guidance to agencies in developing their program procedures. In addition, Congress required certain agencies to designate telework coordinators to be responsible for overseeing the implementation of telework programs and serve as points of contact on such programs for the Committees on Appropriations. GSA and OPM provide services and resources to support the governmentwide telework implementation. OPM publishes telework guidance, which it recently updated, and works with the agency telework coordinators to guide implementation of the programs and annually report the results achieved. GSA offers a variety of services to support telework, including developing policy concerning alternative workplaces, managing the federal telework centers, maintaining the mail list server for telework coordinators, and offering technical support, consultation, research, and development to its customers. Jointly, OPM and GSA manage the federal Web site for telework, which was designed to provide information and guidance. The site provides access for employees, managers, and telework coordinators to a range of information related to telework, including announcements, guides, laws, and available training. Although agency telework policies meet common requirements and often share some common characteristics, each agency is responsible for developing its own policy to fit its mission and culture. According to OPM, most agencies have specified occupations that are eligible for telework and most apply employee performance-related criteria in considering authorizing telework participation. 
In addition, OPM guidance states that eligible employees should sign an employee telework agreement and be approved to participate by their managers. The particular considerations in regard to these requirements and procedures will differ among agencies. In our 2003 study of telework in the federal government, we identified 25 key practices that federal agencies should implement in developing their telework programs. A full list of the key practices appears in appendix I. Among those were several practices closely aligned with managing for program results, including: developing a business case for implementing a telework program; establishing measurable telework program goals; establishing processes, procedures, and/or a tracking system to collect data to evaluate the telework program; and identifying problems and/or issues with the telework program and making appropriate adjustments. Yet, in our assessment of the extent to which four agencies—the Department of Education, GSA, OPM, and the Department of Veterans Affairs—followed the 25 key practices, we found these four practices to be among the least employed. In discussing the business case key practice in our 2003 study, we cited the International Telework Association and Council, which had stated that successful and supported telework programs exist in organizations that understand why telework is important to them and what specific advantages can be gained through implementation of a telework program. A business case analysis of telework can ensure that an agency’s telework program is closely aligned with its own strategic objectives and goals. Such an approach can be effective in engaging management on the benefits of telework to the organization. For example, making a case for telework as a part of an agency’s Continuity of Operations (COOP) plan can help organizations understand why they support telework, address relevant issues, minimize business risk, and make the investment when it supports their objectives. Through business case analysis, organizations have been able to identify cost reductions in the telework office environment that offset additional costs incurred in implementing telework and the most attractive approach to telework implementation. None of the four agencies we reviewed, however, had effectively implemented this practice. Moreover, none of the four agencies had established measurable telework program goals. As we noted in our report, OPM’s May 2003 telework guide discussed the importance of establishing program goals and objectives for telework that could be used in conducting program evaluations for telework in such areas as productivity, operating costs, employee morale, recruitment, and retention. However, even where measurement data are collected, they are incomplete or inconsistent among agencies, making comparisons meaningless. For example, in our 2005 report of telework programs in five agencies—the Departments of State, Justice, and Commerce; the Small Business Administration; and the Securities and Exchange Commission—measuring eligibility was problematic. Three of the agencies excluded employees in certain types of positions (e.g., those having positions where they handle classified information) when counting and reporting the number of eligible employees, while two of the agencies included all employees in any type of position when counting and reporting the number of eligible employees, even those otherwise precluded from participating. 
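To illustrate why such definitional differences undermine comparisons, consider a simplified, hypothetical example (the employee counts below are illustrative only and are not drawn from the agencies we reviewed). One measure often derived from these counts is

\[
\text{reported participation rate} = \frac{\text{employees with telework agreements}}{\text{employees counted as eligible}}.
\]

If two agencies each report 1,000 employees with telework agreements, but one counts 4,000 employees as eligible after excluding restricted positions while the other counts all 8,000 of its employees, their reported rates are 25 percent and 12.5 percent, even though actual telework activity at the two agencies may be identical. 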
With regard to the third key practice aligned with managing for results—establishing processes, procedures, and/or a tracking system to collect data to evaluate the telework program—in our 2003 review we found that none of the four agencies studied were doing a survey specifically related to telework or had a tracking system that provided accurate participation rates and other information about teleworkers and the program. At that time, we observed that lack of such information not only impeded the agencies in identifying problems or issues related to their programs but also prevented them from providing OPM and Congress with complete and accurate data. Also, in our 2005 study at five agencies, we found that four of the five agencies measured participation in telework on the basis of employees' potential to telework rather than their actual usage. The fifth agency reported the number of participants based on a survey of supervisors who were expected to track teleworkers. According to OPM, most agencies report participation based on telework agreements, which can include both those for employees teleworking on a continuing basis as well as those for episodic telework. None of the five agencies we looked at had the capability to track who was actually teleworking or how frequently, despite the fact that the fiscal year 2005 consolidated appropriations act covering those agencies required each of them to provide a quarterly report to Congress on the status of its telework program, including the number of federal employees participating in its program. At that time, two of the five agencies said they were in the process of implementing time and attendance systems that could track telework participation, but had not yet fully implemented them. The other three agencies said that they did not have time and attendance systems with the capacity to track telework. In related conference report language, the conferees stated: "The conferees are troubled that many of the agencies' telework programs do not even have a standardized manner in which to report participation. The conferees expect each of these agencies to implement time and attendance systems that will allow more accurate reporting." Despite this language, officials at four of the five agencies said that they have not yet developed such systems and are still measuring participation as they did in 2005. For the fifth agency—the Department of Justice (DOJ)—an official told us that the department has now implemented a Web-based time and attendance system in most bureaus and that this system allows DOJ to track actual telework participation in those bureaus. According to this official, the Federal Bureau of Investigation is the major exception, but DOJ is working towards having all bureaus use this system. As for the fourth key practice closely related to managing for program results—identifying problems and/or issues with the telework program and making appropriate adjustments—none of the four agencies we reviewed for our 2003 study had fully implemented this practice, and one of the four had taken no steps to do so despite the importance of using data to evaluate and improve telework programs. An OPM official told us, for example, that she did not use the telework data she collected to identify issues with the program; instead, she relied on employees to bring problems to her attention. To help agencies better manage for results through telework programs, in our 2005 study, we said that Congress should determine ways to promote more consistent definitions and measures related to telework. 
In particular, we suggested that Congress might want to have OPM, working through the Chief Human Capital Officers Council, develop a set of terms, definitions, and measures that would allow for a more meaningful assessment of progress in agency telework programs. Some information could be improved by more consistent definitions, such as eligibility. Some information may take additional effort to collect, for example, on actual usage of telework. Some of this information may already be available through existing sources. The Federal Human Capital Survey, for example, which is administered biennially, asks federal employees about their satisfaction with telework. In the latest survey, only 22 percent indicated they were satisfied or very satisfied, while 44 percent indicated they had no basis to judge—certainly there seems to be room for improvement there. In any case, OPM and the Chief Human Capital Officers Council are well-situated to sort through these issues and consider what information would be most useful. The council and OPM could also work together on strategies for agencies to use the information for program improvements, including benchmarking. S. 1000 is intended to enhance the existing legislative framework and provides that all employees of executive agencies are eligible for telework except in some circumstances related to an employee’s duties and functions. In addition, the bill addresses the coverage of employees in the legislative and judicial branches and provides that within 1 year from the date of enactment, policies shall be established to allow such employees, unless otherwise excluded, to participate in telework to the maximum extent possible without diminishing employee performance or legislative or judicial branch operations. The bill further recognizes the importance of leadership in promoting an agency’s telework program by requiring the appointment of a senior-level management official to perform several functions to promote and enhance telework opportunities. We have several observations to offer on the bill. As we have discussed with your staff, we have specific concerns about section 5 of the bill, which would require GAO to establish and implement a rating system for agency compliance with and participation in telework initiatives and report the results. For executive branch agencies, we believe this function is more appropriately placed with OPM. A GAO rating system that does not have the benefit of a full GAO evaluation of the underlying information would raise concerns that our independence is compromised if we were asked at a future time to evaluate telework programs in the federal government. Accordingly, we have provided Committee staff with substitute language that would place these rating and report functions in OPM, the agency that is currently responsible for reporting on most telework activities and participation in the executive branch. Our substitute language would have the Comptroller General instead provide his views on the OPM report to the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Oversight and Government Reform within 6 months of the report. We would also like to bring several other issues to your attention. The bill would extend coverage of these telework initiatives to the legislative and judicial branches. 
We suggest substituting a reference to “the head of each legislative branch entity” in sections 2(c)(3) and 4(a) of the bill so that the heads of the Library of Congress, the Government Printing Office, and GAO, for example, would be responsible for developing agency policies on telework, determining which employees are eligible for telework, and designating senior-level employees to serve as telework managing officers. This approach would be consistent with the coverage of the executive branch under the bill where the head of each agency would perform similar functions. With regard to the bill’s requirement to appoint a telework managing officer in each executive branch agency, it is not clear how that employee’s duties would relate to the duties of the agency officials currently designated as telework coordinators pursuant to the provisions of section 627 of Public Law No. 108-199. Another provision of the bill would define telework as occurring on at least 2 business days per week, leaving unclear how this would relate to the broader definitions of telework currently defined in existing legislation and OPM guidance, which includes episodic or occasional instances. It is also unclear whether the bill intends to allow agencies to consider employee performance in making telework eligibility decisions. Current legislation and agency practice requires employees to be performing satisfactorily. The bill also provides for “productivity awards” for teleworking employees, but it is not clear whether nonteleworking employees would also be eligible to receive productivity awards and would be evaluated on the same performance standards. We would note that one of the key practices identified in our 2003 report was ensuring that the same performance standards are used to evaluate both teleworkers and nonteleworkers. The perception that care had not been taken to establish fair and equitable eligibility criteria could present performance and morale issues. Finally, the bill includes among the duties of the telework managing officer assisting the head of the agency in designating employees to telework in order to continue agency operations in the event of a major disaster as defined under the Stafford Act, 42 U.S.C. § 5122. We would note, however, that telework can be effective in a variety of emergency conditions not limited to those emergencies defined under the Stafford Act. For example, we reported that GAO’s telework capability was significant to assisting the House of Representatives and minimizing the disruption to its own operations when anthrax bacteria were released on Capitol Hill in 2001. In conclusion, telework is a key strategy to accomplish a variety of federal purposes. Telework is an investment in both an organization’s people and the agency’s capacity to perform its mission. We continue to believe that OPM and the Chief Human Capital Officers Council are well-positioned to help agencies better manage for results through telework. Mr. Chairman and members of the subcommittee, this completes my statement. I would be pleased to respond to any questions that you might have. For further information on this testimony, please contact Bernice Steinhardt, Director, Strategic Issues, (202) 512-6806 or at steinhardtb@gao.gov. Individuals making key contributions to this testimony include William J. Doherty, Joyce D. Corry, Allen Lomax, and Michael Volpe. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Telework continues to receive attention within Congress and federal agencies as a human capital strategy that offers various flexibilities to both employers and employees, including the capacity to continue operations during emergency events, as well as benefits to society, such as decreased energy use and pollution. This statement highlights some of GAO's prior work on federal telework programs, including key practices for successful implementation of telework initiatives, identified in a 2003 GAO report and a 2005 GAO analysis of telework program definitions and methods in five federal agencies. In addition, the statement discusses GAO observations on the Telework Enhancement Act of 2007, S. 1000. Through a number of legislative actions, Congress has indicated its desire that agencies create telework programs to accomplish a number of positive outcomes. Many of the current federal programs were developed in response to a 2000 law that required each executive branch agency to establish a telework policy under which eligible employees may participate in telecommuting to the maximum extent possible without diminishing employee performance. The legislative framework has provided the OPM and the General Services Administration (GSA) with lead roles for the governmentwide telework initiative, providing services and resources to support and encourage telework. Although agency telework policies meet common requirements and often share characteristics, each agency is responsible for developing its own policy to fit its mission and culture. In a 2003 report, GAO identified a number of key practices that federal agencies should implement in developing their telework programs. Four of these were closely aligned with managing for program results: (1) developing a business case for telework, (2) establishing measurable telework program goals, (3) establishing systems to collect data for telework program evaluation, and (4) identifying problems and making appropriate adjustments. None of the four agencies we reviewed, however, had effectively implemented any of these practices. In a related review of five other agencies in 2005, GAO reported that none of the agencies had the capability to track who was actually teleworking or how frequently, relying mostly on the number of telework agreements as the measure of program participation. S. 1000 is intended to enhance the existing legislative framework and provides that all employees of the executive, judicial, and legislative branches are eligible for telework except in some circumstances related to an employee's duties and functions. The bill also recognizes the importance of leadership in promoting an agency's telework program by requiring the appointment of a senior-level management official to perform several functions to promote and enhance telework opportunities. GAO's statement suggests changes to the assignment of responsibilities for rating and reporting along with changes to make the responsibilities for heads of agency and entities in the legislative and judicial branches more consistent with those of executive branch officials. The statement also points out several provisions of S. 1000 that are not clear in relation to existing legislation. |
Annually, the Forest Service receives appropriations to operate its nationwide programs. On the basis of these appropriations, the Forest Service allocates a portion to each of its regions to carry out the regional and field office programs. In the case of the Alaska Region, appropriations are further allocated to (1) the regional office, which provides overall direction and support for programs and activities in the region as well as funds for the State and Private Forestry operations located in Anchorage, Alaska; (2) the centralized field costs, which fund programs or activities that usually have regionwide benefits; (3) the four field offices to operate "on the ground" programs; and (4) reserve accounts from which distributions are made during the year to the field offices. As shown in table 1, the Alaska Region's operating costs ranged from $108 million to $127 million annually during fiscal years 1993 through 1997 and were estimated to be about $106 million for fiscal year 1998. Until fiscal year 1998, the Alaska Region used a category of operating costs, known as centralized field costs, as a means to improve efficiency by having one office—either the regional office or one of the field units—manage certain programs or activities for the benefit of multiple offices. Centralized field costs include activities such as payments to the National Finance Center for payroll and accounting services. Overall, the centralized field costs established by the region increased from about $5 million in fiscal year 1993 to almost $9 million in fiscal year 1997, and the number of programs or activities included in these costs fluctuated from 24 to 41 during the same period. However, this overall increase does not reflect the increases or decreases in individual centralized field costs during this period because the same programs or activities were not funded each year, nor did the amounts of individual centralized field costs remain constant. As a component of the Alaska Region's overall operating budget, these costs averaged about 5 percent of the total. Regional office budget officials viewed the use of these centralized field costs as a means to better achieve efficiency because the costs of certain programs or activities generally would be managed centrally rather than allocated among the units, each of which would otherwise have to pay its proportional share. Field office officials cited both advantages and disadvantages of using centralized field costs. Yet none of these field office officials could provide us with specific examples of disadvantages that negatively affected their operations or of what more they could have accomplished if centralized field costs had not existed. In the conference report for the Forest Service's fiscal year 1998 appropriations, the conferees expressed concern "about the appearance that expenditures for regional office operations and centralized field costs have risen significantly as a proportion of annual appropriated funds since 1993." As a result, in the appropriations act the Congress limited the Alaska Region's expenditures for regional office operations and centralized field costs to $17.5 million, an amount that cannot be exceeded without 60 days prior notice to the Congress. The preliminary budget allocation for fiscal year 1998 regional office operations and centralized field costs totaled about $26.5 million.
According to a regional budget official, the region is currently implementing the following measures to meet the congressional limitation: (1) eliminating all existing centralized field costs by allocating the funds directly to the field units whenever the office and amounts are known; (2) placing unallocated funds into a reserve account and distributing them as decisions are reached as to which office will receive the money; and (3) separating the costs associated with the State and Private Forestry organizational unit from the regional office's expenses. According to the regional budget official, the region eliminated centralized field costs and was able to reduce the planned regional office cost allocations to about $18.7 million as of March 4, 1998. Although this estimate exceeds the $17.5 million congressional limitation, according to an Alaska Region budget official, further adjustments will be made as the year progresses to ensure that regional office operating expenses do not exceed the amount allowed by the Congress. He also stated that centralized field costs will not be used in the future. The Alaska Region establishes reserves because of the uncertainty about the timing or the amount of funds needed for certain projects. Once the specific amount or responsible unit is determined, the region distributes the necessary reserves to the unit responsible for making the payment. In fiscal years 1995 through 1997, the Alaska Region distributed reserves ranging from $6 million to $12 million. The four field offices received from 87 to 98 percent of the reserves during this period, and the remainder went to the region for regional office operations. Any ending balance in the reserve category becomes the carryover amount for the next fiscal year. To determine whether reserves play a positive or negative role in effectively implementing programs, we spoke with officials of each of the four field offices. The officials agreed that establishing a reserve amount to facilitate the accounting for unknowns was an effective procedure and believed that the region's actions in this case generally led to less paperwork for the local units. In most cases, the field offices viewed reserves as a reasonable approach to addressing the uncertainties related to contracting, such as delays, cost increases, or the lack of appropriate bids. Thus, overall, the field office officials generally supported the process of establishing reserves and the manner in which the regional office approached the distribution of these funds. Beginning in fiscal year 1995, the Forest Service's Pacific Northwest Research Station scientists performed work in connection with the Tongass Land Management Plan. The work of the Research Station scientists was jointly funded: Part of the expenses was funded from the Alaska Region's portion of the National Forest System appropriation, which is normally used for forest planning activities, and another part was funded by the Research Station's portion of the Research appropriation, which is used for research activities. The work performed by the Research Station scientists dealt with (1) the revision of the Tongass Land Management Plan, including resource conservation assessments, resource analyses, workshops, and risk assessment panels, and (2) the post-plan priority research studies identified in the plan as important for further amendments or revisions to the plan.
Although we asked for documentation of the rationale for decisions about the funding split for the particular work performed by the research scientists, neither of these organizations could provide us with adequate explanations or documentation. According to the Forest Service’s records, for fiscal years 1995 through 1998 the work of the scientists will have cost about $4.7 million, of which $2.8 million was funded by the National Forest System appropriation and $1.9 million was funded by the Research appropriation. Our analysis of these data showed that the Research Station scientists used 60 percent of the funds for the revision of the plan and 40 percent for post-plan studies. According to an Intra-Agency Agreement, the Alaska Region and the Research Station plan to continue funding post-plan studies at about $1.35 million annually in future years with $900,000 and $450,000 from the National Forest System and Research appropriations, respectively. The Congress provided the National Forest System appropriation for the management, protection, improvement, and utilization of the National Forest System and for forest planning, inventory, and monitoring, all of which are non-research activities. We asked regional budget and fiscal officials to provide (1) justification for the charges to the National Forest System appropriation for the work of the Pacific Research Station scientists and (2) the criteria that they used to make this determination. These officials said that such a determination was not made and that they could not provide us with information on the types of tasks performed by the scientists with National Forest System funds. They also could not provide us with any criteria, such as agency guidance or procedures, that were available in 1995 to make such a determination. In effect, when the Research Station scientists requested National Forest System funds for work on the Tongass Land Management Plan, the Alaska Region provided the funds requested, but it did not determine if the activities funded were a proper charge to the appropriation. On March 4, 1998, the Alaska Region provided us with its final budget allocation for fiscal year 1998, and again we asked the budget officials for their justification for charges to the National Forest System appropriation for the work of the Pacific Northwest Research Station scientists, including the documentation required by the August 1997 revision to the Forest Service’s Service-Wide Appropriations Handbook. These officials said that such a justification was not made and that they had not complied with the documentation requirements of the Handbook. The Forest and Rangeland Research appropriation was provided by the Congress for the Forest Service’s research stations to conduct, support, and cooperate in investigations, experiments, tests, and other activities necessary to obtain, develop, and disseminate the scientific information required to protect and manage forests and rangelands, all of which are research activities. We asked the Pacific Northwest Research Station staff, including the Science Manager for the Tongass Land Management Plan team, to provide justification for the charges to the Research appropriation for the work of the Research Station scientists and the criteria used to make the determination. This official said that such a determination was not documented and that he could not provide us with documentation on the types of tasks performed using research funds. 
Also, the official could not provide us with any criteria to make such a determination. On March 4, 1998, the Research Station provided us with the estimated budget allocation for fiscal year 1998, and again we asked the Pacific Northwest Research Station's Science Manager for justification for the charges to the Research appropriation for the work of the Research Station scientists, including the documentation required by the August 1997 revision to the Forest Service's Service-Wide Appropriations Handbook. This official said that such a justification was not made and that the Research Station had not complied with the documentation requirements of the handbook, although it is in the process of developing a procedure to address the handbook's requirements. The Department of Agriculture's Office of Inspector General addressed a similar issue in its May 1995 report on the use of the National Forest System appropriation for research studies performed by the Forest Service's research stations. The report pointed out that the Forest Service's directives did not provide clear guidance for determining the type of reimbursable work that research stations could do for the Forest Service's other units. According to the Inspector General's report, this situation resulted in unauthorized augmentation of the Forest Service's Forest and Rangeland Research appropriation. The Inspector General recommended that the Forest Service supplement the direction in its manual that provides guidance on the type of reimbursable work that research stations may perform for the Forest Service's other units and establish procedures for reviewing the work that research stations perform for other units to ensure that it is in compliance with appropriations law and the direction in the manual. On August 28, 1997, the Forest Service issued an interim directive to its Service-Wide Appropriations Handbook that provides direction on jointly funded projects, including preparing financial plans and determining the appropriate funding allocations. However, as of the date of our report, neither the Alaska Region nor the Research Station has complied with the August 1997 directive. Furthermore, because of the lack of documentation or adequate explanations, we could not determine whether the National Forest System and the Research appropriations were used appropriately or inappropriately in fiscal years 1995 through 1998. This type of documentation is particularly important when projects, such as the revision of the Tongass Land Management Plan and post-plan studies, are jointly funded by two appropriations that were provided for specifically different purposes, because the tasks funded by each must be identified and charged to the correct appropriation. Clearly, the use of one appropriation to accomplish the purpose of another is improper. It is imperative that the Forest Service in general and the Alaska Region in particular have procedures in place to ensure that appropriations are made available only for their stated purposes and that controls are in place to ensure that the procedures are used throughout the Forest Service.
In our report, we recommended that the Chief of the Forest Service direct the Alaska Regional Forester and the Pacific Northwest Research Station Director to (1) fully comply with the Forest Service's August 28, 1997, direction on special Research funding situations, which requires the preparation of financial plans and documentation of the determination of the appropriate funding allocations, and (2) establish procedures to ensure compliance with appropriations law Forest Service-wide. To date, we have not received the Forest Service's statement of actions taken on our recommendations required by 31 U.S.C. 720. Mr. Chairman, this concludes our prepared statement. We will be pleased to respond to any questions that you or the Members of the Committee may have. | Pursuant to a congressional request, GAO discussed: (1) the National Forest Service's Alaska Region's allocation of funds for its operating costs for fiscal year (FY) 1993 through FY 1998; (2) the nature, purpose, and allocation of centralized field costs and the steps the Alaska Region is taking to comply with the congressional limitation on the expenditures for the regional office and centralized field costs; (3) the rationale for and the distribution of regional reserve funds; and (4) whether the Forest Service's National Forest System and Research appropriations were used appropriately to pay for work performed by the Pacific Northwest Research Station in connection with the revision of the Tongass Land Management Plan and for post-plan studies.
GAO noted that: (1) the Alaska Region's operating costs ranged from $108 million to $127 million annually during FY 1993 through FY 1997; (2) the region allocated 71 to 76 percent of these funds to field offices carrying out local programs, 13 to 17 percent for managing regional office operations, 4 to 7 percent for centralized field costs, 2 to 5 percent for regional reserves, and 2 to 4 percent for state and private forestry operations; (3) for FY 1998, the region's estimated allocations totalled about $106 million to carry out these regional programs; (4) until FY 1998, the Alaska Region used centralized field costs to manage certain programs or activities for the benefit of multiple offices; (5) the Forest Service's FY 1998 appropriations act limited the Alaska Regional Office's expenditures for regional office operations and centralized field costs to $17.5 million; (6) to comply with this legislative requirement, the Alaska Region eliminated the use of the centralized field cost category, included unallocated funds in regional reserve accounts until the funds are distributed to the field units, and separated the costs for state and private forestry operations from the operations of the regional office; (7) the Alaska Region establishes reserves because of the uncertainty about the timing or the amount of funds needed for certain projects; (8) once the specific amount or responsible unit is determined, the region distributes the necessary reserves to the unit responsible for making the payment; (9) any ending balance in the reserve category becomes the carryover amount for the next fiscal year; (10) beginning in FY 1995, both the Alaska Region's portion of the National Forest System appropriation and the Pacific Northwest Research Station's portion of the Forest and Rangeland Research appropriation funded the work performed by the Research Station scientists on the revision of the Tongass Land Management Plan and post-plan studies; (11) documentation of the rationale for decisions about the funding split for particular work performed by the research scientists could not be provided; and (12) GAO could not determine whether the National Forest System and Research appropriations were used appropriately or inappropriately for FY 1995 through FY 1998. |
For some time, we have been reporting that the U.S. financial regulatory system has relied on a fragmented and complex arrangement of federal and state regulators to oversee its institutions. This system—put into place over the last 150 years—has not kept pace with major developments in financial markets and products in recent decades. In particular, the current system was not designed to oversee today's large and interconnected financial institutions, whose activities pose new risks to the institutions themselves as well as to the broader financial system. This risk to the broader financial system, called systemic risk, refers to the possibility that a single event could broadly affect the entire financial system, causing widespread losses rather than just losses at one or a few institutions. Given these observations and concerns, we offered a framework for crafting and evaluating regulatory reform proposals that would have the characteristics that should be reflected in any new regulatory system. For example, we said that a regulatory system should minimize regulatory burden and promote accountability. We also designated reforming the financial regulatory system as a high-risk area in 2009. FSOC's three primary purposes under the Dodd-Frank Act are to (1) identify risks to the financial stability of the United States that could arise from the material financial distress or failure, or ongoing activities, of large, interconnected bank holding companies and nonbank financial companies, as well as risks that could arise outside the financial services marketplace; (2) promote market discipline by eliminating expectations on the part of shareholders, creditors, and counterparties of these large companies that the U.S. government will shield them from losses in the event of failure; and (3) respond to emerging threats to the stability of the U.S. financial system. To achieve these purposes, the Dodd-Frank Act gave FSOC a number of important authorities that allow it to collect information across the financial system so that regulators will be better prepared to address emerging threats; designate for supervision by the Federal Reserve those nonbank financial companies that pose risks to the financial system as defined by the act; designate as systemically important certain financial market utilities (FMU) and payment, clearing, or settlement activities, requiring them to meet prescribed risk management standards, and subjecting them to enhanced regulatory oversight; recommend stricter standards for the large, interconnected bank holding companies and nonbank financial companies designated for enhanced supervision; vote on determination by the Federal Reserve that action should be taken to break up institutions that pose a "grave threat" to U.S. financial stability; and facilitate information sharing and coordination among the member agencies to eliminate gaps in the regulatory structure. FSOC is chaired by the Secretary of the Treasury. As the chairperson of FSOC, the Secretary has certain powers and responsibilities related to FSOC's meetings, rulemakings, recommendations, and reports and testimony to Congress. The Secretary, in consultation with the other FSOC members, is also responsible for regular consultation with the financial regulatory entities and other appropriate organizations of foreign governments or international organizations. As shown in figure 1, the Dodd-Frank Act provides that FSOC consists of 10 voting members and 5 nonvoting members.
The 10 voting members provide a federal regulatory perspective and an independent insurance expert's view. The 5 nonvoting members offer different insights as state-level representatives from bank, securities, and insurance regulators or as the directors of some new offices within Treasury—OFR and the Federal Insurance Office—that were established by the Dodd-Frank Act. The Dodd-Frank Act requires that the council meet at least once a quarter. The Dodd-Frank Act established OFR to serve FSOC and its member agencies by improving the quality, transparency, and accessibility of financial data and information, conducting and sponsoring research related to financial stability, and promoting best practices in risk management. The act requires OFR to set up a data center and a research and analysis center to, among other things, collect and provide data to FSOC and member agencies; standardize the types and formats of data reported and collected; perform applied and essential long-term research; develop tools for risk measurement and monitoring; and make the results of its activities available to financial regulatory agencies. FSOC and OFR do not receive appropriated funds. During the 2-year period following the enactment of the Dodd-Frank Act, the Federal Reserve provided OFR funds to cover the expenses of the office. Moving forward, OFR will be funded through assessments levied on bank holding companies with total consolidated assets of $50 billion or more and nonbank financial companies designated by FSOC for supervision by the Federal Reserve. Until FSOC finalizes its designations for nonbank financial companies, assessments will be levied only against large bank holding companies. The collected assessments will be deposited into the Financial Research Fund, which was established within Treasury to fund the expenses of OFR. FSOC's expenses are considered expenses of OFR. The President's fiscal year 2013 budget included estimates of about $123 million for the Financial Research Fund for fiscal year 2012 and about $158 million for fiscal year 2013. Most of these funds are to support OFR, but the estimates include about $8 million for FSOC operations in fiscal year 2012 and nearly $9 million in fiscal year 2013. Most of OFR's funding is budgeted for contractual services—including reimbursable support from Treasury and administrative services from the Office of the Comptroller of the Currency (OCC) and the Bureau of Public Debt—employees, and equipment. Key FSOC missions—to identify risks to U.S. financial stability and respond to emerging threats to stability—are inherently challenging. Risks to the stability of the U.S. financial system are difficult to identify because key indicators, such as market prices, often do not reflect these risks. Further, such threats do not develop in precisely the same way in successive crises, making them harder to identify. As FSOC's chairperson acknowledged in FSOC's 2011 Annual Report, the most significant threats to the stability of the financial system will often be the ones that are hardest to diagnose and preempt. Moreover, financial innovations that are not well understood further complicate the challenge. For example, prior to the 2007-2009 financial crisis, some experts viewed the risks associated with falling housing prices as a regional phenomenon.
With the advent of mortgage-backed securities, these experts believed that the danger that falling house prices posed on the regional level had been mitigated, as they thought these securities had diversified and dispersed the risks. Although this dispersion of risk was expected to limit the impact of regional downturns, it helped to transmit the downturn in housing prices across the financial system and the nation. Experts have also noted that the task of effectively monitoring and mitigating systemic risk is both vast and procedurally complex. Additionally, actions to preemptively mitigate threats may appear unnecessary or too costly at the time they are proposed or taken. Although achieving FSOC's key missions is inherently challenging, failure to achieve them will continue to leave the financial system vulnerable to large or multiple shocks that could result in the large losses in asset values, higher unemployment, and slower economic growth associated with previous financial crises. Although the Dodd-Frank Act created FSOC to provide for a more comprehensive view of threats to U.S. financial stability, it left most of the fragmented and complex arrangement of independent federal and state regulators that existed prior to the Dodd-Frank Act in place and generally preserved their statutory responsibilities. As a result, FSOC's effectiveness hinges to a large extent on collaboration among its many members, almost all of whom come from state and federal agencies with their own specific statutory missions. In testifying before the U.S. House Financial Services Committee in October 2011 on the coordination of Dodd-Frank rulemakings assigned to specific FSOC members, the chairperson of FSOC recognized this challenge. He noted that the coordination challenge in the rulemaking process was hard because the Dodd-Frank Act left in place a financial system with a complicated set of independent agencies with overlapping jurisdictions and different responsibilities. However, the Chairperson also noted that certain agencies were working much more closely together than they did before the creation of FSOC. In our prior work, the federal financial regulators also emphasized the importance of maintaining their independence while serving as members of FSOC. For example, several FSOC member agencies noted in our prior work on Dodd-Frank rulemakings that any effort to coordinate rulemakings assigned to specific agencies through FSOC would need to be balanced against the statutory requirements of the independent agencies involved. In addition, the Chairperson has similarly noted that he does not have the authority to force agencies to coordinate, and neither he nor FSOC as a whole can force agencies to adopt compatible policies and procedures. FSOC members' staffs and staff at member agencies also noted that differences in policies and procedures are designed to address the differences in the entities they regulate. Regulators have also pointed to their differing statutory requirements to explain why they have differing views on policy issues. During the Basel II deliberations, for instance, U.S. bank regulators—the Federal Deposit Insurance Corporation (FDIC), Federal Reserve, and OCC—each had a different view of various aspects of those requirements. The regulators traced their differences back to their specific statutory responsibilities.
Furthermore, although the United Kingdom (UK) and the European Union (EU) have established or are in the process of establishing councils to oversee systemic risk, in the UK and the EU the central bank has more members or more votes than other entities on these councils. In contrast, in the United States, the central bank—the Federal Reserve—has one member on FSOC and one vote among the 10 voting members. FSOC policy staff and staff at member agencies noted that the diverse perspectives of FSOC members enrich FSOC deliberations. OFR also faces the challenge of trying to build a world-class research organization from the ground up while meeting shorter term goals and responsibilities. Recognizing these difficulties, the Dodd-Frank Act required that OFR submit annual human resource planning reports to Congress that cover the new entity’s plans for recruitment and retention, training and workforce development, and workforce flexibility. The September 2011 plan stated that a key feature of the recruitment message was to highlight OFR’s ability to engage top academic and industry professionals through several unique opportunities. These included the ability to work in innovative research networks and with unique data sets, as well as the historic opportunity to be involved from the beginning in a new institution with broad, challenging goals. OFR recognizes the challenge of attracting and retaining highly trained staff, who often have other employment alternatives. When asked about challenges they face, OFR officials noted that one challenge to starting a research organization that is an unknown entity derives, in part, from some prospective employees wanting to see which other researchers are in place before agreeing to an employment offer. OFR officials told us that the organization is making steady progress toward reaching a point at which it will have an established core of staff and greater name recognition that will lessen this challenge. Those researchers who supported the creation of OFR have suggested that it will take many years for the new entity to provide the insights that will ultimately be expected of it. These researchers have also noted that the absence of a director for the organization has slowed this process. At the same time that OFR faces the long-term challenges of building a world-class research organization, it also faces the challenge of balancing this longer- term goal with the need to meet shorter-term goals such as providing ongoing support to FSOC and standardizing the types and format of data collected and reported by the financial regulatory agencies. FSOC and OFR have taken steps toward meeting the challenges they face, including setting up their management structures, communicating their mission and goals, and hiring staff. However, both entities could enhance their accountability mechanisms and level of transparency. FSOC and OFR have also taken steps to build mechanisms to identify potential threats to financial stability, but additional actions would strengthen this key mission of both entities. Additionally, while FSOC and OFR have developed web pages on Treasury’s website and taken other steps to provide information to the public, these efforts have limitations and do not always fully inform Congress or the public about their activities and progress. Without taking additional steps to improve accountability and transparency, FSOC and OFR are missing the opportunity to demonstrate their progress in carrying out their missions. 
As we have reported in the past, agencies can manage or mitigate many of the challenges of setting up new organizations by developing strong management structures and control mechanisms. The literature on control mechanisms and government performance suggests that certain mechanisms, such as setting out goals and linking staffing, activities, and budgets to them, are key even when new agencies are being formed.Such control mechanisms provide management, staff, stakeholders, and the public with a good understanding of the organization’s mission and goals, the steps it intends to take to carry out those goals, and an ongoing level of accountability. Agencies also need to establish measures to gauge their performance so that they can change strategies that are deficient in a timely manner. Organizations must also maintain an appropriate level of transparency. Because certain agencies rely on confidential information, such as that obtained during regulatory supervision, an appropriate level of transparency recognizes the need to maintain confidentiality and information security. In addition, agencies must balance the need for transparency with the need for those involved in deliberations to be able to express their views. FSOC has begun setting up its management structures. It has established a dedicated policy office within Treasury’s Office of Domestic Finance, led by a Deputy Assistant Secretary, which functions as the FSOC Secretariat. Among other duties, the policy office works with staff of other FSOC members to support FSOC in its day-to-day operations by helping to draft rules, studies, and reports and prepare and circulate relevant materials to agency members prior to council meetings. The office also serves as a mechanism to bring issues to the council quickly. As of June 2012, there were 25 staff members in the FSOC policy office. FSOC has established seven standing committees generally composed of staff of its members and member agencies to carry out the business of the council including developing the information the members need to make decisions effectively. The Deputies Committee, which meets every 2 weeks and consists of senior officials designated by members, is responsible for coordinating and overseeing the work of the staff committees. The deputies may resolve issues that arise in the other committees and determine the information that needs to be passed on to the FSOC members for discussion. Some other committees include the Systemic Risk Committee that analyzes emerging threats to financial stability, designations committees that support FSOC in evaluating FMUs and nonbank financial companies for certain additional oversight, and the Data Committee that supports OFR’s data collection efforts. FSOC policy staff stated that all members and member agencies were invited to have staff participate on any committee and, in some cases, FSOC members attend committee meetings as well. They also noted that ad hoc staff groups were formed periodically to work on issues that might not fit within the purview of a standing committee. For example, an ad hoc group helped draft FSOC’s 2011 Annual Report, and an ongoing legal working group holds conference calls as needed to address legal issues. OFR has also taken steps to set up needed management structures. As shown in figure 2, OFR has developed an organizational structure that is built around a Data Center and Research and Analysis Center—the two programmatic units established by the Dodd-Frank Act. 
OFR has adopted certain hiring policies required under the Dodd-Frank Act, including special salary schedules that are higher than the General Schedule, and used Treasury's existing authority from the Office of Personnel Management for Schedule A excepted hiring. In testimony delivered to the House Financial Services Subcommittee on Oversight and Investigations in April 2012, the Chief Operating Officer described plans to build up to a staffing level of 275 to 300 staff in the next 2 to 3 years. OFR officials noted that they had relied on a variety of tools to solicit applicants, including letters to academic institutions. OFR's recruitment message has highlighted the opportunity to work on unique data sets and the historic opportunity to build a new institution that would promote financial stability. As of August 15, 2012, OFR had 112 employees. About three-quarters of these employees were direct hires (including 22 reimbursable staff from other Treasury departments), with the other quarter a combination of external detailees and student interns. Although this level is below the target employment level in OFR's budget, it represents marked progress from the second quarter of 2011, when OFR had seven employees and relied mostly on nonpermanent staff. As it is for any agency, having effective leadership is critical to hiring qualified staff and providing effective governance. OFR has filled five of its eight top leadership positions, but two of the most important positions are not permanently filled: the OFR director and the deputy director of the Research and Analysis Center. A former Treasury official with knowledge of the search process for the director position said that it was difficult to attract a qualified candidate to head the agency for a 6-year term. After 17 months, the President put forth a nominee to head OFR in December 2011 who had been serving as the Counselor to the Secretary since April 2011 and continues to serve in that position. As of July 2012, the nominee is awaiting full Senate confirmation. Since OFR has also not filled the deputy director position at the Research and Analysis Center, the Chief of Analytical Strategy has assumed responsibility for standing up the Research and Analysis Center, including overseeing the hiring process, determining data needs, and defining the center's objectives and strategy. In June, OFR filled the Data Center Deputy Director position by promoting the Chief Business Officer to this position. The new Data Center Deputy Director will continue to serve as the Acting Chief Business Officer until this position is filled. In addition to these vacancies, a number of lower-level management vacancies remain, including positions at the Research and Analysis Center. For example, none of the assistant director positions under the Deputy Director of Research and Analysis have been filled. In addition, two of five assistant director positions under the Chief Technology Officer are open. However, OFR is not actively looking to fill one of the assistant director positions until the office reaches a mature state and has the need for this additional position.
FSOC's bylaws describe the duties of the Chairperson, members of the council, and staff; provide the governance structure for council meetings; and describe some policies for confidentiality and access to information, among other things. The bylaws also allow the Chairperson to appoint, with council approval, an Executive Director and Legal Counsel and to delegate some responsibilities to the Executive Director. As of July 2012, no one had been appointed to these positions. Instead, Treasury staff perform duties associated with these positions, such as providing ethics information to FSOC members. The transparency policy commits FSOC to holding at least two open meetings per year but also establishes reasons why other meetings might be closed. For example, meetings may be closed during discussions of supervisory or other market-sensitive information, or if an open meeting would result in the disclosure of information contained in investigation, examination, operating, or condition reports or would necessarily and significantly compromise the mission or purposes of FSOC. The policy also states that FSOC will release minutes to the public after the meetings. FSOC's framework for consultation applies to regulations or actions required by the Dodd-Frank Act that must be completed in consultation with FSOC. For example, under the act, the Securities and Exchange Commission (SEC) must consult with FSOC in determining what information is to be collected from certain investment advisers to private funds relating to the assessment of systemic risk. The framework provides a timeline for holding initial meetings, circulating and commenting on staff recommendations, and briefing key policy staff of interested FSOC members on those recommendations. FSOC members also signed an MOU to help ensure confidentiality of nonpublic information. The MOU requires that FSOC members not share this information with anyone outside their member agencies or otherwise specified support staff. Treasury officials and FSOC staff said that the MOU is necessary because it enables council members to share nonpublic information within the council and provides assurance to covered staff that they will not incur penalties for sharing information consistent with the MOU's terms. They noted, for example, that sharing certain supervisory information without such an agreement could carry severe penalties. In June 2012, the Council of Inspectors General on Financial Oversight (CIGFO) released a report on FSOC's controls over nonpublic information. CIGFO found that to date FSOC had shared limited nonpublic information, but this situation will change as OFR builds its capacity. CIGFO also found differences in the way FSOC member agencies marked and handled nonpublic information and noted that not addressing these differences could pose risks to the senders and receivers of such information. The FSOC Data Committee has undertaken a project to address these issues. OFR has adopted policies and procedures for its operations, including those for data security, human resources, budget execution, and procurement. Because data operations are an important feature of OFR's operations, OFR officials said that they had spent significant time on data security architecture, looking at issues of confidentiality, user access, and cyber threats such as hacking. OFR has adopted Treasury procedures for ensuring data security and is expanding its security controls as necessary for OFR-specific systems and data, as well as for information sharing across FSOC member agencies.
To further ensure confidentiality, OFR has also adopted postemployment restrictions, as required by the Dodd-Frank Act, stating that OFR employees who have had access to certain confidential information generally may not be employed by or provide advice or consulting services to financial companies for one year after leaving the office. In addition, FSOC has some planning under way and OFR has taken some actions and planned others that are consistent with legal requirements or leading practices for new organizations relative to strategic planning and performance management. In our prior work, we have identified three key steps for successful results-oriented organizations—(1) defining clear missions and desired outcomes; (2) measuring performance to gauge progress; and (3) using performance as a basis for decision making. These practices are consistent with the Government Performance and Results Act of 1993, as amended (GPRA), which requires agencies to periodically produce strategic plans, annual performance plans, and performance updates. FSOC, which is subject to GPRA, is in the early planning stages of how to satisfy its requirements and may, given its relatively small monetary outlays, request an exemption from certain GPRA requirements from the Office of Management and Budget. In the interim, Treasury's strategic plan for fiscal years 2012-2015 describes FSOC and the Treasury Secretary's role as FSOC chairperson, but it does not include information on FSOC's goals or how it will measure FSOC's progress in achieving them. OFR, which is not independently subject to GPRA, also received limited discussion in Treasury's 2012-2015 strategic plan. Specifically, the plan notes only that Treasury's Office of Domestic Finance supports OFR and that OFR was created by the Dodd-Frank Act. Similar to other entities within the Treasury such as the Bureau of the Public Debt and the Internal Revenue Service, and consistent with leading practices for new organizations, OFR is undertaking an independent strategic planning and performance management effort. OFR issued a strategic framework in March 2012 to cover fiscal years 2012-2014. In the strategic framework OFR lists five strategic goals, including supporting FSOC through the secure provision of high-quality financial data and by conducting the analyses needed to monitor threats to financial stability; developing and promoting data-related standards and best practices; and providing the public with key data and analyses while protecting sensitive information. The framework also highlights a number of objectives under those goals and lays out implementation priorities for the first year covered by the document, fiscal year 2012. The framework also notes the importance of transparency and that OFR is subject to oversight from the Treasury Office of the Inspector General (OIG) and GAO, which have both exercised that authority during OFR's first two years, and that the Dodd-Frank Act requires that the OFR Director testify before Congress annually on OFR's activities. However, OFR acknowledges within its framework document that it does not yet have certain other key elements of performance management in place, including linking programmatic, human resources, and budgetary decision making to its strategic goals and developing a performance measurement system. The framework identifies establishing these elements of a performance management system among its fiscal year 2012 priorities.
OFR officials told us that they have begun to link the office's budget and human resources to strategic goals and that the human resources plan to be submitted to Congress in September 2012 and the fiscal year 2014 budget submission to be issued in 2013 will reflect these linkages. They added that at the time they issued the framework, they were not in a position to include performance measures, as the agency was not sufficiently established. However, they plan to include performance measures in their fiscal year 2014 budget submission. In June 2012, the Treasury OIG issued a report on the progress OFR had made in developing an implementation plan that lays out how it will stand up all of its operations and also noted the need to develop performance measures. Without such performance measures, neither the agency nor the public can determine whether OFR's expenditures and activities are most effectively aimed at accomplishing its mission. The Dodd-Frank Act requires individual members to submit a signed statement to Congress to accompany many FSOC reports saying whether they believe that FSOC, the government, and the private sector are taking all reasonable steps to ensure financial stability and mitigate systemic risk that would negatively affect the economy; if they do not believe this, the statement must indicate what actions the member believes should be taken (12 U.S.C. § 5322(b)). However, the recommendations in FSOC's annual reports do not always clearly designate responsible parties, and in some cases no time frames are specified. More specifically, in the 2011 Annual Report some recommendations identified relevant parties only as "market participants" or "regulators" but did not consistently identify the targets of the recommendation or designate parties responsible for monitoring or implementing them. Another recommendation only discusses Dodd-Frank Act reforms, while several others express support for certain aspects of international coordination on financial reforms. In the 2012 Annual Report, FSOC adds some specificity to these recommendations, such as recommending an expeditious implementation of the Dodd-Frank Act. In the 2012 Annual Report, FSOC also more clearly identifies recommendations, starting each one with "the council recommends," but it still does not consistently designate an FSOC member or members to monitor or implement the recommendations, nor does it establish time frames for certain actions such as reporting to the council on the status of the recommendation. Treasury officials noted that the Dodd-Frank Act did not give the chairperson or council authority to require that independent regulators take action or impose time frames on them. However, they noted that some recommendations in the 2012 Annual Report were made to specific agencies and put greater stress on more immediate action than others. For example, the report emphasized the importance of a recommendation to SEC to take action to address money market fund risks by saying that wholesale short-term funding markets are a critical component of a well-functioning financial system, and FSOC continues to be focused on structural vulnerabilities in money market funds that could disrupt these markets. Enhancing FSOC's accountability could lead to more effective oversight and public confidence in financial institutions and markets. In addition, while FSOC releases minutes from its meetings, as required by its bylaws, it does not keep detailed records of deliberations or discussions that take place at these meetings or at the committee level.
While no specific level of detail is required for FSOC minutes, the limited documentation of their discussions makes it difficult to assess FSOC's performance. Another deliberative body, the Federal Reserve's Federal Open Market Committee, keeps transcripts of its meetings and voluntarily releases these transcripts to the public after 5 years. Releasing the transcripts after a period of time should allow the members of the committee to talk freely and provides documentation that can be used to assess the entity's performance and monitor its decision-making process. FSOC has taken steps to meet its statutory responsibilities related to identifying risks and potential emerging threats to U.S. financial stability, but has not yet developed comprehensive and systematic mechanisms to realize these goals. These steps include setting up the Systemic Risk Committee, which is responsible for systemic risk monitoring and plays a key role in reviewing sources of systemic risk. Potential threats to financial stability are also discussed at FSOC meetings; for example, FSOC officials noted that a teleconference was convened to discuss MF Global. The Systemic Risk Committee generally meets every 2 weeks and is co-chaired by the Commodity Futures Trading Commission (CFTC), FDIC, the Federal Reserve, and SEC. The committee is operating under draft procedures in which member agency staff suggest risks or threats that, in their view, may benefit from interagency coordination. In December 2011, FSOC members' staff provided 40 suggestions, which FSOC policy staff grouped into categories for discussions at the committee's monthly meetings. According to FSOC policy staff, if there is agreement that an issue would warrant further examination, an agency is assigned to develop the issue, including identifying vulnerabilities in the financial system. When the committee determines the issue is sufficiently developed, it presents the issue to the Deputies Committee. Sending some issues to the Deputies Committee sooner than others does not imply that the committee attaches greater importance to the issue but only that enough analysis has been completed to allow it to move forward. According to the draft procedures, if issues are elevated beyond the Deputies Committee to FSOC members, agencies may respond with a variety of actions, including enhanced monitoring, additional analysis, the development of potential policy responses, or the implementation of a particular policy response. OFR participates in the Systemic Risk Committee and is building capacity to monitor the financial system for threats to financial stability. OFR has developed the Financial Stability Monitor, a collection of metrics and indicators related to financial stability that is to be continuously updated, according to OFR and Treasury officials. According to these officials, OFR began sharing the Financial Stability Monitor with the Systemic Risk Committee and FSOC member agency staff in February 2012. OFR is assessing options for analyzing risks to financial stability and produced a working paper in collaboration with outside researchers, published in January 2012, to survey existing approaches. In addition, OFR and FSOC sponsored a conference in December 2011 to discuss data and technology issues and analytical approaches for assessing threats to financial stability. Such a data-sharing exercise is akin to what the International Monetary Fund proposes with its Financial Soundness Indicators.
Systematically sharing such indicators could reveal patterns occurring across the financial system. OFR, through a mechanism such as the Financial Stability Monitor, could play a role in collecting, analyzing, and reporting on these indicators. A senior OFR official told us that this was the ultimate intent of the Financial Stability Monitor. A sample of the Financial Stability Monitor that we reviewed included, among other topics, some indicators of leverage and liquidity that were based on data from the federal banking agencies and purchased databases. Many analytical tools have been developed by researchers or are in use by international bodies to assess the risk of a financial crisis or identify vulnerabilities in the financial system. Some of these tools—such as early warning models—can be useful to assess the overall level of risk in the financial system, while others, such as system-wide stress testing, could be helpful in identifying new vulnerabilities and interconnections (see fig. 3). In general, these tools are methods of integrating large volumes of financial information to generate specific insights about financial stability. (A simplified illustration of such an indicator-based approach appears below.) Experts we spoke with were generally supportive of developing and using such tools for monitoring risks to financial stability and also emphasized the importance of using multiple tools. These tools, which all have useful features as well as shortcomings, may complement each other, and exploring a variety of tools will provide insight into which ones will be the most effective. According to FSOC policy staff, FSOC has not formally considered whether to develop early warning models or conduct system-wide stress tests. OFR staff said they are evaluating a range of metrics and methods that had been proposed for measuring and analyzing financial markets and systems and are in the early stages of developing network maps and other tools to assess financial stability. OFR evaluated 11 measures against a series of crises over time and reported on some of these efforts in its 2012 Annual Report. In addition, OFR has a statutory responsibility to report on stress testing, and OFR officials told us that they interpreted that responsibility as contributing to the development and evaluation of quantitative tools that are used in stress tests, improving the data used in stress tests, and helping to advance the state-of-the-art in stress test methodologies. As such, OFR’s survey of systemic risk approaches described several stress testing models, and the OFR Annual Report noted that methodologies will need to advance to expose vulnerabilities in the financial system as a whole. OFR would support FSOC in evaluating system-wide stress tests, according to OFR officials. Although FSOC and OFR have adopted communication methods to provide information to the public and Congress on their activities, some of their methods could be strengthened. For example, both entities have web pages on Treasury’s website. FSOC’s web pages include minutes of the council’s meetings, annual reports, frequently asked questions, and information on FSOC rulemakings. OFR has also posted key documents on its web pages, including its annual report, strategic framework, and updates on recent developments, such as the status of the legal entity identifier. OFR’s first annual report discusses its activities and agenda for the next year, and its approach to researching financial stability as well as current threats.
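To make concrete, in very simplified form, how an indicator-based early warning tool of the kind discussed above might integrate multiple metrics into a single signal, the following sketch standardizes a few hypothetical indicators against their own histories and averages them into a composite score. The indicator names, data, and alert threshold are invented for illustration; they are not drawn from OFR’s Financial Stability Monitor or from any actual FSOC or OFR methodology.

```python
# Illustrative sketch only: aggregating hypothetical financial stability
# indicators into a simple composite "early warning" score. The indicator
# names, values, and the 1.5 standard deviation alert threshold are
# invented and do not reflect OFR's or FSOC's actual tools.
from statistics import mean, stdev

# Hypothetical quarterly history for each indicator (higher = more risk).
history = {
    "bank_leverage_ratio":      [11.2, 11.5, 11.9, 12.4, 12.8, 13.5],
    "short_term_funding_share": [0.22, 0.23, 0.25, 0.27, 0.30, 0.34],
    "household_debt_to_income": [1.10, 1.12, 1.15, 1.19, 1.24, 1.30],
}

def z_score(series):
    """Standardize the latest observation against the series' own history."""
    mu, sigma = mean(series), stdev(series)
    return (series[-1] - mu) / sigma if sigma else 0.0

# Composite score: the average of the standardized indicators.
scores = {name: z_score(series) for name, series in history.items()}
composite = mean(scores.values())

ALERT_THRESHOLD = 1.5  # hypothetical cutoff for flagging elevated risk
for name, score in scores.items():
    print(f"{name:28s} z-score = {score:+.2f}")
print(f"composite score = {composite:+.2f} "
      f"({'elevated' if composite > ALERT_THRESHOLD else 'not elevated'})")
```

In practice, such a tool would draw on far richer data and, as the experts we spoke with emphasized, would be only one of several complementary tools rather than a single definitive measure.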
The annual report also covers other topics including data gaps in the areas of leverage, liquidity, and interconnectedness as well as the benefits of data standards. Treasury officials also provided us with examples of emails they have sent to congressional committees on key FSOC or OFR products or actions, such as the designation of nonbank financial companies, and described the wide range of correspondence they respond to on congressional inquiries involving both entities. FSOC releases the minutes of its meetings. However, the minutes describe general agenda items for the meetings and information on the presenters for each agenda item and lack additional detail even when the information being discussed is not likely to be market sensitive or limit the quality of deliberations. For example, the meeting minutes for October 11, 2011, show that several presentations were given during an executive session, including one on money market reform updates. The minutes provide the names of those who gave the money market presentation and note that updates were provided on actions taken since the last presentation on the topic. However, the minutes lack any content of the actual presentation or discussion. Specifically, the minutes say the following. “Money Market Fund Reform Update “The Chairperson then turned to the agenda item regarding an update on money market fund reform and called on Robert Plaze, Associate Director, Division of Investment Management, SEC, Matthew Eichner, Associate Director, Division of Research and Statistics, Federal Reserve, Matthew Rutherford, Deputy Assistant Secretary for Federal Finance, Treasury, to make the presentation. The individuals listed above provided the presentation which included a review of the actions taken since the last presentation regarding money market fund reform to the Council, the reform options under consideration, and next steps. The members of the Council asked questions about the presentation.” In addition, in our review of meeting minutes for meetings held from October 2010 through December 2011, we found that minutes from more recent meetings generally have less detail than those from earlier ones. As a result, the public receives little information about FSOC’s activities and deliberations, which limits the public’s understanding of its activities. More recently, however, FSOC provided additional transparency on a key decision—its 2012 Annual Report provides extensive information on the designation of FMUs as systemically important, including the names of the designated entities and a discussion of the reasons for their designation. FSOC policy officials acknowledged that the issue of transparency is challenging. They recognized the need for transparency but also noted that certain information is protected from disclosure under various statutes and cannot be released. FSOC staff also noted the need to balance the desire for transparency with the need to provide an environment that allows for open discussion and deliberation of issues and policy options. As we have previously reported, transparency is a key feature of accountability even when there is a need to safeguard certain sensitive information to protect the marketplace. In addition, the Freedom of Information Act (FOIA) recognizes that deliberative processes also need to be safeguarded so that decision makers can have meaningful discussions and certain information that FSOC considers, such as that collected by bank supervisors, is prohibited from public disclosure. 
However, similar bodies, such as the Federal Open Market Committee and the interim Financial Policy Committee in the UK—an entity that has a similar role to FSOC’s—publish minutes that provide greater detail. Although the Federal Open Market Committee makes certain announcements on the day it meets, after a 3-week delay, it publishes its more detailed meeting minutes, which usually include a detailed discussion of developments in financial markets and the economy, committee member views, and an explanation of committee policy actions. In addition, as noted earlier in this report, 5 years after its meetings, the Federal Open Market Committee voluntarily releases transcriptions of those meetings. The minutes of the interim Financial Policy Committee in the UK provide information on what the committee discussed and an update on the health of the economy, including threats to financial stability. Additionally, neither FSOC nor OFR has taken full advantage of modern communication tools to communicate information about their activities or progress. While using a search engine, such as Google, identifies web pages for both entities on Treasury’s website, the pages are not easy to locate from Treasury’s homepage, nor are the sites user friendly. For example, the FSOC and OFR web pages are in a section of Treasury’s website called Initiatives and are further embedded under a section titled Wall Street Reform. While FSOC does post the dates and times of its meetings on its web page, this information is in a link at the bottom of the page rather than being prominently displayed. In addition, FSOC does not have an online service that regularly alerts interested parties to changes to its web pages or upcoming meetings. Further, one member of FSOC noted that FSOC relies on emails to members and staff rather than having a portal where members can access needed information more easily and securely. OFR’s web pages have been evolving over time, especially during July and August of 2012, but OFR could further improve its website. For example, we had noted in June 2012 that OFR did not consistently display agency testimonies in the same place; in August 2012, OFR rectified these omissions. Similarly, in March 2012, OFR told us that the Treasury daily blog provided information about OFR activities, but timely notices relevant to OFR have not regularly appeared there. For example, many of the recent developments related to global approval of the legal entity identifier, which OFR cites as a major endeavor, have not appeared in a timely manner. In July 2012, OFR added an online service to its own web pages to inform those who register that updates have been made to the site, but there was a delay in having the feature work consistently. However, OFR has not posted some information that would show the progress the agency is making in standing up its operations, such as its organization chart including the names of its top managers. In comparison, the Consumer Financial Protection Bureau, which was also created by the Dodd-Frank Act, has for some time had its own domain name, an easily identifiable website that includes an organization chart, and online services that provide regular updates to interested parties. Some industry representatives, academics, and former government officials have questioned the progress that the new entities have made.
Some industry representatives with whom we spoke said that they did not believe that FSOC and OFR had met their expectations for streamlining regulatory requirements (e.g., responding to data requests), improving coordination on new regulations, or providing new information on systemic risks. Some members of FSOC and their staffs said that they learned a great deal from working on and reading the 2011 Annual Report; however, some industry representatives with whom we spoke said that they did not find that report useful. Among other concerns, industry representatives told us that the report did not contain any new information on systemic risks or the status of Dodd-Frank act reforms. In addition, a group of former government officials, academics, and industry representatives convened the Systemic Risk Council in June 2012 to address concerns that they said stemmed, in part, from the lack of progress made by the members of FSOC and OFR. They said that their concern increased each day that the implementation of systemic reform languished. The Systemic Risk Council also noted that it was essential for FSOC to provide clear and transparent explanations of regulatory reforms in a way that the general public could understand. Communicating more effectively with groups critical to their missions and the public could improve FSOC’s and OFR’s ability to effectively and efficiently achieve their missions. The Dodd-Frank Act recognizes the importance of collaboration and requires FSOC and OFR to collaborate on various activities. Effectively building mechanisms to identify risks and potential emerging threats to U.S. financial stability will also require FSOC and OFR to collaborate with a number of internal and external stakeholders. To date, FSOC and OFR have taken steps to promote collaboration; however, they could enhance collaboration by more fully incorporating some key elements of effective collaboration. Taking full advantage of opportunities to work with stakeholders could strengthen FSOC’s and OFR’s ability to carry out their missions. For example, in testifying about the need to coordinate agency rulemakings, FSOC’s Chairperson noted the importance of coordinating both domestically and internationally to prevent risks from migrating to regulatory gaps—as they did before the 2007-2009 financial crisis—and to reduce U.S. vulnerability to another financial crisis. In addition, effective collaboration could eliminate unnecessary duplication for both the industry and regulators. Recognizing the importance of collaboration to FSOC’s and OFR’s missions, the Dodd-Frank Act assigns specific collaboration duties and responsibilities to these new entities. Title I of the act directs FSOC to facilitate information sharing and coordination among its member agencies and other federal and state agencies regarding domestic financial services policy development, rulemaking, examinations, reporting requirements, and enforcement actions. In addition, FSOC must consult with the primary financial regulatory agency, if any, before designating a nonbank financial company for supervision by the Federal Reserve. The Dodd-Frank Act also encourages collaboration between FSOC and external stakeholders, especially state regulators and international entities. For example, it permits FSOC to appoint technical and professional advisory committees which could include industry representatives and academics as well as state regulators that may be useful in carrying out the council’s functions. 
The act eases the creation of committees by generally exempting them—and FSOC—from the Federal Advisory Committee Act (FACA), which requires agencies to adhere to a formalized process to ensure that committees are objective and accessible to the public. The act also directs the FSOC chairperson, in consultation with FSOC members, to regularly consult with financial regulatory entities and other appropriate organizations of foreign governments or international organizations on matters relating to systemic risk to the international financial system. Further, when designating foreign nonbank financial companies for supervision by the Federal Reserve, FSOC must consult with appropriate foreign regulatory authorities, to the extent appropriate. The Dodd-Frank Act also specifies a number of duties for OFR that require collaboration with FSOC members and others. In particular, OFR must collect data on behalf of FSOC, provide the data to FSOC and member agencies, and standardize data collection among the agencies. These activities require collaboration not only with FSOC member agencies but also with commercial data providers, publicly available data sources, and the financial industry. In addition, like FSOC, OFR can appoint technical and professional advisory committees to help leverage necessary resources, but these are not exempt from FACA. The Dodd-Frank Act provides that member agencies, in consultation with OFR, must implement regulations promulgated by OFR to standardize the types and formats of data reported and collected on behalf of FSOC. However, it also explicitly notes that this provision does not supersede or interfere with the independent authority of a member agency under other law to collect data in such format as the agency requires. FSOC members’ staffs told us that they had developed good working relationships with staff from other agencies. Prior to FSOC, two means of collaborating were the President’s Working Group on Financial Markets and the Federal Financial Institutions Examination Council. Members’ staffs noted that communication within FSOC had been broader and deeper than in either of those forums because staff from more agencies participate in FSOC at various levels. For example, staff said that various FSOC committees and working groups have allowed staff to develop contacts at other agencies with whom they can consult and share information on a variety of topics. In addition, staff said that through FSOC they had become acquainted with others having different expertise and have had the opportunity to share their views and experience with others. For example, the independent insurance member and his staff noted that they had used FSOC as a forum to provide information on insurance companies’ use of money market funds, which may differ from the more common retail fund mechanisms. Through the committee structure, FSOC members’ staffs also noted that agencies had leveraged their joint expertise and resources to carry out FSOC’s statutory responsibilities, including rulemakings. For example, an ad hoc interagency lawyers group was formed shortly after the passage of the Dodd-Frank Act to provide regular input into the rulemaking process. The standing committees also provide input on rulemakings dealing with issues within their areas of expertise. In addition, throughout the rulemaking processes the Deputies Committee was briefed regularly, especially on issues that could not be resolved in other committees or working groups. The deputies kept their respective FSOC members informed throughout the rulemakings.
We discussed with Treasury staff the FSOC chairperson’s consultations with financial regulatory entities and other appropriate organizations of foreign governments or international organizations on matters relating to systemic risk to the international financial system. Treasury staff noted that the FSOC chairperson, who is also the Secretary of the Treasury, has regular contact with foreign officials and shares information from these interactions with other U.S. regulators at FSOC meetings. They said that they have monitored these activities and believe that FSOC is complying with the Dodd-Frank requirement. Further, OFR has taken some actions to collaborate by leveraging the expertise of external stakeholders and coordinating U.S. activities internationally. In particular, FSOC and OFR held a joint conference in December 2011 to discuss data and technology issues and analytical approaches for assessing threats to financial stability. The conference included attendees from the financial regulatory community, academia, public interest groups, and the financial services industry. OFR has also initiated a working paper series in which OFR researchers have collaborated with outside academics to catalog systemic risk monitoring systems and ways to improve risk management at financial institutions. In addition, OFR has invited experts on various aspects of financial stability to give seminars to OFR and FSOC policy staff. OFR has also announced plans to create the Financial Research Advisory Committee to solicit advice, recommendations, analysis, and information from academics, researchers, industry leaders, government officials, and experts in the fields of data and technology. Applications were due in April 2012, and in August 2012 OFR officials said that the list of applicants was in the final stage of review. OFR officials also noted that they play a key role on FSOC’s Data Committee, which supports coordination of and consultation on issues related to FSOC data collection and sharing. In addition, OFR is working to standardize data reporting systems among FSOC member agencies. OFR officials noted that the agency had begun a three-stage process to assemble an inventory of data collected by FSOC member agencies as a first step toward standardizing data, reducing duplication, and eventually lowering costs for industry and regulators. The three stages examine data (1) purchased by the agencies, (2) collected through regulatory activities, and (3) derived by the agencies from data they purchased or collected. An OFR official said that the first phase was complete but had taken longer than initially envisioned because of the complexities of the project including agencies’ use of different terminology for the same databases. For instance, the official noted that it had been difficult to create an effective survey instrument to capture the data purchased by the agencies, because the survey instrument had to capture the different terminology used by the various agencies. OFR officials said that they expected the process to allow them to determine when multiple agencies used the same data, identify data gaps more effectively, and seek potential savings in data acquisition. For example, they have been able to negotiate contracts that provide the small office of the independent insurance member of FSOC with access to expensive private databases. 
OFR officials said that OFR is also working with FSOC member agencies through FSOC’s Data Committee to address differences in existing security classification systems and support efficient, secure data-sharing efforts given the statutory responsibilities agencies have to ensure the confidentiality of certain data. Many industry representatives with whom we spoke said that this project could help relieve regulatory burdens by standardizing data-reporting systems and reducing duplication, noting that currently multiple agencies ask for the same data but in different formats or at different times. Moreover, OFR has collaborated with industry, foreign government entities, and international bodies to create a legal entity identifier, which OFR describes as an emerging global standard that will enable regulators and companies around the world to quickly and accurately identify parties to financial transactions. Building on earlier industry and interagency efforts and on CFTC’s and SEC’s responses to mandates on data standards, OFR led U.S. government efforts to promote global adoption of the identifier. Within the expert group appointed by the Financial Stability Board to develop recommendations for the Group of 20 (G20) countries regarding the identifier, OFR led the U.S. consultative group, and OFR staffers have been leading, singly or jointly, the development of a governance framework and operating protocols. The Financial Stability Board endorsed the expert group’s recommendations in May 2012. In June 2012, the G20 endorsed the proposal, which includes a target for implementing a legal entity identifier system globally, with some allowance for variation across countries, by March 2013. OFR continues to serve as the vice-chair for the Americas on the group charged with implementing the identifier. While the previous examples show the progress FSOC and OFR have made in terms of promoting collaboration, other examples suggest that additional actions are needed. In our prior work, we have identified practices that agencies can use to enhance and sustain their collaborative efforts. These include identifying and addressing needs by leveraging resources, agreeing on roles and responsibilities, and establishing mutually reinforcing or joint strategies. The examples below highlight areas in which FSOC’s collaboration efforts could be enhanced by more fully reflecting these principles. Leveraging resources. FSOC has not taken advantage of opportunities to leverage resources through its authority to appoint technical and professional advisory committees. In addition to state regulators and council members, the Dodd-Frank Act specifies that such committees could include other persons. Such persons could include industry representatives and academics. Industry representatives have commented on the benefits of having industry input through such a committee, but, to date, FSOC has not established such committees. Moreover, the ability of FSOC members to leverage expertise varies. For example, while FSOC members from federal regulatory agencies are able to draw on staff from across their agencies, the independent insurance member and state representatives have limited support structures. The state representatives are limited by the number of support staff that have been allowed to sign required confidentiality agreements, and this may limit these members’ access to certain regulatory expertise.
The representative of state insurance regulators noted that he must rely solely on his limited department staff and a small group of staff from the National Association of Insurance Commissioners that have been detailed to his department and have signed the confidentiality agreement to support his FSOC activities, including committee representation. In a letter to the FSOC Chairperson, the National Association of Insurance Commissioners and the State Insurance Representative who is a member of FSOC stated that the State Insurance Representative had been prohibited from discussing or seeking guidance from other relevant state regulators even on a confidential basis. Subsequent to this letter, FSOC issued an “operational interpretation” of the MOU on the treatment of nonpublic information. This interpretation states in part that the MOU does not prevent an FSOC member from consulting or discussing with anyone FSOC proposals, rules, or other matters, provided that the member does not (1) disclose specified types of confidential information; (2) attribute nonpublic proposals, rules, or other matters to FSOC or any of its members; or (3) disclose their views on such matters. Similarly, the representative of the state banking regulators is supported by four staff from the Conference of State Bank Supervisors and the representative of the state securities regulators by two staff from the North American Securities Administrators Association. The state banking member’s staff noted that Treasury had worked with the state members to secure their assistance, and that the State Banking Supervisor generally had adequate staff support. However, they did note that they think the process limited access to other state banking supervisors with specialized expertise. For example, they noted that they might want to consult with New York banking staff on international issues before FSOC, but confidentiality restrictions limit them to sharing information on FSOC matters only with the member, his state banking staff, and others who have signed a confidentiality agreement. In addition, the FSOC state insurance representative and his staff told us that because of the confidentiality restrictions, they had limited their discussions at International Association of Insurance Supervisors meetings, because they thought they could not speak to issues being discussed within FSOC. Agreeing on roles and responsibilities. As noted earlier, FSOC is tasked with monitoring the financial services marketplace to identify potential threats to U.S. financial stability, and OFR must develop and maintain metrics and reporting systems for risks to U.S. financial stability as well as monitor, investigate, and report on changes in system-wide risk levels. These responsibilities overlap somewhat, but this overlap is not unexpected given OFR’s primary mission of supporting FSOC. FSOC and OFR staff cited their statutory responsibilities for monitoring risks to U.S. financial stability as the reason that both organizations are pursuing efforts in this arena. FSOC and OFR staff also noted that OFR participates on the Systemic Risk Committee, allowing for some coordination of efforts. The Dodd-Frank Act defines certain responsibilities for FSOC and OFR, but the lack of clear responsibility for implementation can lead to duplication, confusion, and gaps in their efforts. This risk is further compounded by the fact that many FSOC member agencies have risk analysis and data collection functions associated with their supervisory responsibilities.
Some of these functions are explicitly focused on risks to financial stability, and some member agencies have created their own programs to examine these risks. For example, in 2010 the Federal Reserve created an Office of Financial Stability Policy and Research to identify and analyze potential threats to financial stability. FDIC, SEC, and the Federal Housing Finance Agency have also created offices in recent years to monitor risks to financial stability originating in their regulated markets. To the extent that these programs provide unique information to FSOC, they will be contributing to the overall effort. However, if not properly coordinated, these separate efforts could be duplicative, resulting in wasted time and resources. Establishing reinforcing or joint strategies. To achieve a common outcome, collaborating agencies need to establish strategies that work in concert with those of their partners or that are joint in nature. Such strategies help in aligning activities, core processes, and resources to reach a common outcome. In this area, FSOC has taken actions to better coordinate members’ rulemakings. In October 2010, it issued an integrated implementation road map for the Dodd-Frank Act that included a list of the rules regulators were required to promulgate, provided a time line for those rulemakings, and identified the agencies responsible for each rulemaking. FSOC has also developed a consultation framework for the agencies involved in rulemakings where consultation is required by the Dodd-Frank Act. The framework establishes time frames for coordinating three tasks: initial interagency meetings, circulation of term sheets for interagency comments, and circulation of proposed rules for interagency comments. In a November 2011 report, we noted that although FSOC’s road map and consultation framework were a positive development in facilitating coordination, they had limited usefulness. For example, the consultation framework does not provide, nor according to FSOC staff is it intended to provide, any specifics about staff responsibilities or processes to facilitate coordination. For example, it does not mention the extent to which interagency coordination is required or what happens when rulemakings conflict with or duplicate each other. As a result, we recommended that FSOC work with the federal financial regulatory agencies to establish formal coordination policies that would clarify issues such as the timing of coordination, the process for soliciting and addressing comments, and FSOC’s role in facilitating coordination. To date, FSOC has not implemented this recommendation. Industry representatives with whom we spoke also questioned why FSOC could not play a greater role in coordinating member agencies’ rulemaking efforts. As an example of how coordination could be improved, representatives noted that FSOC’s rule and interpretive guidance on designating nonbank financial companies for Federal Reserve supervision was finalized before the Federal Reserve had issued a rule laying out the requirements for determining whether a company would fall within the statutory definition of a financial company. FSOC and Federal Reserve staff said that after the timetable was set for the FSOC rule, the Federal Reserve decided that it needed to clarify some issues with its rulemaking, creating the anomaly of having a process for designating financial companies before the requirements for meeting the definition of a financial company had been adequately identified. 
The Secretary of the Treasury, FSOC’s Chairperson, has noted in testimony before the Congress that he does not have the power to force FSOC members to collaborate on rulemakings. The Dodd-Frank Act requires FSOC to consult on a number of regulatory agency rulemakings, but it gave FSOC a few responsibilities that have led it to issue its own rules. These responsibilities include the authority to designate FMUs as systemically important and nonbank financial companies for supervision by the Federal Reserve under its enhanced prudential standards and to reevaluate the latter designations annually. While individual designations are not made by rule, in an effort to be more transparent, FSOC has issued rulemakings explaining the processes and criteria it will follow in making the individual designations. However, FSOC is not required to and has not developed a separate process to assess the overall impact of these designations, including whether they are having the intended result of improving U.S. financial stability. The Dodd-Frank Act also mandated that FSOC issue a number of reports during its first two years, and FSOC has issued these by the mandated due dates. Most of these were one-time reports; however, FSOC is also mandated to report annually on a number of items, including potential emerging threats to financial stability. Both the 2011 and 2012 Annual Reports identify a number of threats, but they do not use a systematic forward-looking process for doing so. As a result, the reports may not be providing the public and Congress with the best information for guiding their decisions relative to these threats. The Dodd-Frank Act provided FSOC with the authority to designate FMUs as systemically important. FMUs are to be considered systemically important if FSOC determines that the failure of an FMU or a disruption in its functioning could threaten U.S. financial stability. Similarly, the Dodd-Frank Act provided FSOC with the authority to designate nonbank financial companies for supervision by the Federal Reserve under its enhanced prudential standards. The act stipulates that FSOC may designate these companies for Federal Reserve supervision if material financial distress at that company, or the nature, scope, size, scale, concentration, interconnectedness, or mix of the activities of the company, could pose a threat to U.S. financial stability. The Federal Reserve has not issued final rules on its enhanced prudential standards, but other final rules that will apply to designated nonbank financial companies have been issued. These rules include a rule on resolution plans, or “living wills,” jointly issued by the Federal Reserve and FDIC, that will require designated nonbank financial companies to prepare resolution plans, and a rule, issued by the Treasury, which establishes an assessment schedule for the Financial Research Fund—the fund that finances OFR and FSOC under the Dodd-Frank Act. FSOC issued final rules on the processes FSOC intends to use for designating FMUs as systemically important and nonbank financial companies for Federal Reserve supervision in July 2011 and April 2012, respectively. In accordance with the Dodd-Frank Act, both rules specify that two-thirds of FSOC’s voting members, including the chairperson, must vote to designate FMUs and nonbank financial companies.
Each rule, with any accompanying interpretive guidance, also outlines a multistage process that FSOC intends to follow in designating these entities, including a process for designated entities to request an FSOC hearing before the designation becomes final. In its 2012 Annual Report, FSOC reported that it had designated eight FMUs as systemically important. In contrast, FSOC has not yet designated any nonbank financial companies. In April 2012, FSOC also issued a rule implementing the Freedom of Information Act (FOIA). The Dodd-Frank Act states that FOIA, including its exceptions from disclosure, applies to any information submitted to FSOC or OFR under title I of the act. It further states that FSOC, OFR, and member agencies are to maintain the confidentiality of such information if that confidentiality is protected from public disclosure by federal or state law. While FOIA would apply with or without a rule, FSOC issued a rule setting out the procedures for requesting access to information contained in its records. According to Treasury officials and staff, FSOC’s rulemaking authority is narrow compared to that of the member agencies. The Dodd-Frank Act requires that those agencies issue a large number of rules, while it assigns few authorities to FSOC that may lead to rulemakings. Although FSOC does not have extensive rulemaking authority or written policies and procedures, its rulemakings followed a general process. For each rule, FSOC published a notice of proposed rulemaking in the Federal Register before issuing a final rule and included a time period for public comments (see table 1). Treasury staff noted that FSOC was not required to issue the various rulemakings but went through this process to provide greater public transparency of its processes. According to FSOC and Treasury officials, officials and staff from Treasury’s Office of General Counsel led the rule-drafting process, with officials and staff from Treasury’s Office of Domestic Finance, including the FSOC policy staff, and members’ staffs contributing significantly to the drafting of the designations rules. The processes relied on various groups and mechanisms to get feedback from officials and FSOC members’ staffs, including standing committees and ad hoc working groups. The Deputies Committee was briefed regularly throughout the process, especially on issues that could not be resolved in other committees or working groups. Deputies kept their respective FSOC members informed throughout the rulemakings, and the members received all of the rulemaking notices and final rules at least 48 hours before they were to be voted on at FSOC meetings. FSOC members voted unanimously to issue all of the rulemaking notices and final rules before they were published in the Federal Register. Although the process for the rulemakings followed a general pattern, the number of notices, the time between the initial notice and the final rule, and the number of comments varied considerably across rules. Generally, these differences reflected differences in priorities and the potential impact of each rule. For example, FSOC and some member agency officials attributed the longer lapse between receiving comments on the notice for implementing FOIA and issuing the final rule to the relatively low priority attached to completing this rule. Officials told us that the rule remained a relatively low priority because FSOC had not yet begun gathering information under the FMU and nonbank financial company designation rules.
In contrast, the long gap between the receipt of comments on the first notice of proposed rulemaking for the nonbank financial company rule and the issuance of the final rule reflects the complexities of developing a rule that encompasses a broad range of industry segments and the potential impact on them. During the gap between the first and second notices of proposed rulemaking for designating nonbank financial companies, FSOC and OFR staff developed information to support a set of thresholds for determining which nonbank financial companies would pass from the first stage of the designation process to the second. The thresholds, which were included in the interpretive guidance that accompanies the second notice, use publicly available information so that the first stage would be transparent. Financial companies have to meet a size threshold of $50 billion in assets and one of five other thresholds, including measures of leverage and debt. The thresholds generally reflect staff calculations. Staff generally calculated each threshold by setting it at an interval above the mean that reflects the dispersion of the data around the mean. (A simplified numerical illustration of this approach appears below.) Staff also tested to see whether certain companies that experienced material distress during the 2007-2009 financial crisis would have been captured by the threshold. The data used were generally either data on the 19 largest bank holding companies or nonbank financial companies in 2007 and 2008. FSOC is subject to laws and executive orders that require certain regulatory analyses as part of its rule-making processes. These include the Paperwork Reduction Act and the Regulatory Flexibility Act, as well as Executive Orders 12866 and 13563 (Executive Orders). Among other things, the Paperwork Reduction Act requires agencies to justify any collection of information from the public and to estimate the time and expense required to comply with the paperwork requirements in the rule. The Regulatory Flexibility Act requires federal agencies to assess the impact of their regulation on small entities and consider regulatory alternatives to lessen any regulatory burden. The Executive Orders require FSOC to assess the economic effects of economically significant rules, including the quantitative and qualitative benefits and costs of those regulations. However, FSOC was required to consider costs and benefits only as they relate to the Paperwork Reduction Act for the FMU and nonbank financial company rulemakings. As a result, these rules contain estimates of the time needed to comply with paperwork requirements. FSOC estimates the annual reporting burden for the FMU rule at 500 hours and for the nonbank financial company rule at 1,000 hours. The FMU rule does not include an estimate of the cost of the projected hours; the cost for the hours imposed by the nonbank financial company rule is estimated at $450,000 a year. FSOC concluded that small entities were unlikely to be designated as posing a risk to U.S. financial stability, and thus an analysis of the impact of the FMU and nonbank financial company designation rules on small entities was not required. In addition, FSOC did not conduct a benefit-cost analysis for the rules designating FMUs or nonbank financial companies because the Office of Management and Budget determined that these rules were not economically significant.
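As a simplified illustration of the stage-one threshold approach described above, the following sketch sets a hypothetical leverage threshold at an interval above the sample mean that reflects the dispersion of the data, and then applies it together with the $50 billion size threshold. The company data, the choice of one standard deviation, and the use of a single non-size metric are invented for illustration; FSOC’s actual guidance specifies five non-size thresholds and relied on the calibration data described above.

```python
# Illustrative sketch only: a "mean plus a dispersion-based interval"
# threshold, applied with the $50 billion size screen. All figures and
# the k = 1.0 interval are hypothetical, not FSOC's actual calculations.
from statistics import mean, stdev

# Hypothetical leverage ratios (total assets / total equity) for a sample
# of large financial companies used to calibrate the threshold.
sample_leverage = [9.8, 11.2, 12.5, 13.1, 14.0, 15.3, 16.8, 18.2, 22.5, 25.1]

k = 1.0  # hypothetical number of standard deviations above the mean
leverage_threshold = mean(sample_leverage) + k * stdev(sample_leverage)
print(f"leverage threshold = {leverage_threshold:.1f}")

def passes_stage_one(total_assets_billions, leverage):
    """A company advances if it meets the size threshold and at least one
    other threshold (only the leverage test is sketched here)."""
    meets_size = total_assets_billions >= 50
    meets_other = leverage >= leverage_threshold
    return meets_size and meets_other

# Hypothetical companies: (name, total assets in $ billions, leverage).
for name, assets, lev in [("Company A", 180, 21.0),
                          ("Company B", 75, 12.0),
                          ("Company C", 40, 30.0)]:
    print(f"{name}: advances to stage two = {passes_stage_one(assets, lev)}")
```

The sketch is meant only to make the mean-plus-dispersion logic concrete; it does not replicate the thresholds or back-testing FSOC actually performed.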
Treasury officials noted that the rule did not impose substantive requirements on specific entities, but only laid out the process by which they could become subject to other rules and regulations. In addition, FSOC member staff noted that costs and benefits of the designation were not among the factors that the Dodd-Frank Act directs FSOC to consider when making a designation. Designating FMUs and nonbank financial companies was intended to address certain risks to financial stability posed by these entities. The designations, however, have the potential to confer certain benefits and costs on the wider economy and individual entities being designated. Examples of potential benefits and costs of subjecting FMUs and nonbank financial companies to heightened supervision include the following:

Economy-wide benefits. The Dodd-Frank Act provides FSOC with the authority to designate nonbank financial companies because Congress believed that these companies could threaten U.S. financial stability. Subjecting companies to enhanced supervision may contribute to financial stability.

Individual benefits. Some research has shown that certain large, interconnected financial institutions considered too big to fail may have higher credit rating agency ratings and lower borrowing costs than would otherwise be warranted. As a result, designated nonbank financial companies that are not already treated as too big to fail by rating agencies or markets could see their borrowing costs fall.

Economy-wide costs. Industry representatives have noted that regulations such as minimum capital requirements that may be imposed on designated entities have the potential to reduce the availability of credit and slow economic growth.

Individual costs. Some of those who commented on the nonbank financial company rule noted that being designated would impose a significant regulatory burden on the designated companies. Designated nonbank financial companies will be subject to supervision similar to that for large bank holding companies, required to prepare resolution plans, and assessed fees to fund the operation of OFR and FSOC. However, the impact of these Dodd-Frank Act provisions on designated nonbank financial companies will not be known until the rules are applied. Designated FMUs could also experience increased costs.

The Dodd-Frank Act requires FSOC to rescind any designation if the institution no longer meets the FMU or nonbank financial company standards and specifically requires FSOC to reevaluate nonbank financial company designations at least annually. The final rule for designating certain nonbank financial companies for enhanced supervision says FSOC will notify a nonbank financial company prior to an annual reevaluation and provide the company up for review with an opportunity to submit written materials to contest the designation. However, the rule also notes that reevaluations will focus on material changes since a previous review rather than a full replication of the original designation process. In the interpretive guidance for the nonbank financial company designations, FSOC says that it also intends to review the appropriateness of both the stage one thresholds and the levels of the thresholds that are specified in dollars as needed, but at least every 5 years, and to adjust the thresholds and levels as it may deem advisable. However, FSOC has not set up processes to conduct a comprehensive assessment of the overall impact of designations.
Doing a comprehensive analysis to assess whether designations are having their intended impact of providing greater financial stability and the extent of any other impacts will be challenging. In particular, establishing a baseline from which to evaluate the overall impact of various rules will likely be complex because the impact of being designated will depend on the application of a number of rules being written by multiple independent regulatory agencies and issued over a span of time. For example, the rules that the CFTC, Federal Reserve, and SEC are writing will help determine the impact of being designated a systemically important FMU. Similarly, the impact of being designated a nonbank financial company will be influenced by the rule the Federal Reserve is writing to implement enhanced prudential standards; the Federal Reserve and FDIC rule on resolution planning; and Treasury’s rule on assessments to fund FSOC and OFR. Moreover, not all of these agencies are required to conduct cost-benefit analyses that might be useful in establishing a baseline for ongoing evaluation. For example, neither the Federal Reserve nor FDIC is subject to the Executive Orders that require an economic analysis of the costs and benefits of certain rules. Furthermore, while some regulatory agencies may conduct periodic retrospective reviews of their rules, these reviews tend to focus only on the rules issued by their agency. FSOC is uniquely positioned to address this challenge. FSOC is responsible for designating FMUs and nonbank financial companies, and its member agencies are responsible for writing the rules that will impact these designated entities. Moreover, FSOC can rely on OFR for some data collection and analysis. However, FSOC members would need to collaborate on such an assessment, because FSOC policy and OFR staff, who are Treasury employees, may not have access to all of the needed information. In addition, collaboration is needed because, according to Treasury officials, it would be inappropriate for FSOC staff to review rules drafted by independent agencies unless those agencies agreed to participate in the comprehensive assessment. Without such an assessment, decision makers may not have the information they will need to determine whether designating new entities for enhanced supervision and other requirements and restrictions is addressing a perceived gap in the regulatory system and improving the stability of the financial system or whether policy changes should be considered. As table 2 shows, the Dodd-Frank Act mandated that FSOC issue a number of reports, including five one-time studies and ongoing annual reports. Although some of the timelines were short—two of the studies had to be issued in 6 months—and the subject matter difficult, FSOC met all of its mandated report timelines and generally strove to address the specific items in the mandate. For example, the mandated studies generally began with a discussion of the mandate itself and the extent to which the report could address certain questions. The 2011 Annual Report also generally addressed the subjects in the mandate, including identifying emerging threats to the financial stability of the nation and recommendations to enhance the integrity, efficiency, competitiveness, and stability of U.S. financial markets; promote market discipline; and maintain investor confidence. The processes that FSOC used to issue all of the reports were generally similar to those used for rulemakings.
Treasury officials led activities related to issuing the studies, except that Federal Reserve staff led activities for, and were the authors of, the study on concentration limits. For the annual reports, FSOC brought on detailees from Federal Reserve District Banks to lead the process. For all of the reports, the process relied on ad hoc working groups of member staff to provide input. FSOC also relied on the Deputies Committee to help manage the process and keep FSOC members informed of key decisions. The members voted unanimously to issue all of the reports. FSOC’s annual reporting process is an ongoing responsibility, and, in the absence of a strategic plan, the annual report functions as FSOC’s major strategic planning document and method for communicating with Congress and the public, especially regarding potential emerging threats to U.S. financial stability. FSOC’s early annual reports provide extensive information about the current economy and complex issues, such as high-frequency trading and the MF Global bankruptcy. In addition, the reports provide extensive discussions of current known threats such as those associated with money market funds and the European sovereign-debt crisis and make some recommendations to address them. However, FSOC has not developed a structure that supports having a systematic or comprehensive process for identifying potential emerging threats. The process for identifying these threats is similar, in some ways, to that used by the Systemic Risk Committee. Members’ staffs, including some members of the Systemic Risk Committee, identify specific threats for consideration. As a result, new threats that members or staff have not already identified may not be included. In addition, the lack of a systematic process for identifying potential emerging threats leads to potential inconsistencies in identifying such threats. For instance, certain potential threats related to U.S. debt are not in the 2011 Annual Report. Instead, this report has a conflicting message on the danger the U.S. debt poses to financial stability. The project leader of the 2011 Annual Report said that the report did not include the U.S. debt as an emerging threat because of issues of balance and the inappropriateness of FSOC speculating on the credit risk associated with U.S. Treasury securities. However, the 2012 Annual Report identifies the U.S. debt as a potential threat, but does not explain what has changed since the 2011 report. Similarly, the 2011 report includes several threats associated with possible unintended consequences of new regulations being written to implement the Dodd-Frank Act, but the 2012 report does not include these threats. The 2012 report does include a framework for identifying potential emerging threats, but this framework, which separates threats into shocks to the system and vulnerabilities in the system that would exacerbate shocks, is not equivalent to the kind of systematic analysis that would help determine both the likelihood of a threat and its likely severity. (A simplified illustration of such an analysis appears below.) Without a systematic process that consistently identifies threats, Congress and the public might believe that a threat has grown in importance or been addressed when that is not the case. Similarly, neither the 2011 nor the 2012 annual report uses a systematic forward-looking approach to identify potentially emerging threats. As a result, they comingle threats that emerged during the 2007-2009 crisis, current threats, and potentially emerging threats.
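To make concrete what a systematic analysis of likelihood and severity could look like, the following is a minimal sketch that scores and ranks a set of purely hypothetical threats. The threat names, the 1-to-5 scores, and the simple likelihood-times-severity scoring scheme are invented for illustration and do not reflect any FSOC analysis or the contents of its annual reports.

```python
# Illustrative sketch only: ranking potential emerging threats by a simple
# likelihood-times-severity score, the kind of explicit prioritization
# discussed above. All names and scores are hypothetical.
threats = [
    # (threat, likelihood 1-5, severity 1-5)
    ("Hypothetical threat A: rapid build-up in a new asset class", 3, 5),
    ("Hypothetical threat B: operational disruption at a key utility", 2, 4),
    ("Hypothetical threat C: gradual erosion of underwriting standards", 4, 3),
]

# Sort by the combined score, highest priority first.
ranked = sorted(threats, key=lambda t: t[1] * t[2], reverse=True)

print("Priority  Score  Threat")
for rank, (name, likelihood, severity) in enumerate(ranked, start=1):
    print(f"{rank:>8}  {likelihood * severity:>5}  {name}"
          f"  (likelihood={likelihood}, severity={severity})")
```

An actual process would rest on evidence and supervisory judgment rather than simple multiplication, but even a basic scheme of this kind would let an annual report distinguish the most pressing potential emerging threats from the rest.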
Although the 2012 report notes that the structural vulnerabilities associated with mortgage-backed securities backed by subprime mortgage debt, which contributed to the 2007-2009 financial crisis, built up over an 8-year period, the report does not use a systematic mechanism for identifying similar kinds of asset build-ups or other market changes that might signal a potential emerging threat. Rather, the report often identifies risks, such as those associated with the European sovereign-debt crisis or money market funds, which are ongoing or have previously been identified, although it acknowledges that these events may change in the future in ways that are not currently known. The 2012 report does include at least one area that could be considered potentially emerging—threats associated with having a low interest rate environment. Specifically, it notes threats associated with market participants taking on more risk to increase their earnings but says it does not see evidence of this now. Threats that emerged during the crisis or those that are currently evident likely require different and perhaps more immediate responses than those that are potentially emerging. The comingling of well-known risks with risks that are developing, but less well-known, reduces the ability of policymakers and market participants to develop effective and timely responses for the latter. Further, the FSOC process for identifying threats limits its ability to explicitly prioritize the large number of threats identified. The 2011 report includes over 30 threats without explicitly specifying which are most important. The 2012 report also includes a lengthy list of threats without explicit prioritization. In contrast, other entities, such as the International Monetary Fund and European Central Bank, issue reports that explicitly prioritize potentially significant threats. Treasury and FSOC officials and staff noted that FSOC’s Annual Reports have a different purpose and implicitly prioritize the threats in the recommendations sections of the reports. For example, they noted that the recommendation for money market funds—a threat included in the potentially emerging section—notes that the vulnerabilities associated with money market funds are a particular focus of FSOC. They contrasted this recommendation with other recommendations, such as those related to the low interest rate environment, which note that regulators and industry should adopt certain practices to help monitor the situation. However, the lack of a systematic process that explicitly prioritizes potential emerging threats leaves policymakers without the information they need to focus on or allocate scarce resources to the most important threats. The 2007-2009 financial crisis highlighted how the nation’s fragmented regulatory structure was not equipped to monitor and address risks across the financial system, nor did it have the needed information to facilitate that oversight. To address this weakness, Congress created FSOC and OFR to improve the U.S. government’s ability to identify and respond to future threats to financial stability. This is a daunting task, and one that is made more challenging as FSOC and OFR must concurrently stand up their organizations and establish a sense of collective accountability among the independent regulators and other members. Successfully implementing their mandates will require FSOC members to actively work together and with external stakeholders.
Appropriate accountability and transparency mechanisms also need to be established to determine whether FSOC and OFR are effective and to ensure that the public and Congress have sufficient information to hold the entities accountable for results. Over the last 2 years, FSOC and OFR have made progress on these fronts. Staff from FSOC member agencies told us that the level of collaboration and communication among the agencies has increased since the creation of FSOC and that such collaboration has resulted in more information sharing and diverse perspectives being considered. OFR has also made contributions to international efforts, such as coordinating U.S. input on the Legal Entity Identifier, to enhance governments’ abilities to track financial activity. While FSOC and OFR have made some progress, continued efforts to improve the entities’ accountability, transparency, and collaboration are needed. As we have seen, for example: OFR issued a strategic framework in March 2012 that covered the period fiscal years 2012-2014. This represented an important step for the new agency in adopting leading practices in performance management. The framework identifies OFR’s strategic goals, highlights a number of objectives under those goals, and lays out implementation priorities for the first year covered by the document. However, as OFR acknowledges, the framework does not include key elements of a performance management system, such as linking programmatic, human resources, and budgetary decision making to its strategic goals and developing performance measures. OFR expects to communicate progress on these key elements when it provides a new human resources plan to Congress in September 2012 and in its fiscal year 2014 budget submission. Moving forward, transforming its framework into a comprehensive strategic planning and performance management system can provide the agency with a long-term vision and allow others to hold it accountable which will be critical for OFR. The critical role of monitoring threats to financial stability and responding to emerging threats also needs to be further developed. Potential threats to financial stability are discussed at FSOC meetings and FSOC has established a Systemic Risk Committee to facilitate coordination among members’ staffs, including member agencies that often have their own groups devoted to risk analysis. In addition, OFR is evaluating a variety of potential tools for assessing financial stability and studying methods to improve stress tests. Collectively, these efforts remain incomplete. The approach of the Systemic Risk Committee can help FSOC analyze known risks but does not take full advantage of FSOC member agency resources to identify new threats to the financial system. Without more systematic and comprehensive mechanisms, including comprehensive sharing of key financial risk indicators, risks to financial stability may develop in the system without being recognized. FSOC and OFR have attempted to be transparent with some of their decision making and activities. FSOC, for instance, posts the minutes from its meetings and other key documents on Treasury’s website, and it provided insight into its designations processes through multiple rulemakings and comment periods as well as by providing additional information on the designation of FMUs in its 2012 Annual Report. OFR also posted some information on Treasury’s website and provided information on a wide range of OFR activities and research in its first annual report issued in July 2012. 
As we found, however, both FSOC and OFR could be more transparent. For example, FSOC's minutes contain limited details about the council's discussions, and the amount of detail included in the minutes has declined over time. While some information discussed must remain confidential, given potential market sensitivities, legal restrictions on sharing certain information, and the need for members to deliberate, it is important for FSOC to strive to be as transparent as possible, given the potential impact of some of its decisions on institutions and markets. FSOC's and OFR's limited transparency has caused some former government officials, industry representatives, and academics to question whether the entities are making progress. Continued efforts to increase transparency will allow the public and Congress to better understand FSOC's and OFR's decision making, activities, and progress. FSOC and OFR have taken some steps to encourage collaboration, such as FSOC setting up standing committees composed of members' staffs and OFR beginning to establish a professional and technical advisory committee. However, more needs to be done to promote collaboration—both among FSOC members and between FSOC and external stakeholders. For example, FSOC has not yet set up advisory committees, and OFR and FSOC have not yet clarified their responsibility for implementing statutory requirements for monitoring and reporting on threats to U.S. financial stability, including the responsibilities of member agencies. More fully incorporating the key practices for successful collaboration, including agreeing on roles and responsibilities and establishing reinforcing or joint strategies, could make FSOC's and OFR's existing collaboration efforts more effective. Effective collaboration could eliminate unnecessary duplication for both the industry and regulators. In addition, it could help to fill regulatory gaps so that risks would not migrate to unregulated markets or countries as they did prior to the 2007-2009 financial crisis. One of FSOC's most significant actions, to date, has been finalizing its rules for designating FMUs and nonbank financial companies for additional oversight. The Congress intended that the enhanced supervision of those entities designated would lead to greater financial stability. In addition, the designations will likely have other important ramifications for the designated entities—which will become subject to a number of other rules and regulations—and potentially the nation's economy. While FSOC must periodically reevaluate the nonbank financial company designations and intends to review the thresholds for stage one of the nonbank financial company process at least every 5 years, it is not required to conduct a comprehensive assessment to determine whether the designations are having their intended impact of improving financial stability or what other consequences they may be having. Establishing a baseline and developing a framework to comprehensively assess the impact of the designations will be difficult because of the number of independent regulators involved. But, without such an analysis, Congress, the affected institutions, the public, and FSOC cannot determine whether the designations and associated oversight are actually helping to improve financial stability. While FSOC's annual reports identify a number of potential emerging threats to the nation's financial stability, they do not use a systematic forward-looking approach to identify such threats. 
Thus, some threats may not be identified consistently or at all. Threats such as those associated with the long-term U.S. debt appear in FSOC's 2012 Annual Report but did not appear in FSOC's 2011 Annual Report, and the reports offer no explanation for the change. The reports are also not forward-looking in that many of the identified threats, such as those associated with money market funds or the European debt crisis, are not potentially emerging but rather emerged during the 2007-2009 financial crisis or more recently. Finally, the reports do not explicitly prioritize the emerging threats, relying instead on a careful reading of the recommendations to determine which are critical. In addition, these recommendations do not consistently identify which member agency or agencies are recommended for implementing or monitoring the council's recommendations. The lack of this information makes it difficult for decision makers to determine which well-recognized threats require immediate action, which potential emerging threats are most likely to have severe outcomes, and how best to address the differing threats. It also does not allow Congress to hold FSOC accountable for identifying potential emerging threats or implementing the recommendations. Whether FSOC and OFR fundamentally change the way the federal government monitors threats to financial stability remains an open question. This is due, in part, to the newness of the entities, as both continue to develop needed management structures. But limits to FSOC's and OFR's transparency also contribute to questions about their effectiveness. Addressing the issues we have identified will help FSOC and OFR shed more light on their decision making and activities and allow Congress to hold them accountable for results. Moreover, addressing these issues can help FSOC and OFR to further promote collaboration among FSOC's members and with external stakeholders, which is critical to their ability to achieve their missions. If they do not succeed in achieving their missions, the financial system will remain vulnerable to large or multiple shocks that could result in the large losses in asset values, higher unemployment, and slower economic growth associated with previous financial crises. While FSOC and OFR have made progress in establishing their operations and approaches for monitoring threats to financial stability, developing accountability and transparency mechanisms, and enhancing collaboration among the financial regulatory agencies, these efforts could be strengthened. Therefore, we are recommending that the Secretary of the Treasury take 10 actions—some in his capacity as the Chairperson of FSOC, in consultation with FSOC members, and others in his leadership role for OFR, which does not yet have a confirmed director. We recommend that FSOC and OFR clarify responsibility for implementing requirements to monitor threats to financial stability across FSOC and OFR, including FSOC members and member agencies, to better ensure that the monitoring and analysis of the financial system are comprehensive and not unnecessarily duplicative. As FSOC continues to develop approaches for monitoring threats to financial stability, we recommend that FSOC develop an approach that includes systematic sharing of key financial risk indicators across FSOC members and member agencies to assist in identifying potential threats for further monitoring or analysis. 
To improve the transparency of FSOC and OFR operations, we recommend that FSOC and OFR each develop a communication strategy to improve communications with the public. The strategy could include using technology more effectively to communicate, such as having fully developed websites, sending regular notices to interested parties, and developing methods to communicate with the public. To support the growth of OFR into a viable and sustainable entity, we recommend that OFR build on its strategic framework by further developing its strategic planning and performance management system so that it links its activities to its goals and uses publicly available performance measures to measure its progress. To strengthen accountability and collaboration in FSOC’s decision making, we recommend that FSOC take the following six actions: Keep detailed records (for example, detailed minutes or transcripts) of closed door sessions of principals meetings and to the extent possible make them publicly available after an amount of time has passed sufficient to avoid the release of market-sensitive information or information that would limit deliberations. Establish formal collaboration and coordination policies that clarify issues such as when collaboration or coordination should occur and what role FSOC should play in facilitating that coordination. More fully incorporate key practices for successful collaboration that we have previously identified. Internally, this could include working with agencies to rationalize schedules for rulemakings and conducting collaborative system-wide stress testing. Externally, this could include using professional and technical advisors including state regulators, industry experts, and academics. Establish a collaborative and comprehensive framework for assessing the impact of its decisions for designating FMUs and nonbank financial companies on the wider economy and those entities. This framework should include assessing the effects of subjecting designated FMUs and nonbank financial companies to new regulatory standards, requirements, and restrictions; establishing a baseline from which to measure the effects; and documenting the approach. Develop more systematic forward-looking approaches for reporting on potential emerging threats to financial stability in annual reports. Such an approach should provide methodological insight into why certain threats to financial stability are included or excluded over time, separate current or past threats from those that are potentially emerging, and prioritize the latter. Make recommendations in the annual report more specific by identifying which FSOC member agency or agencies, as appropriate, are recommended to monitor or implement such actions within specified time frames. We provided a draft of this report to the Secretary of the Treasury—as the Chairperson of FSOC and in his leadership role for OFR—for review and comment. Treasury’s Under Secretary for Domestic Finance, on behalf of the Chairperson of FSOC, provided written comments, which are reprinted in appendix III. Treasury also provided technical comments on the draft report, which we incorporated as appropriate. Treasury solicited views from staff of the FSOC members and member agencies on the draft report and reflected these views in the comments provided to us. In their written comments, Treasury emphasizes the progress that FSOC and OFR have made since their creation. 
For example, Treasury highlights FSOC’s work in issuing a final rule and guidance relating to the designation of nonbank financial companies for enhanced supervision and designating eight systemically important FMUs that will be subject to enhanced risk management standards. Treasury also highlights OFR’s progress in building its organization and analytical capabilities, including the launch of OFR’s working paper and seminar series for research on financial stability and risk management. Treasury also outlines efforts FSOC and OFR have taken to promote transparency and accountability, including testifying before Congress, responding to requests for information from oversight bodies, conducting voluntary rulemakings, or making information available on websites. In addition, Treasury also emphasizes that the progress that each entity has made to date should be viewed with the understanding that both entities are relatively new. Nevertheless, Treasury recognizes that more work remains. In the report, we also describe FSOC’s and OFR’s efforts, to date, in fulfilling their statutory responsibilities and efforts to promote accountability and transparency. The report also notes that both entities were established in 2010. In its letter, Treasury states that officials will carefully consider the report’s findings and recommendations. Treasury further notes that the Secretary, in his role as Chairperson, will share the recommendations with the Council for their review and consideration. Treasury also offers initial reactions to several of our recommendations. First, regarding our recommendation that FSOC and OFR should clarify their responsibilities for monitoring threats to financial stability, Treasury states that there is no existing confusion or overlap of responsibilities. Furthermore, Treasury states that both organizations are working together to pursue their distinct, but complementary, statutory missions, and cites OFR’s efforts to develop the Financial Stability Monitor, a collection of indicators related to financial stability that Treasury expects will be shared with FSOC members. In the report, we point out that Congress gave both FSOC and OFR responsibilities for monitoring systemic risk—responsibilities that both entities must fulfill. We also highlight that multiple FSOC members, such as the Federal Reserve, also have ongoing efforts to monitor threats to financial stability. The report does not suggest that any overlap between these efforts currently exist. Rather, the report recommends that these similar statutory responsibilities and ongoing efforts should be clarified and carefully coordinated. While Treasury notes that no confusion or overlap currently exists, our past work has shown that without clearly delineating and coordinating roles and responsibilities there can be duplication of efforts, confusion, and regulatory gaps. In addition, the report notes the importance of a systematic and comprehensive approach to identifying threats to financial stability. While OFR’s Financial Stability Monitor could be a vehicle for sharing key financial risk indicators, it does not yet reflect a comprehensive interagency effort to collect and share indicators related to financial stability. 
Second, Treasury states that it expects FSOC will consider the effects on the financial system resulting from designation in its periodic assessments in response to our recommendation that FSOC develop a framework for assessing the impact of its decisions for designating FMUs and nonbank financial companies on the wider economy and those entities. The Dodd-Frank Act requires FSOC to periodically review the designation of nonbank financial companies, and FSOC intends to periodically review FMU designations. However, FSOC is not currently required to examine the potential economic impact of the designations. In our report, we detail the types of benefits and costs that individually designated firms, as well as the economy at large, may experience as a result of the designations. Given the potential magnitude of these benefits and costs of the designations, a comprehensive assessment of their impact is warranted. Third, Treasury agrees that OFR should implement a robust strategic planning and performance management system as the office grows. Treasury describes OFR’s past and ongoing efforts on this front, highlighting, for example, OFR’s March 2012 strategic framework. In the report, we note that the strategic framework was an important first step for OFR, and we also describe OFR’s efforts to develop an independent strategic planning and performance system, including performance measures, and time lines for publicly releasing this information. We acknowledge OFR’s efforts in continuing to develop the elements required for a strategic plan and performance management system and will review this information when it is publicly released. Finally, Treasury notes our recommendation regarding OFR’s communications strategy is consistent with the office’s ongoing efforts. The letter describes OFR’s efforts to improve communication and notes that FSOC’s website is being redesigned to improve usability and navigation. The report also recognizes OFR’s recent efforts to improve their communication methods, such as the recent capability OFR added to its website enabling the public to sign up for email alerts on recent OFR activities. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and members, the Secretary of the Treasury, and other members of FSOC. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact A. Nicole Clowers at (202) 512-8678 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives of this report are to examine the Financial Stability Oversight Council’s (FSOC) and Office of Financial Research’s (OFR) (1) challenges in fulfilling their missions; (2) efforts in establishing management structures and mechanisms to carry out their missions and attain their goals; and (3) activities for supporting collaboration among members and external stakeholders, including international bodies and regulators; as well as (4) FSOC’s processes used to issue rules and reports. To identify and examine any challenges faced by FSOC and OFR, we reviewed our prior reports on regulatory reform and the financial crisis. 
We also reviewed statements of government officials, members of Congress, and academic experts. In addition, we interviewed FSOC policy staff and support staff of FSOC members, including staff and officials at member regulatory agencies. At OFR, we interviewed senior officials and some staff members. To examine FSOC’s and OFR’s efforts in establishing management structures and mechanisms to carry out their missions, we reviewed the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd- Frank Act), FSOC’s bylaws and organizational structure (including its committee structure), and OFR’s strategic framework. We reviewed minutes from FSOC’s principals’ meetings (i.e., the meetings with the designated FSOC members, many of whom are heads of regulatory agencies), and we reviewed the minutes from the Federal Reserve’s Federal Open Markets Committee and the United Kingdom Interim Policy Committee meetings for comparison purposes. We also examined the entities’ fiscal years 2012 and 2013 budget requests, budgetary and staffing data, and congressional testimonies. We assessed the reliability of OFR’s staffing data by comparing the data provided with information contained in the President’s budget for fiscal year 2013 and FSOC testimony in April 2011. We also tested the data for consistency over time. We determined the data were sufficiently reliable for our purposes. To understand the steps OFR is taking to protect the sensitive data it collects, we interviewed OFR officials with knowledge of data security efforts, reviewed the OFR strategic framework, and reviewed Congressional testimony provided by OFR. In addition, we reviewed the Department of the Treasury Inspector General reports on the stand-up of OFR and the Consumer Financial Protection Bureau. We used criteria from Standards for Internal Control in the Federal Government, our past work on the stand-up of federal entities, such as the Millennium Challenge Corporation, and managing for results to evaluate FSOC and OFR management structures and mechanisms, including their need for strategic planning and performance measures. We reviewed selected academic literature on tools used or proposed to identify potential threats to financial stability by entities that write financial stability reports, including the European Central Bank, Bank of England, and International Monetary Fund, and others, including the Institute for International Finance and Pew Financial Reform Project. We also attended FSOC’s and OFR’s December 2011 conference entitled Macroprudential Toolkit: Measurement and Analysis. We interviewed some FSOC members; officials from FSOC federal regulatory agencies whose heads are members of FSOC (member agencies); support staff of other FSOC members; FSOC, OFR, and other Treasury officials and staff; and academics who have published research related to systemic risk and worked on financial stability reports. We also coordinated with the Treasury Office of Inspector General and the Council of Inspectors General on Financial Oversight regarding their ongoing audits of FSOC and OFR. To determine how FSOC and OFR support collaboration, we reviewed our criteria for effective collaboration and compared it to FSOC and OFR policies and practices. We analyzed the Dodd-Frank Act requirements for collaboration, FSOC’s transparency policy, hearing procedures, Dodd- Frank Act integrated implementation roadmap, and memorandum of understanding on information sharing. 
We also reviewed FSOC’s products, such as the 2011 Annual Report, and interviewed FSOC staff and officials from FSOC member agencies and some FSOC member support staff to determine how FSOC members collaborate, including how they participate in the drafting of the products. To determine how collaboration practices among and between domestic and international financial regulators have changed since the creation of FSOC, we reviewed FSOC, OFR, and some member agencies’ congressional testimony and reports from international bodies, such as the Financial Stability Board. Finally, we interviewed representatives from industry trade groups, government officials in the United Kingdom, and experts from the European Union. We selected individuals from the United Kingdom and European Union because those entities have experience working with U.S. federal financial regulators and councils designed to enhance financial stability. To examine the process and procedures FSOC and OFR used in issuing products, we analyzed FSOC rules or reports issued before July 2012 including FSOC’s 2011 and 2012 Annual Reports. We analyzed the content of public comments on three proposed rules to examine how FSOC addressed them in the final rules. We also reviewed documentation on the process FSOC staff used to document and address comments from member agencies on the 2011 Annual Report. In addition, we examined the analyses and other materials provided to FSOC members prior to their meetings, including material to be presented at the meetings dating from October 2010 through December 2011 and selected documents thereafter through March 2012. We interviewed officials from FSOC member agencies, FSOC and OFR policy staff who had responsibility for contributing to the products within our scope, academics who have published research related to systemic risk, and industry or trade groups who submitted comments on a number of FSOC rules and reports. Finally, using testimonial and documentary evidence, we compared FSOC’s rulemaking process, rules, and reports with key practices identified in our prior work on rulemaking and standard economic practice, where applicable. We conducted this performance audit from November 2011 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. FSOC has established seven standing committees. The committees and brief descriptions are listed below. Deputies Committee: Coordinates and oversees the work of the interagency staff committees. The members of the Deputies Committee are senior officials from each of the member agencies. Treasury chairs this committee. Systemic Risk Committee: Includes senior staff and reports to the Deputies Committee. The committee is accountable for systemic risk monitoring and will play a role in prioritizing the review of sources of systemic risk and guiding the work of staff and the systemic risk subcommittees. Treasury chairs this committee. It has two subcommittees. Institutions Subcommittee: Focuses on identifying and analyzing issues that affect financial institutions in the medium and longer term. 
It also attempts to identify structural issues within financial institutions that could threaten financial stability, such as trends in leverage or funding structure, new products, or exposures to particular risks. The Board of Governors of the Federal Reserve (Federal Reserve) and the Federal Deposit Insurance Corporation (FDIC) chair this subcommittee. Markets Subcommittee: This subcommittee focuses on identifying and analyzing issues that affect financial markets in the medium and longer term, including structural issues within financial markets that could threaten financial stability, such as trends in volatility or liquidity, market structure, or asset valuations. The Commodity Futures Trading Commission (CFTC) and Securities and Exchange Commission (SEC) chair this committee. Designation of Nonbank Financial Companies Committee: Supports FSOC in considering, making, and reviewing designations of nonbank financial companies to be supervised by the Federal Reserve. The Federal Reserve and FDIC chair this committee. Designation of Financial Market Utilities Committee: Supports FSOC in considering, making, and reviewing designations of financial market utilities and payment, clearing, and settlement activities. The Federal Reserve, SEC, and CFTC chair this committee. Heightened Prudential Standards Committee: Supports FSOC in making recommendations for heightened prudential standards with respect to designated nonbank financial companies and large, interconnected bank holding companies, and with respect to other financial activities and practices that could impact financial stability. This committee also supports FSOC’s authorities for monitoring regulatory developments, facilitating information sharing, recommending supervisory priorities and principles, and identifying gaps in regulation that could pose risks. The Federal Reserve and the Office of the Comptroller of the Currency chair this committee. Orderly Liquidation Committee: Supports any FSOC recommendations on resolution plan requirements, consideration of filed resolution plans, and consideration of FDIC and Federal Reserve proposed orders to require divestiture; and consults with FSOC on rulemakings to implement the Title II orderly liquidation authority. FDIC and Treasury chair this committee. Data Committee: Supports FSOC coordination of, and consultation on, agency rulemakings on data collection, and seeks to minimize duplication of data gathering operations. The committee supports a coordinated approach to information sharing and provides direction to, and requests data from, the Office of Financial Research (OFR). Additionally, the committee works with OFR on data standardization efforts. Treasury chairs this committee. In addition to the contact named above, Kay Kuhlman (Assistant Director), Holland Avery, Nancy Barry, Emily Chalmers, Rudy Chatlos, Rachel DeMarcus, Christopher Forys, Michael Hoffman, Marc Molino, Susan Sawtelle, Rachel Siegel, and Henry Wray made significant contributions to this report. Other assistance was provided by Janet Eackloff and David Martin. | In 2010, the Dodd-Frank Wall Street Reform and Consumer Protection Act created FSOC to identify and address threats to the stability of the U.S. financial system and OFR to support FSOC and Congress by providing financial research and data. 
GAO was asked to examine (1) any challenges FSOC and OFR face in fulfilling their missions, (2) FSOC and OFR's efforts to establish management structures and mechanisms to carry out their missions, (3) FSOC and OFR's activities for supporting collaboration among their members and external stakeholders, and (4) the processes FSOC used to issue rules and reports. GAO reviewed FSOC documents related to the annual reports, rulemakings, and committee procedures, as well as documents on OFR's budget, staffing, and strategic planning. GAO also interviewed FSOC and OFR staff, FSOC member and member agency staff, and external stakeholders, including foreign officials, industry trade groups, and academics. These new organizations--the Financial Stability Oversight Council (FSOC) and Office of Financial Research (OFR)--face challenges in achieving their missions. Key FSOC missions--to identify risks and respond to emerging threats to financial stability--are inherently challenging, in part, because risks to financial stability do not develop in precisely the same way in successive crises. Collaboration among FSOC members can also be challenging at times, as almost all of them represent independent agencies that retained existing authorities. OFR faces the challenge of trying to establish and build a world-class research organization while meeting shorter-term goals and responsibilities. FSOC's and OFR's management mechanisms to carry out their missions could be enhanced to provide greater accountability and transparency. FSOC and OFR have taken steps toward establishing such mechanisms. FSOC has established seven standing committees, generally composed of staff of its members and member agencies, to support the council in carrying out its business and provide information to the council for decision making, and it has adopted a memorandum of understanding on information sharing to help govern its activities. FSOC and OFR have also issued annual reports on their activities and created web pages that provide some information to the public. However, certain mechanisms could be strengthened. For instance: FSOC's Systemic Risk Committee, which is responsible for identifying risks to financial stability, has procedures to facilitate analysis of risks raised by staff. However, without a more systematic approach and comprehensive information, FSOC member agencies, on their own, may not be well positioned to judge which potential threats will benefit from interagency discussions. GAO recommends that FSOC collect and share key financial risk indicators as part of a systematic approach to help identify potential threats to financial stability. Public information on FSOC's and OFR's decision making and activities is limited, which makes assessing their progress in carrying out their missions difficult. GAO recommends that (1) FSOC keep detailed records of closed-door sessions and (2) both entities develop a communication strategy to improve communications with the public. FSOC's annual reports--which serve as its key accountability documents--do not consistently identify which entities should monitor or implement the identified recommendations or give time frames for specific actions. To hold FSOC accountable for its recommendations, GAO recommends that FSOC recommend a lead agency or agencies to monitor or implement each recommendation within specified time frames. OFR issued a strategic framework in March 2012 as an important first step in adopting a strategic planning and performance management system. 
However, that document lacked some leading practices, such as linking activities to strategic goals and establishing performance measurement systems. GAO recommends that OFR further develop a strategic planning and performance management system that includes these elements and will allow it to be held accountable. Although FSOC and OFR have taken steps to promote collaboration among FSOC members and external stakeholders, FSOC could further adopt key practices. FSOC member agency staff noted that agencies have leveraged their joint expertise and resources to produce FSOC's mandated reports and rules. OFR has also taken steps to collaborate with external stakeholders by initiating a working paper series, moving to form an advisory committee, and coordinating U.S. efforts at the international level to help create a legal entity identifier for financial entities that could enable regulators to identify parties to financial transactions. However, FSOC could do more to promote collaboration. For instance, FSOC and OFR are required to monitor risks to financial stability, but they have not yet clarified agency responsibilities for implementation, creating the potential for regulatory gaps or duplication of effort. In addition, FSOC could take better advantage of statutory mechanisms to leverage external resources, including developing advisory committees. To improve collaboration and coordination among its member agencies and with external stakeholders, GAO recommends that FSOC (1) develop policies to clarify when formal collaboration or coordination should occur and FSOC's role in such efforts, (2) more fully incorporate key practices for successful collaboration that GAO has previously identified, and (3) clarify roles and responsibilities for implementing requirements to monitor risks to the financial system. FSOC has issued rules that improve the transparency of its processes and has issued statutorily mandated reports, but it has not established processes to help ensure that these will have their intended results. While FSOC has issued rules on processes for designating nonbank financial entities for additional oversight and intends to review certain aspects of those rules, it has not developed plans for comprehensively evaluating whether designations are having their intended impact of reducing threats to financial stability. The impact of the designations on the economy and the financial entities will depend, in part, on a number of rules being issued by independent FSOC member agencies that will be applied to those being designated. Without a comprehensive assessment of the impact of these rules, which will require the cooperation of individual FSOC members, understanding whether the designations are having their intended impact will be difficult. GAO recommends that FSOC develop a comprehensive framework for assessing the impact of its designation decisions. In addition, FSOC has not developed a systematic forward-looking process for identifying potential emerging threats in its mandated annual reporting process. In particular, FSOC does not have processes for consistently identifying such threats, separating them from more current threats, or prioritizing them. Identifying a large number of threats (the 2011 report identified over 30) without prioritizing them makes it difficult for decision makers to focus on those that are most important. The 2012 report also included many threats, and neither report separates current threats from those that are potentially emerging. 
To improve FSOC's annual reporting on potential emerging threats, GAO recommends that FSOC develop more systematic approaches that are forward-looking and help to prioritize the threats. GAO makes 10 recommendations to strengthen the accountability and transparency of FSOC and OFR's decisions and activities as well as to enhance collaboration among FSOC members and with external stakeholders. Treasury, responding on behalf of the Chairperson, said that the council and OFR would consider the recommendations but questioned the need for FSOC and OFR to clarify responsibilities for monitoring threats to financial stability and stated that OFR expects to share some risk indicators. However, stronger and more systematic actions are still needed in these areas. |
In general, drug abuse is defined by the level and pattern of drug consumption and the severity of resulting functional problems. People who are dependent on drugs often use multiple drugs and have substantial health and social problems, including mental health disorders. One of the many challenges to providing effective treatment for addiction is the complicated nature of the disorder. Unlike other chronic diseases, drug addiction extends beyond physiological influence to include significant behavioral and psychological aspects. For example, specific environmental cues that a drug abuser associates with drug use can trigger craving and precipitate relapse, even after long periods of abstinence. Therefore, drug abusers may enter treatment a number of times, often reducing drug use incrementally with each treatment episode. Despite the potential for relapse to drug use, not all drug users require treatment to discontinue use. For those who require treatment, services are provided in either outpatient or inpatient settings and via two major approaches—pharmacotherapy and behavioral therapy—with many programs combining elements of both. Although abstinence from illicit drug use is the central goal of all drug abuse treatment, researchers and program staff commonly accept reductions in drug use and criminal behavior as realistic, interim goals. Much of the federal funding for treatment flows through grants to states and localities; these funds support services provided by state and local grantees, which are given broad discretion in how best to use them. In numerous large-scale studies examining the outcomes of drug abuse treatment provided in a variety of settings, researchers have concluded that treatment is beneficial. Clients receiving treatment report reductions in drug use and criminal activity as well as other positive outcomes. The studies have also demonstrated that better treatment outcomes are associated with longer treatment periods but have found that retaining clients in treatment programs is problematic. Comprehensive analyses of the effectiveness of drug abuse treatment have been conducted by several major, federally funded studies over a period of nearly 30 years: the Drug Abuse Treatment Outcome Study (DATOS), the National Treatment Improvement Evaluation Study (NTIES), the Treatment Outcome Prospective Study (TOPS), and the Drug Abuse Reporting Program (DARP). These large, multisite studies—conducted by research organizations independent of the groups operating the treatment programs being assessed—were designed to measure people's involvement in illicit drug and criminal activity before, during, and after treatment. Although the studies report on reductions in drug use from the year prior to treatment to the year after, most also track a subset of treatment clients for followup interviews over longer time periods. For example, DARP followed clients for as long as 12 years, TOPS for 3 to 5 years following treatment, and DATOS researchers are planning additional followup to determine long-term outcomes. These studies are generally considered by the research community to be the major evaluations of drug abuse treatment effectiveness, and much of what is known about "typical" drug abuse treatment outcomes comes from these studies. Across the studies, clients reported reductions in drug use after treatment in each of the major treatment settings (long-term residential, outpatient drug-free, or outpatient methadone maintenance), regardless of the drug and client type. DATOS found that, of the individuals in long-term residential treatment, 66 percent reported weekly or more frequent cocaine use in the year prior to treatment, while 22 percent reported regular cocaine use in the year following treatment. 
Also, 41 percent of this same group reported engaging in predatory illegal activity in the year prior to treatment, while 16 percent reported such activity in the year after treatment. Previous studies found similar reductions in drug use and criminal activity. For example, researchers from the 1980s TOPS study found that across all types of drug abuse treatment, 40 to 50 percent of regular heroin and cocaine users who spent at least 3 months in treatment reported near abstinence during the year after treatment, and an additional 30 percent reported reducing their use. Only 17 percent of NTIES clients reported arrests in the year following treatment—down from 48 percent during the year before treatment. Another finding across these studies is that clients who stay in treatment longer report better outcomes. For the DATOS clients that reported drug use when entering treatment, fewer of those in treatment for more than 3 months reported continuing drug use than those in treatment for less than 3 months. DATOS researchers also found that the most positive outcomes for clients in methadone maintenance were for those who remained in treatment for at least 12 months. Earlier studies reported similar results. Both DARP and TOPS found that reports of drug use were reduced most for clients who stayed in treatment at least 3 months, regardless of the treatment setting. For long-term residential and outpatient drug-free programs, researchers recommended at least 6 months in treatment; for both program types, the median treatment episode was 3 months. Because all of the effectiveness studies relied on information reported by the clients, the level of treatment benefit reported may be overstated. Typically, drug abusers were interviewed before they entered treatment and again following treatment and asked about their use of illicit drugs, their involvement in criminal activity, and other drug-related behaviors. Although this data collection method is commonly used in national surveys and drug abuse treatment evaluations, recent questions about the validity of self-reported drug use raise concerns about this approach. In general, self-reporting is least valid for (1) the more stigmatized drugs, such as cocaine; (2) recent use; and (3) those involved with the criminal justice system. A recent National Institute on Drug Abuse (NIDA) review of current research on the validity of self-reported drug use highlights the limitations of data collected in this manner. According to this review, recent studies conducted with criminal justice clients (such as people on parole, on probation, or awaiting trial) and former treatment clients suggest that 50 percent or fewer current users accurately report their drug use in confidential interviews. As questions have developed about the accuracy of self-reported data, researchers have begun using more objective means, such as urinalysis, to validate such data. For example, NTIES researchers found that 20 percent of those in a validation group acknowledged cocaine use within the past 30 days, but urinalysis revealed recent cocaine use by 29 percent. TOPS researchers reported that only 40 percent of the individuals testing positive for cocaine 24 months after treatment had reported using the drug in the previous 3 days. Because results from the major studies of treatment effectiveness were not adjusted for the likelihood of underreported drug use, the reductions in drug use that the studies found may be overstated. 
However, researchers emphasize that client reporting on use of illicit drugs during the previous year (the outcome measure used in most effectiveness evaluations) has been shown to be more accurate than client reporting on current drug use (the measure used to assess the validity of self-reported data). Therefore, they believe that the overall findings of treatment benefits are still valid. Although supplementary data collection, such as hair analysis or urinalysis, can help validate the accuracy of self-reported data, these tools also have limitations. Urine tests can accurately detect illicit drugs for about 48 hours following drug use but do not provide any information about drug use during the previous year. In addition, individual differences in metabolism rates can affect the outcomes of urinalysis tests. Hair analysis has received attention because it can detect drug use over a longer time—up to several months. However, unresolved issues in hair testing include variability across drugs in the accuracy of detection, the potential for passive contamination, and the relative effect of different hair color or type on cocaine accumulation in the hair. We have reported on the limitations of using self-reported data in estimating the prevalence of drug use and concluded that hair testing merited further evaluation as a means of confirming self-reported drug use. Using federal treatment dollars most effectively requires an understanding of which approaches work best for different groups of drug abusers, but on this subject, research findings are less definitive. Although strong evidence supports methadone maintenance as the most effective treatment for heroin addiction, less is known about the best ways to provide treatment services to cocaine users or adolescents. Moreover, patient characteristics, such as the severity of addiction at entry into treatment or psychiatric status, can significantly affect the patient's performance in treatment. Current research generally does not account for these factors in evaluating the effectiveness of alternative approaches for specific groups of drug abusers. Methadone maintenance is the most commonly used treatment for heroin addiction, and numerous studies have shown that those receiving methadone maintenance treatment have better outcomes than those who go untreated or use other treatment approaches. Methadone maintenance reduces heroin use and criminal activity and improves social functioning. HIV risk is also minimized, since needle usage is reduced. As we have previously reported, outcomes among methadone programs have varied greatly, in part because of the substantial differences in treatment practices across the nation. For example, in 1990, we found that many methadone clinics routinely provided clients dosage levels that were lower than optimum—or even subthreshold—and discontinued treatment too soon. In late 1997, a National Institutes of Health consensus panel concluded that people who are addicted to heroin or other opiates should have broader access to methadone maintenance treatment programs and recommended that federal regulations allow additional physicians and pharmacies to prescribe and dispense methadone. Similarly, several studies conducted over the past decade show that when counseling, psychotherapy, health care, and social services are provided along with methadone maintenance, treatment outcomes improve significantly. 
However, the recent findings from DATOS suggest that the provision of these ancillary services—both the number and variety—has eroded considerably during the past 2 decades across all treatment settings. DATOS researchers also noted that the percentage of clients reporting unmet needs was higher than the percentage in previous studies. In the absence of an effective medication, researchers have relied on cognitive-behavioral therapies to treat cocaine addiction. Studies have shown that clients receiving cognitive-behavioral therapy have achieved long periods of abstinence and have been successful at staying in treatment. The cognitive-behavioral therapies are based largely on counseling and education. One approach, relapse prevention, focuses on teaching clients how to identify and manage high-risk, or "trigger," situations that contribute to drug relapse. A study of this approach showed cocaine-dependent clients were able to remain abstinent at least 70 percent of the time while in treatment. Another technique, community reinforcement/contingency management, establishes a link between behavior and consequence by rewarding abstinence and reprimanding drug use. A program using this approach found that 42 percent of the participating cocaine-dependent clients were able to achieve nearly 4 months of continuous abstinence. A third approach, neurobehavioral therapy, addresses a client's behavioral, emotional, cognitive, and relational problems at each stage of recovery. One neurobehavioral program showed that 38 percent of the clients were abstinent at the 6-month followup. Drug use among teenagers is a growing concern. It is estimated that 9 percent of teenagers were current drug users in 1996—up from 5.3 percent in 1992. Unfortunately, no one method has been shown to be consistently superior to others in achieving better treatment outcomes for this group. Rather, studies show that success in treatment for adolescents seems to be linked to the characteristics of program staff, the availability of special services, and family participation. Family-related factors, such as poor parent supervision, have been identified as risk factors for the development of substance abuse among adolescents. However, NIDA acknowledged in a recently published article that further research is needed to identify the best approach to treating adolescent drug abusers. Similarly, the American Academy of Child and Adolescent Psychiatry acknowledged in its 1997 treatment practice parameters that research on drug abuse treatment for adolescents has failed to demonstrate the superiority of one treatment approach over another. With an annual expenditure of more than $3 billion—20 percent of the federal drug control budget—the federal government provides significant support for drug abuse treatment activities. Monitoring the performance of treatment programs can help ensure that we are making progress toward achieving the nation's drug control goals. Research on the effectiveness of drug abuse treatment, however, is problematic given the methodological challenges and numerous factors that influence the results of treatment. Although studies conducted over nearly 3 decades consistently show that treatment reduces drug use and crime, current data collection techniques do not allow accurate measurement of the extent to which treatment reduces the use of illicit drugs. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you and other members of the Subcommittee may have. 
| Pursuant to a congressional request, GAO discussed its recent report on drug abuse treatment research findings, focusing on: (1) the overall effectiveness of drug abuse treatment; (2) the methodological issues affecting drug abuse treatment evaluations; and (3) what is known about the effectiveness of specific treatments for heroin, cocaine, and adolescent drug addiction. GAO noted that: (1) it found that large, multisite, longitudinal studies have produced considerable evidence that drug abuse treatment is beneficial to the individual undergoing treatment and to society; (2) the studies have consistently found that a substantial proportion of clients being studied report reductions in drug use and criminal activity following treatment; (3) the studies also show that clients who stay in treatment for longer periods report better outcomes; (4) however, drug abuse treatment research is complicated by a number of methodological challenges that make it difficult to accurately measure the extent to which treatment reduces drug use; (5) in particular, growing concerns about the validity of self-reported data, which are used routinely in the major evaluations of drug abuse treatment, suggest that the treatment benefit reported by these studies may be somewhat overstated; (6) in addition, the research evidence to support the relative effectiveness of specific treatment approaches or settings for particular groups of drug abusers is limited; and (7) while one specific treatment approach--methadone maintenance--has been shown to be the most effective treatment for heroin addiction, research on the best treatment approach or setting for cocaine addiction or adolescent drug users is less definitive. |
VBA’s Compensation and Pension Service, located at VA headquarters, formulates the policy and guidance used by the RO staff who receive, develop, and evaluate veterans’ compensation and pension claims. The compensation program pays monthly benefits to veterans with service-connected disabilities (injuries or diseases incurred or aggravated while on active military duty). Veterans with service-connected disabilities are entitled to compensation benefits even if they are working and regardless of the amount they earn. By contrast, the pension program pays monthly benefits to wartime veterans who have low incomes and are permanently and totally disabled for reasons not connected to their service. In compensation cases, the payment amount varies according to degree of disability; in pension cases, the amount varies according to financial need. When veterans are unable to manage their affairs, benefit payments are made to guardians who serve as their fiduciary representatives. Adjudicating an original disability claim involves two basic functions—“authorization” and “disability rating.” Authorization involves obtaining records from the military services and information from the veterans, such as medical records and information on income and dependents. Disability rating involves establishing whether a veteran’s impairment is service-connected and, if so, evaluating the veteran’s degree of disability. VBA considers claims requiring a disability rating to be the core workload of the compensation and pension program, and as a group, cases requiring a disability rating are considered to be the most error-prone in the program. In order to rate (or evaluate) a veteran’s disability, ROs often determine that they need medical evidence in addition to evidence obtained from the veteran’s physicians and other medical providers. In such cases, they send veterans to the Veterans Health Administration (VHA) for physical or mental examinations by VHA physicians. From the medical evidence, ROs rate a veteran’s disability using VA’s Schedule for Rating Disabilities, which lists physical and mental conditions and assigns a disability rating to each condition. Under this schedule, the degree of disability is expressed in 10-percent increments up to 100-percent disability. A veteran can also receive a “zero-percent” disability rating, which means the veteran’s condition is service-connected but not severe enough to qualify for compensation payments on the basis of the medical criteria specified in the rating schedule. If the veteran’s condition later worsens, he or she may reapply, asking VA to increase the rating from zero to 10 percent or more. Evaluating the degree of disability for some conditions, such as mental impairments, can require adjudicators to make subjective judgments that are not always clear-cut. For veterans with multiple impairments, the RO must rate each impairment separately and then combine them into a composite rating. After a veteran is placed on the rolls, his or her condition or circumstances may change in ways that can result in adjustments to the RO’s initial decision. For example, a veteran may file a claim for an increase in degree of disability if his or her medical condition deteriorates. Or nonmedical issues may arise that require an adjustment to the initial decision but do not require a disability rating in order to make the new decision. Such cases could arise from changes in the status of the veteran’s dependents or changes in the income of a veteran receiving pension benefits. 
After the RO notifies the veteran of its decision, the veteran, if dissatisfied, may ask for a hearing before an RO hearing officer. The veteran also may file a notice of disagreement with the RO and then file an appeal asking for a review of the RO’s decision by the Board of Veterans’ Appeals, which makes VA’s final decisions on appeals on behalf of the Secretary. The Board may conduct a hearing if the veteran requests one. In deciding appeals, the Board can grant benefits (reverse the RO decision), deny benefits (affirm the RO decision), or remand (or return) the case to the RO to develop further evidence and reconsider the claim. After further development of a remanded claim, the RO either awards the claim or returns it to the Board for a decision. Before 1989, the Board’s decisions on appeals were final. In that year, however, the Court of Veterans Appeals—established by the Veterans’ Judicial Review Act of 1988 (P.L. 100-687, Nov. 18, 1988)—began to hear cases. As a result, the Board is no longer the final step in the claims adjudication process. When a veteran disagrees with a decision of the Board, the veteran may now appeal to the Court, which is independent of VA. Additionally, either veterans or VA may appeal decisions of the Court of Veterans Appeals to the Court of Appeals for the Federal Circuit. Since veterans began appealing Board decisions to the Court of Veterans Appeals, according to a court official, the Court has remanded more than 4,500 decisions back to the Board for further development and reconsideration. According to the same official, this represents about 59 percent of the Board’s decisions that were appealed to the Court, excluding dismissed cases. In turn, ROs have felt the repercussions of these Court decisions as evidenced by significant increases in the Board’s reversals and remands of appealed RO decisions. Before the advent of the Court, the Board historically had annually awarded benefits in 12 to 14 percent of appealed RO decisions and had annually remanded another 12 to 24 percent back to ROs for further development. However, in the years since the advent of the Court, the Board has annually awarded benefits in about 14 to 20 percent of the cases it reviewed and remanded another 31 to 51 percent back to ROs for further development. Despite these increases in awards and remands by the Board, VBA had continued to report—until STAR was implemented—that ROs were accurately processing compensation and pension claims more than 95 percent of the time. (See app. I for more details on the reversal and remand rates of the Court and the Board and on the accuracy rates reported by VBA.) VBA considers a disability claim to have been accurately processed if basic eligibility has been determined correctly, the case file contains all required medical and nonmedical documentary evidence, the RO’s decision on service-connection and the rating given to each impairment are correct, the payment amount is correct, and the RO properly notified the veteran of the outcome of his or her claim. Under the accuracy measurement system that was in operation from fiscal year 1992 through fiscal year 1997, VBA headquarters annually reviewed approximately 100 cases randomly selected from the cases completed by each of 57 ROs. These cases were selected from the entire universe of compensation and pension work products completed by the ROs. Using this procedure, VBA produced a national accuracy rate with a reasonable level of statistical precision. 
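A rough sense of the statistical precision such sampling provides can be obtained from the standard normal-approximation arithmetic for a proportion. The sketch below is a simplified illustration only: it assumes simple random sampling at a 95-percent confidence level, uses the roughly 5,700-case national sample and 100-case RO samples described above, and ignores any finite-population corrections or stratification in VBA's actual methodology.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error (as a proportion) for an estimated
    accuracy rate p based on a simple random sample of n cases,
    at roughly 95-percent confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# With about 5,700 cases nationally, a 95-percent accuracy estimate is
# precise to well under 1 percentage point; with only about 100 cases
# from a single regional office, the same estimate is far less precise.
print(round(margin_of_error(0.95, 5700), 3))  # ~0.006 (about +/- 0.6 points)
print(round(margin_of_error(0.95, 100), 3))   # ~0.043 (about +/- 4.3 points)
```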
While each year’s sample was too small for VBA to produce accuracy rates for each RO with a reasonable level of statistical precision, VBA required each RO to self-review 300 to 900 cases annually, depending on the size of the RO. These RO self-reviews were to provide ROs with information needed to improve quality, not to compute accuracy rates for measuring performance. Statistical precision refers to the amount of uncertainty in an estimate that results from sampling variability at a given level of confidence. For example, if a sample that has a 95-percent confidence level and a precision level of plus or minus 5 percentage points yields an estimated accuracy rate of 70 percent, this means that one can be 95-percent confident that the true accuracy rate is between 65 percent and 75 percent.

VBA has organized its ROs into nine service delivery networks (SDN), and each SDN has accountability for the overall performance of all work assigned to it. In meeting the requirements of the Results Act, VBA headquarters will measure each SDN’s performance, and each SDN will assess the performance of its ROs. This measurement will be made on the basis of five performance factors: claims-processing accuracy (as determined by STAR), timeliness of claims processing, unit cost, customer satisfaction, and employee satisfaction and development.

The new STAR system represents an important step forward by VBA in measuring the accuracy of compensation and pension claims processing and in providing data to identify error-prone cases and correct the causes of errors, including those that result in reversals and remands by the Board of Veterans’ Appeals. Compared with the previous accuracy measurement system that VBA had been using since 1992, STAR focuses more on RO decisions that are likely to contain claims-processing errors, uses a stricter method for computing accuracy rates, provides more data on the performance of additional organizational levels within VBA, collects more data on errors, and stores the results of more accuracy reviews in a centralized database for further review and analysis.

Whereas VBA had been reporting more than 95-percent accuracy under the previous accuracy measurement system, its pilot test of STAR found that only 64 percent of veterans’ claims were processed accurately. A primary reason for this difference is that the pilot test focused only on the most complex and more error-prone RO work products, those involving disability rating decisions. By contrast, the previous system drew its sample of cases from the entire universe of RO work products, including products not requiring an assessment of disability, which are therefore less error-prone. The newly implemented STAR system continues to focus on claims that involve disability ratings, but it also includes a sample of cases that address issues typically not requiring disability ratings and a separate sample of cases involving guardianship issues for veterans unable to represent themselves. Separate accuracy rates are computed for each of these two other samples.

Another reason that the STAR pilot test found an accuracy rate of 64 percent rather than 95 percent as reported under the previous system is STAR’s stricter accuracy rate computation method. Under the previous system, VBA categorized each error into one of three areas of the claims adjudication process: (1) case control and development, (2) decision elements, and (3) notification to the veteran.
Thus, if a case had one error, VBA would record this error under the appropriate area and show the two other areas as error-free. After reviewing all cases, VBA computed separate accuracy rates for each of the three claims adjudication areas and then determined an overall accuracy rate by calculating the arithmetic mean (or average) of the three accuracy rates. Under STAR, however, VBA does not compute separate accuracy rates for the three areas of the claims adjudication process. If a case has any errors in any area of the claims adjudication process, the entire case is counted as incorrect for accuracy rate computation purposes. This approach tends to result in a lower accuracy rate than under the previous system. (See app. II for a hypothetical example demonstrating how STAR’s computation method can result in a lower accuracy rate.) In addition to focusing more on error-prone RO decisions and using a stricter accuracy rate computation method, STAR provides accuracy rates with reasonable statistical precision not only for the nation as a whole but also for each SDN. Under the previous system, VBA headquarters had reviewed about 5,700 cases annually. Its sampling methodology allowed VBA to produce an accuracy rate with reasonable statistical precision for the nation as a whole. Under STAR, VBA headquarters will review about 7,400 cases annually. Its sampling methodology will enable VBA to provide accuracy rates with reasonable statistical precision for the nation and each SDN for the sample of cases requiring disability ratings and the sample of cases typically not requiring such ratings (see app. II for SDN sample sizes and statistical precision data). However, the sample of cases involving guardianship issues will be too small to provide the same level of statistical precision. VBA originally considered designing STAR so that VBA headquarters also could produce accuracy rates for each RO but dropped this option because it would have required VBA headquarters to review an additional 50,000 cases annually. Instead, VBA opted to require each RO to review samples of its own work products using STAR review procedures. As in the headquarters review, these RO self-reviews will produce accuracy rates with reasonable statistical precision for the sample of cases requiring disability ratings and the sample of cases typically not requiring such ratings. However, the sample of cases involving guardianship issues will be too small to produce accuracy rates with the same level of statistical precision. Nationwide, the ROs will review about 44,000 randomly selected cases (see app. II for RO sample sizes and statistical precision data). VBA estimates that every 1,000 cases in these samples require about 1.0 full-time equivalent review staff per year. STAR is also an improvement over the previous accuracy measurement system because it provides more precise information on the inaccuracies it identifies. Under the previous system, VBA’s database essentially captured only whether a decision did or did not contain errors. By contrast, STAR requires reviewers to answer a standardized series of questions about whether the RO’s actions and decisions were correct or incorrect in various steps of claims processing. The reviewers enter their answers to these questions, along with brief narrative comments, in the STAR database. 
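To make the difference between the two computation methods described above concrete, the following sketch applies both methods to the hypothetical 10-case sample discussed in appendix II, in which four area-level errors are spread across three cases. The data layout, function names, and the particular arrangement of errors across cases are illustrative assumptions for this example only; they do not represent VBA’s actual review software or data formats.

```python
# Illustrative comparison of the two accuracy-rate computations, using the
# hypothetical 10-case sample from appendix II. The data layout and function
# names are assumptions for illustration; this is not VBA's review software.

# Each case records whether an error was found in each of the three claims
# adjudication areas. One possible arrangement consistent with appendix II:
# four errors spread across three cases.
cases = [
    {"control_development": True,  "decision": False, "notification": True},
    {"control_development": False, "decision": True,  "notification": False},
    {"control_development": False, "decision": True,  "notification": False},
] + [
    {"control_development": False, "decision": False, "notification": False}
    for _ in range(7)  # seven error-free cases
]

def previous_system_rate(cases):
    """Average of the three area-level accuracy rates (pre-STAR method)."""
    areas = ["control_development", "decision", "notification"]
    area_rates = [sum(not c[area] for c in cases) / len(cases) for area in areas]
    return sum(area_rates) / len(area_rates)

def star_rate(cases):
    """Share of cases with no error in any area (STAR method)."""
    return sum(not any(c.values()) for c in cases) / len(cases)

print(f"Previous method: {previous_system_rate(cases):.1%}")  # about 86.7%
print(f"STAR method:     {star_rate(cases):.1%}")             # 70.0%
```

The averaged area rates work out to roughly 86.7 percent (reported as 86.6 percent in appendix II), while the case-level method yields 70 percent. Because a case is counted as incorrect under STAR if it has an error in any area, the case-level rate can never be higher than the average of the three area-level rates.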
In addition, because the need for further development of evidence is a primary reason that the Board of Veterans’ Appeals remands many cases to ROs, STAR asks reviewers to identify deficient evidence categories, such as private medical evidence, VA medical center records, and service records. Also, because the Board remands many cases to ROs to obtain further medical examinations by VHA physicians, STAR asks reviewers to indicate whether deficiencies in medical evidence supporting the decision relate to VHA medical examinations. These data on deficiencies in evidence are entered in the STAR database. The database also identifies cases involving five special conditions that have medical implications: prisoner of war, radiation exposure, Gulf War veteran, Agent Orange exposure, and posttraumatic stress syndrome. Additionally, STAR’s database captures the results of accuracy reviews conducted by both VBA headquarters and the ROs, whereas under the previous system, VBA’s database captured only the results of accuracy reviews conducted by VBA headquarters. VBA planned to implement in February 1999 a new centralized database on its internal network (intranet) system that will permit both VBA headquarters and the ROs to input the results of all STAR reviews into the database. Capturing RO data will enrich the data available to analyze error trends, and both VBA headquarters and the ROs will have access to the full complement of data through the intranet. Although STAR represents a significant step forward in VBA’s ability to measure claims-processing accuracy and identify error-prone cases, VBA can take further steps to improve this ability. These steps involve collecting additional data for identifying and correcting error-prone cases and addressing vulnerabilities in the integrity of accuracy reviews. Even with the improvements provided by STAR, VBA’s ability to identify error-prone cases and target corrective actions is constrained by the limited data that it captures on the medical characteristics of claimants whose claims are processed incorrectly and on why medical evidence is deficient. Data captured on claimants’ medical characteristics is currently limited to identifying whether a veteran was a prisoner of war, served in the Gulf War, or had posttraumatic stress syndrome, radiation exposure, or Agent Orange exposure. More detailed medical characteristics data could help pinpoint the specific types of claims in which errors occur. Also, although STAR captures data on whether medical evidence and medical examinations are adequate, it does not record statistical data identifying why reviewers found the evidence or examinations supporting RO final decisions to be deficient. Such data also could help pinpoint the types of corrective actions that need to be taken to improve the accuracy of RO decisions. Limited studies by VBA demonstrate how capturing additional data in the STAR database on medical issues could help VBA focus on corrective actions that can reduce claims-processing errors and in turn reduce remands from the Board of Veterans’ Appeals. In 1996, VBA and the Board of Veterans’ Appeals jointly conducted a limited study of remanded cases and reported that inadequate medical examinations were the most frequent reason for remands and that a majority of the remanded cases involved the need for specialty examinations, such as orthopedic, psychiatric, neurologic, audiologic, and ear-nose-throat examinations. 
Also, in 1996, the Milwaukee RO reviewed claims that were awarded by the RO’s hearing officers after the claims were initially denied. Of the cases in which the RO’s hearing officers reversed the initial decision, the Milwaukee RO captured data on the specific conditions, such as orthopedic impairments, that were involved in significant numbers of cases, and using such data, the RO identified specific corrective actions. According to Milwaukee RO officials, this helped reduce the RO’s remand rate from the Board of Veterans’ Appeals. From fiscal year 1995 to fiscal year 1998, the Milwaukee RO reduced its remand rate from about 40 percent to about 21 percent, one of the lowest remand rates in the nation. SSA, which administers the largest federal disability program, has a quality assurance system that captures detailed data on claimants’ medical characteristics and on weaknesses in evidence. SSA has found that such data are helpful in identifying error-prone cases and targeting corrective actions. For each case reviewed, SSA’s system captures data on the specific body systems involved, such as musculoskeletal, respiratory, cardiovascular, and mental systems. Further, using codes from the International Classification of Diseases, SSA’s system identifies each claimant’s specific impairments. Additionally, when medical evidence is judged not adequate, SSA’s system records the specific medical specialty area in which evidence was lacking, such as orthopedic, psychiatric, and neurologic areas, and it identifies the specific type of test, study, or other medical evidence that was lacking. Such data, according to an SSA quality assurance official, not only helps to identify error-prone cases but can pinpoint specific evidentiary weaknesses for cases involving specific body systems or impairments. Also, this official stated that spending resources up front to capture such data can reduce the need to conduct time-consuming special studies later to understand why certain types of cases are being processed incorrectly. According to the SSA quality assurance unit, the depth of the data collected from quality assurance reviews also enables it to assess the implementation of new or revised policies, perform analyses and make recommendations for operational and systems corrective actions, and provide broad levels of management information, such as information by categories of impairments. VBA agrees that the STAR system deployed at the beginning of fiscal year 1999 provides a sound start for beginning to address claims-processing accuracy issues. VBA officials acknowledge, however, that they realized when STAR was deployed that continuous improvement should be sought to enhance its effectiveness. These VBA officials stated that VBA is open to considering the collection of additional data in order to enhance STAR. To ensure integrity in the operation of government programs, standards for internal controls call for separation of key duties, and standards for performance audits call for those who review and evaluate a program’s performance to be organizationally independent of the program’s managers. Under STAR, however, the RO staff who review the accuracy of RO decisions are themselves responsible for making such decisions, and they report to RO managers responsible for claims processing. Such an arrangement does not meet the standard for separation of duties, nor does it meet the independence standard. 
Both the RO reviewers and their managers have an inherent self-interest in having as high an accuracy rate as possible. This self-interest derives from the fact that accuracy is one of five factors that determine RO performance scores, which VBA measures to meet Results Act requirements. Thus, without adequate separation of duties or adequate independence for RO reviewers, the integrity of both the STAR review process and the resulting accuracy rates and performance data reported under the Results Act are called into question. The potential effect of impaired objectivity on performance data is exemplified by findings reported by VA’s Inspector General in September 1998. Because of concern about the accuracy of data used to meet Results Act requirements, the Inspector General examined the integrity of certain data used for Results Act reports. In this review, the Inspector General found instances in which RO staff had manipulated data on the timeliness of RO claims processing in order to make performance appear to be better than it actually was. The Inspector General found that weaknesses in internal controls had contributed to the lack of integrity in the timeliness data reported under the Results Act. During our review, some RO staff made comments on the integrity of accuracy reviews that parallel the findings of the Inspector General. These RO staff told us that ROs are biased against identifying their own errors. They also stated that ROs in the past, after selecting samples of cases to review, had sometimes “sanitized” or fixed problems in the case files before the cases underwent quality review. No data are available to indicate the extent to which RO reviewers might attempt to overlook errors and sanitize case files to conceal errors in the approximately 44,000 cases that ROs review annually under STAR. However, to the extent that such efforts may occur, the accuracy rates reported by the ROs would be overstated. Furthermore, any attempts by ROs to conceal errors and overstate their accuracy rates could also result in an overstatement of the accuracy rates that VBA reports for SDNs and the nation. This vulnerability in VBA’s data exists because the sample of 7,400 cases that VBA reviews annually is selected directly from the approximately 44,000 cases reviewed by the ROs. VBA reviews its sample of 7,400 cases after the ROs complete their own reviews of these same cases. VBA believes that it can detect most attempts to sanitize case files because such attempts would likely require extensive backdating of corrected case file documents, which VBA believes would be difficult to conceal. VBA acknowledges, however, that it cannot ensure that it would detect every such attempt in the cases it reviews. To the extent that VBA may not detect all such attempts, the accuracy rates it reports for SDNs and the nation would be overstated. Ensuring the integrity of accuracy data will require that staff who review claims-processing accuracy neither are responsible for claims processing nor report to program managers responsible for claims processing. VBA stated that resource restrictions prevent establishing independent accuracy review units either in the ROs or at VA headquarters; however, unless VBA provides adequate separation of duties and organizational independence for accuracy reviewers, potential questions about the integrity of accuracy-related performance data will likely persist. 
By contrast, we found that SSA has quality assurance units at its headquarters and in each of its 10 regional offices that are organizationally independent of program management. The independent quality assurance unit in SSA headquarters has overall responsibility for assessing disability claims-processing accuracy. To do this, it oversees the operation of the independent regional quality assurance units that review the accuracy of statistically random samples of the disability decisions rendered by 54 state agencies that process disability claims for SSA. VBA contends that it would be impractical to establish independent accuracy review units in VBA’s 58 ROs, many of which are relatively small in size. Establishing independent STAR units in ROs would be more practical if only a relatively small number of large ROs processed all compensation and pension claims. Under the present structure, however, a more workable long-term solution could involve establishing an independent headquarters unit responsible for conducting all reviews used to determine the accuracy rates that go into the calculation of overall performance scores for VBA headquarters, SDNs, and ROs. VBA has set a goal of achieving a claims-processing accuracy rate of 93 percent by fiscal year 2004. This would be almost 30 percentage points higher than the baseline rate of 64 percent established in the 1998 pilot test of STAR. VBA acknowledges, however, that the STAR system on its own cannot ensure that VBA will meet its accuracy goal. Beyond any improvements that VBA might make in the STAR system, VBA acknowledges that there are challenges it must address successfully in order to meet its goal for improving accuracy. These challenges include effectively establishing accountability for accuracy improvement and developing an effective training program for the current and future workforce. In May 1998, VBA identified several root causes of quality problems in processing disability compensation and pension claims. One such cause was a lack of employee accountability. VBA plans to focus on quality and accountability with a quality assurance system that provides clear and fair accountability at all organizational levels. To accomplish this goal, VBA is implementing the “balanced scorecard” that scores the performance of VBA headquarters, SDNs, and ROs on the basis of five performance factors: claims-processing accuracy (as determined by STAR), timeliness of claims processing, unit cost, customer satisfaction, and employee satisfaction and development. With the goal of achieving a 93-percent accuracy rate by fiscal year 2004, VBA believes its balanced scorecard approach will, among other things, drive organizational change, provide employees with feedback on measures they can influence, and link the performance appraisal and reward systems to organizational performance measures. VBA plans to use the balanced scorecard to give RO managers incentives to work as teams in their SDNs with a focus on meeting balanced scorecard performance measures, including accuracy. The extent to which this strategy will improve accountability and accuracy cannot yet be determined. In our discussions with RO staff, many stated that VBA had not provided adequate training for claims adjudicators. They stated, for example, that there was confusion in the ROs on how to process cases because of apparent conflicts between decisions of the Court of Veterans Appeals and VA’s regulations and guidance. 
They also stated that too much of their training was determined locally, resulting in inconsistent training among the ROs. VBA acknowledged shortcomings in training and stated that it had not fared well in preparing its workforce, with a resultant decline in technical accuracy. VBA acknowledged the need for an effective, centralized, and comprehensive training program that provides the background necessary for its decisionmakers to render decisions according to the statutes and regulations mandated for claims adjudication. Such training is important not only for current employees but also for the many new employees whom VBA will have to hire to replace retiring employees. According to VBA, it may lose up to 30 percent of its workforce to retirement by fiscal year 2003. To develop a training program for RO staff, VBA plans to identify the necessary employee skills and work processes for every decisionmaking position, implement skill certification or credentialing for these positions, and implement performance-based training connected to measurable outcomes. VBA has already developed a computer-based training module for processing appeals and is working on modules for original disability claims, service-connected death indemnity benefits, and pensions. VBA also plans to produce additional modules, including one for training RO staff when they first assume disability rating responsibilities. Whether these training efforts will enable VBA to meet its accuracy goal cannot yet be determined. Although VBA had been reporting until recently that ROs were processing claims accurately more than 95 percent of the time, the STAR pilot test in fiscal year 1998 revealed that the accuracy rate for decisions involving disability ratings was much lower, about 64 percent. This confirmed that VBA needs to give more attention to ensuring that ROs make the correct decision the first time. Making the correct decision the first time would mean that veterans could avoid having to make unnecessary appeals and would not be unnecessarily delayed in receiving benefits owed them. Although the new STAR system represents genuine improvement in VBA’s ability to measure accuracy and identify error-prone cases, VBA needs to make further progress in collecting data for identifying difficult cases, assessing adjudication difficulties, and developing corrective actions. Despite its newly implemented STAR system, without further refinements in the data collected on errors, significant inaccuracies are likely to persist because VBA is constrained in its ability to pinpoint error-prone cases and identify corrective actions. Moreover, the data produced from STAR reviews will be suspect because of weaknesses in internal controls and lack of adherence to performance audit standards. We believe this can potentially undermine progress made under STAR. To further strengthen VBA’s ability to identify error-prone cases, ensure the integrity of accuracy rate-related performance data reported under the Results Act, and keep the Congress informed about VBA’s progress in addressing challenges that must be met in order to improve accuracy, we recommend that the Secretary of the Department of Veterans Affairs direct the Under Secretary for Benefits to take the following actions. 
For RO disability decisions found to be in error, revise STAR to collect more detailed medical characteristics data, such as the human body systems, the specific impairments, and the specific deficiencies in medical evidence involved in these disability claims, so that VA can identify and focus corrective actions on specific problems that RO adjudicators have in correctly evaluating certain types of medical conditions or in correctly determining whether medical evidence is adequate to make a decision. Implement a claims-processing accuracy review function that meets the government’s internal control standard on separation of duties and the program performance audit standard on organizational independence. In the annual Results Act reports, inform the Congress on VBA’s progress in (1) establishing stricter employee accountability for the achievement of performance goals and (2) developing more effective training for claims adjudicators. In commenting on our draft report, VA stated that it found the report to be a fair and balanced appraisal. VA concurred that its process for assessing claims accuracy is critical and stated that continued urgent action is required for VA to meet its own and its stakeholders’ expectations. VA stated that our recommendations were generally constructive but had concern about our first two recommendations. The first recommendation in our draft report was that VA “revise STAR to include the collection of more detailed medical characteristics data on the human body systems, and specific impairments involved in disability claims as well as data on specific deficiencies in medical evidence and examinations.” VA interpreted our recommendation to mean that STAR should collect data on the quality of examinations conducted by VHA. However, this was not the intent of our recommendation. The intent was for STAR to collect additional data that would help VA better identify (1) specific types of medical conditions that RO adjudicators have difficulty evaluating correctly and (2) specific types of inadequacies in medical evidence that are most prevalent in incorrect decisions. This would provide a means for VA to develop corrective actions addressing the causes of errors in the evaluation of medical conditions and of failure to collect adequate medical evidence to make a supportable decision. We clarified the recommendation and our discussion of this issue in our report. The second recommendation in our draft report was that VA “implement a claim processing accuracy review function that meets the government’s internal control standard on separation of duties and the program performance audit standard on organizational independence.” VA’s primary concern about this recommendation was that current budget constraints make it impractical to adopt approaches that would fully satisfy these standards—for example, establishing a single, large centralized review unit to assess all quality issues, including individual RO quality. However, while current budget constraints may present problems in finding ways to fully meet the standards immediately, we believe meeting these standards as expeditiously as possible should be a continuing priority in VA’s future planning process. Until the standards are met, the integrity of VA’s claims-processing accuracy data will remain questionable. 
As VA stated in its comments, “Effective reviews require an organizational commitment to dedicate the necessary resources to the review process.” With regard to the second recommendation, VA also stated that while the STAR system is a compromise reflecting resource constraints, it has some distinct advantages compared with quality reviews performed by a consolidated, independent review unit. VA cited the value of having reviews performed by local staff in each RO. Our recommendation would not preclude local reviews, which we agree are important. Even if a single, central unit were established for the purpose of assessing the degree to which each RO processes claims accurately, it would still be critical for local RO management to gather detailed local data on claims processing to understand fully how to correct local processing problems. This function, however, is different from local reviewers conducting accuracy reviews of their own RO’s decisions, which our recommendation is intended to eliminate. VA also stated that it is concerned that a “permanent” independent review staff would become stagnant. We disagree because the staff who perform reviews would not have to be permanently assigned to the unit but could instead be rotated to avert staff stagnation. VA furthermore expressed concern about the cost and increased potential for losing active case files that would result from mailing many more thousands of case files from the 58 ROs to a central review site. This concern, however, does not negate the need to meet the standards for separation of duties and organizational independence. Also, the concern could potentially be lessened by other measures. For example, the Congressional Commission on Servicemembers and Veterans Transition Assistance in its January 1999 report applauded VBA for consolidating the administration of its education and loan programs into fewer than 10 ROs but pointed out that VBA has made no effort to make a similar consolidation of the adjudication of compensation claims. If VBA were ever to consolidate the adjudication of claims into a few relatively large ROs, it would be more practical to locate an independent STAR unit in each of these ROs to review the accuracy of cases each one processed. Each RO STAR unit would then need to mail to a central review unit only a relatively small random sample of the cases it reviewed so that the central unit could ensure the reviews’ appropriateness and consistency. VA’s comments are printed in appendix III. As agreed with your office, we plan no further distribution of this report until 7 days from its date of issue, unless you publicly announce its contents earlier. We will then send copies to the Chairman of the House Committee on Veterans’ Affairs, the Secretary of the Department of Veterans Affairs, other congressional committees, and others who are interested. We will also make copies available to others upon request. If you have any questions about this report, please call me at (202) 512-7101 or Irene P. Chu, Assistant Director, at (202) 512-7102. Other major contributors to this report were Ira B. Spears, Mark Trapani, Connie D. Wilson, Paul C. Wright, and Deborah L. Edwards. Before the Veterans Benefits Administration (VBA) implemented the Systematic Technical Accuracy Review (STAR) measurement system, it reported that regional offices (RO) accurately processed and adjudicated disability compensation and disability pension claims more than 95 percent of the time during fiscal years 1993-97 (see table I.1). 
The validity of such high accuracy rates, however, seemed inconsistent with the results of decisions made by the Board of Veterans’ Appeals when veterans appealed unfavorable RO decisions. The Board of Veterans’ Appeals awarded benefits or remanded cases for further development more than 60 percent of the time when veterans appealed RO decisions during fiscal years 1993-97 (see table I.2). Only a small proportion of RO decisions are appealed to the Board. For example, in fiscal year 1997, veterans filed notices of disagreement in about 14 percent of the disability compensation claims processed by ROs (see table I.3). The number of cases appealed, however, is less than the number of cases in which veterans file a notice of disagreement with VA. In some cases, after notices of disagreement are filed, ROs award the benefits sought, or some veterans decide not to continue with their appeals if the RO again denies benefits at this point. In fiscal year 1997, the Board received initial substantive appeals equivalent to about 5 percent of all disability compensation claims processed by ROs.

Under the pre-STAR accuracy measurement system, VBA annually reviewed approximately 5,700 compensation and pension cases, or approximately 100 cases randomly selected from the cases completed by each of 57 ROs. These cases were selected from the entire universe of compensation and pension work products completed by the ROs. Using this procedure, VBA annually produced, with a reasonable level of statistical precision, a national accuracy rate for the entire body of compensation and pension work done by the ROs during the prior year. The sample of approximately 100 cases selected for each RO was too small to produce accuracy rates for each RO with a reasonable level of statistical precision. However, VBA required each RO to self-review a sample of 300 to 900 cases annually, depending on the size of the RO. These RO self-reviews were intended to provide the RO with information needed to improve quality, not to compute accuracy rates for VBA to measure performance.

Under STAR, VBA annually reviews 7,371 compensation and pension cases for the nine service delivery networks (SDN), and the 57 ROs self-review about 44,000 cases. These cases are made up of three separate samples: (1) rating-related work products; (2) authorization work products that require significant development, review, and administrative decision or award action but may not involve any rating-related action; and (3) principal guardianship files, referred to as fiduciary cases. (See table II.1 for SDN and RO sample sizes.) For rating-related work products and authorization work products that typically do not require rating-related action, the sampling methodology will allow VBA to produce accuracy rates with a reasonable level of statistical precision for the nation and each SDN. However, the sample of fiduciary cases is too small to provide accuracy rates with the same level of statistical precision. Similarly, for cases that are self-reviewed by ROs, the sampling methodology will allow each RO to produce accuracy rates with a reasonable level of statistical precision for rating-related work products and authorization work products typically not requiring ratings. Again, however, the sample of fiduciary cases is too small to provide accuracy rates with the same level of statistical precision.
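The relationship between sample size and statistical precision that runs through this appendix can be illustrated with the standard normal-approximation formula for the margin of error of a sample proportion. The sketch below is purely illustrative: it uses the textbook formula at a 95-percent confidence level and is not a description of VBA’s or GAO’s actual sampling methodology. The 70-percent accuracy rate and the sample sizes are taken from the examples cited in this report.

```python
# Illustrative only: textbook normal-approximation margin of error for a
# sample proportion at a 95-percent confidence level. This is not VBA's or
# GAO's actual sampling methodology; it simply shows why roughly 100 cases
# per RO is too small to estimate an RO-level accuracy rate precisely,
# while several thousand cases support a precise national estimate.
import math

Z_95 = 1.96  # critical value for a 95-percent confidence level

def margin_of_error(p: float, n: int) -> float:
    """Half-width of the 95-percent confidence interval for a proportion."""
    return Z_95 * math.sqrt(p * (1 - p) / n)

estimated_accuracy = 0.70  # the 70-percent example used in this report

for n in (100, 900, 5_700):  # sample sizes mentioned in this report
    moe = margin_of_error(estimated_accuracy, n)
    low, high = estimated_accuracy - moe, estimated_accuracy + moe
    print(f"n={n:>5}: {estimated_accuracy:.0%} +/- {moe:.1%} "
          f"(interval {low:.1%} to {high:.1%})")

# n=  100: 70% +/- 9.0% (interval 61.0% to 79.0%)
# n=  900: 70% +/- 3.0% (interval 67.0% to 73.0%)
# n= 5700: 70% +/- 1.2% (interval 68.8% to 71.2%)
```

At roughly 100 cases, the interval spans almost 20 percentage points, which is consistent with the report’s observation that a sample of that size cannot yield RO-level accuracy rates with a reasonable level of statistical precision, whereas samples of several thousand cases can support national and SDN-level estimates.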
For each case reviewed under the previous accuracy measurement system, VBA categorized each error into one of three areas of the claims adjudication process: (1) control and development of the claim, (2) decision elements, and (3) notification to the veteran. Thus, for example, if a case had only one error, VBA would record this error under the appropriate area of the claims adjudication process and would show the two other areas as error-free for that case. After all cases were reviewed, VBA would compute an accuracy rate for each of the three areas in the claims adjudication process. To arrive at an overall accuracy rate for the three areas combined, VBA computed their arithmetic mean (or average). For example, table II.2 shows a hypothetical outcome for accuracy reviews of 10 cases. Under the control and development area, one case has an error (a 90-percent accuracy rate); under the decision element area, two cases have errors (an 80-percent accuracy rate); and under the notification area, one case has an error (a 90-percent accuracy rate). For this sample of 10 cases as a whole, the overall accuracy rate is the average of these three accuracy rates, or 86.6 percent.

For each case reviewed under STAR, however, VBA does not compute separate accuracy rates for the three areas of the claims adjudication process. If a case has any errors in any area of the claims adjudication process, the entire case is counted as incorrect for accuracy rate computation purposes. This approach tends to result in a lower accuracy rate than under the previous system. For example, in the hypothetical sample of 10 cases shown in table II.2, 3 cases would be counted as incorrect under STAR because they contain at least one processing error, and the resultant accuracy rate for the sample would be only 70 percent (7 out of 10 cases with no errors = 70-percent accuracy rate), compared with the overall accuracy rate of 86.6 percent calculated under the previous system.

Pursuant to a congressional request, GAO reviewed the Department of Veterans Affairs’ (VA) regional offices’ (RO) accuracy in processing disability claims, focusing on: (1) the extent of improvements made by the Systematic Technical Accuracy Review (STAR) system in measuring claims-processing accuracy; (2) additional efforts needed to strengthen the system; and (3) challenges the Veterans Benefits Administration (VBA) faces in meeting goals for improving claims-processing accuracy.
GAO noted that: (1) the new STAR system represents an important step forward by VBA in measuring the accuracy of compensation and pension claims processing; (2) compared with the previous system, STAR focuses more on RO decisions that are likely to contain processing errors, uses a stricter method for computing accuracy rates, provides more data on the performance of organizational levels within VBA, collects more data on processing errors, and stores more accuracy review results in a centralized database; (3) even so, VBA can further strengthen STAR's ability to identify error-prone cases and claims-processing weaknesses so that it can take corrective actions; (4) VBA needs to better pinpoint error-prone cases and weaknesses in the development of evidence by collecting more specific data on the types of medical characteristics and deficiencies in medical evidence that are most prevalent in incorrect decisions; (5) VBA can also better address vulnerabilities in the integrity of accuracy data; (6) STAR reviewers in ROs do not have sufficient separation of duties or adequate independence to meet government standards for internal controls or program performance audits; (7) these shortcomings raise concern about the integrity of STAR accuracy data, which are a key factor in the performance measurement system designed by VBA to meet the requirements of the Government Performance and Results Act (GPRA) of 1993; (8) while such system improvements are necessary, they alone are not sufficient for VBA to meet its goal for improving accuracy; (9) using the STAR pilot test's 64-percent accuracy rate as a baseline, VBA's goal is to achieve a 93-percent accuracy rate by fiscal year 2004; (10) VBA faces management challenges that it must address successfully in order to meet its accuracy improvement goal; (11) to do this, VBA recognizes that: (a) its newly implemented performance measurement system must hold program managers accountable for performance; and (b) the training program under development must effectively train the current RO workforce as well as the many new employees who will have to be hired in the coming decade to replace those who retire; and (12) it is too early to determine whether VBA's efforts to meet these challenges will be successful.
Mr. Chairman and Members of the Committee: I am pleased to be here today to discuss our observations on the Department of Justice’s August draft of its strategic plan. The Government Performance and Results Act of 1993 (the Results Act) requires that all executive branch agencies submit their plans to Congress and the Office of Management and Budget (OMB) by September 30, 1997. My statement focuses on Justice’s August draft strategic plan, which builds on our July comments regarding Justice’s February draft plan. Specifically, my statement will focus on the August plan’s compliance with the Act’s requirements and on the extent to which it covered crosscutting program activities, management challenges, and Justice’s capacity to provide reliable performance information.

In summary, Justice’s February draft of its strategic plan was incomplete in that of the six elements required by the Act, three—the relationship between long-term goals/objectives and the annual performance plans, the key factors external to Justice that could affect Justice’s ability to meet its goals, and a program evaluation component—were not specifically identified in the draft plan. The remaining three elements—the mission statement, goals and objectives, and strategies to achieve the goals and objectives—were discussed. The August plan includes two of the three missing elements, but the plan does not include a required discussion on a third element—how the long-term goals and objectives are tied to Justice’s annual performance plans. In addition, the revised plan would better meet the purposes of the Act if it provided more complete coverage of crosscutting programs, management challenges, and performance information.

In the 1990s, Congress put in place a statutory framework to address long-standing weaknesses in federal government operations, improve federal management practices, and provide greater accountability for achieving results. This framework included as its essential elements financial management reform legislation, information technology reform legislation, and the Results Act. In enacting this framework, Congress sought to create a more focused, results-oriented management and decisionmaking process within both Congress and the executive branch. These laws seek to improve federal management by responding to a need for accurate, reliable information for congressional and executive branch decisionmaking. This information has been badly lacking in the past, as much of our work has demonstrated. Implemented together, these laws provided a powerful framework for developing fully integrated information about agencies’ missions and strategic priorities, data to show whether or not the goals are achieved, the relationship of information technology investment to the achievement of those goals, and accurate and audited financial information about the costs of achieving mission results.

The Results Act focuses on clarifying missions, setting goals, and measuring performance toward achieving those goals. It emphasizes managing for results and pinpointing opportunities for improved performance and increased accountability. Congress intended for the Act to improve the effectiveness of federal programs by fundamentally shifting the focus of management and decisionmaking away from a preoccupation with tasks and services to a broader focus on results of federal programs. Among other elements, the Act requires that agency strategic plans describe how program evaluations were used to establish and revise strategic goals and include a schedule for future program evaluations.
Justice’s strategic plan is organized around what Justice has identified as its seven core functions: (1) investigation and prosecution of criminal offenses; (2) assistance to state and local governments; (3) legal representation, enforcement of federal laws, and defense of federal government interests; (4) immigration; (5) detention and incarceration; (6) protection of the federal judiciary and improvement of the justice system; and (7) management.

Justice’s February draft of its strategic plan was incomplete and did not provide Congress with critical information for its consultations with Justice. Justice’s August version added two of the three required elements that were missing in the February plan. As a result, the August plan includes, to some degree, a discussion on five of the six required elements—a mission statement, goals and objectives, key external factors, a program evaluation component, and strategies to achieve the goals and objectives. The August plan does not include a required discussion of a sixth element—the relationship between Justice’s long-term goals/objectives and its annual performance plans.

The plan’s mission statement reads: “Our mission at the United States Department of Justice is to enforce the law and defend the interests of the U.S. according to the law, provide Federal leadership in preventing and controlling crime, seek just punishment for those guilty of unlawful behavior, administer and enforce the Nation’s immigration laws fairly and effectively and ensure fair and impartial administration of justice for all Americans.”

Justice’s mission statement covers six of the seven core functions that Justice identified but does not specify the detention and incarceration function, which is one of Justice’s largest budget items. The plan does incorporate the detention and incarceration function in the discussion of goals and objectives and in its strategies to achieve those goals and objectives. Justice officials said that it was their intent to cover the detention and incarceration function by the phrases “seek just punishment . . .” and “ensure fair and impartial administration of justice . . .” While we agree that mission statements may vary in the extent to which they specify particular activities, we believe that it would be helpful to explicitly include the detention and incarceration function in this case. Our belief is based on Justice’s decision to specify all of the other major functions in its mission statement and our concern that the Department’s stakeholders may not interpret the phrases cited by Justice officials as indicating that the detention and incarceration component is part of its mission.

Justice’s goals and objectives cover its major functions and operations and are logically related to its mission. However, they are not as results oriented as they could be, and some focus on activities and processes. For example, one set of results-oriented goals involves reducing violent, organized, and gang-related crime; drug-related crime; espionage and terrorism; and white collar crime.
However, goals in other areas are more process oriented, such as “Represent the United States in all civil matters for which the Department of Justice has jurisdiction,” “Promote the participation of victims and witnesses throughout each stage of criminal and juvenile justice proceedings at the Federal, State, and local levels,” and “Make effective use of information technology.” Another concern we have with some of the goals is that they are not always expressed in as measurable a form as intended by OMB guidance. For example, two of Justice’s goals in the legal representation, enforcement of federal laws, and defense of U.S. interests core function are to protect the civil rights of all Americans and safeguard America’s environment and natural resources. It is not clear from the August plan how Justice will measure its progress in achieving these goals.

The Results Act and OMB Circular A-11 indicate that agency strategic plans should describe the processes the agencies will use to achieve their goals and objectives. Our review of Justice’s strategic plan, specifically the strategies and performance indicators, identified areas where the plan did not fully meet the Act’s requirements and OMB Circular A-11 guidance. For example, it is unclear how Justice will be able to determine the extent to which its own programs and activities have contributed to changes in violent crime, availability and abuse of illegal drugs, espionage and terrorism, and white collar crime. Similarly, in its immigration core function, Justice has a goal to maximize deterrence to unlawful migration by reducing the incentives of unauthorized employment and assistance. It is likewise unclear how Justice will be able to determine the effect of its efforts to deter unlawful migration, as differentiated from the effect of changes in the economic and political conditions in countries from which illegal aliens originated. The plan does not address either issue.

Some of Justice’s performance indicators are more output than outcome related. For example, one cited strategy for achieving the goal of ensuring border integrity is to prevent illegal entry by increasing the strength of the Border Patrol. One of the performance indicators Justice is proposing as a measure of how well the strategy is working is the percentage of time that Border Patrol agents devote to actual border control operations. While this measure may indicate whether agents are spending more time controlling the border, it is not clear how it will help Justice assess its progress in deterring unlawful migration.

The Act requires that agencies’ plans discuss the types of resources (e.g., human skills, capital, and information technology) that will be needed to achieve the strategic and performance goals, and OMB guidance suggests that agencies’ plans discuss any significant changes to be made in resource levels. Justice’s plan does not include either discussion. This information could be beneficial to Justice and Congress in agreeing on the goals, evaluating Justice’s progress in achieving the goals, and making resource decisions during the budget process.

In its August plan, Justice added a required discussion on key external factors that could affect its plan outcomes. Justice discusses eight key external factors that could significantly affect achievement of its long-term goals. These factors include emergencies and other unpredictable events (e.g., the bombing of the Alfred P. Murrah building), changing statutory responsibilities, changing technology, and developments overseas.
According to Justice, isolating the particular effects of law enforcement activity from these eight factors that affect outcomes and over which Justice has little control is extremely difficult. This component of the plan would be more helpful to decisionmakers if it included a discussion of alternatives that could reduce the potential impact of these external factors.

In its August plan, Justice added a required discussion on the role program evaluation is to play in its strategic planning efforts. Justice recognizes that it has done little in the way of formal evaluations of Justice programs and states that it plans to examine its evaluation approach to better align evaluations with strategic planning efforts. The August plan identifies ongoing evaluations being performed by Justice’s components. OMB guidance suggests that this component of the plan include a general discussion of how evaluations were used to establish and revise strategic goals, and identify future planned evaluations and their general scope and time frames. Justice’s August plan does neither.

Under the Results Act, Justice’s long-term strategic goals are to be linked to its annual performance plans and the day-to-day activities of its managers and staff. This linkage is to provide a basis for judging whether an agency is making progress toward achieving its long-term goals. However, Justice’s August plan does not provide such linkages. In its August plan, Justice pointed out that its fiscal year 1999 annual performance planning and budget formulation activities are to be closely linked and that both are to be driven by the goals of the strategic plan. It also said that the linkages would become more apparent as the fiscal year 1999 annual performance plan and budget request are issued.

The plan also does not adequately address how Justice will coordinate crosscutting program activities with other federal agencies that have similar or complementary responsibilities. For example, the plan does not discuss how Justice and the Department of the Treasury, which have similar responsibilities concerning the seizure and forfeiture of assets used in connection with illegal activities (e.g., money laundering), will coordinate and integrate their operations; how INS will work with the Bureau of Prisons and state prison officials to identify criminal aliens; or how INS and the Customs Service, which both inspect arriving passengers at ports of entry to determine whether they are carrying contraband and are authorized to enter the country, will coordinate their resources.

Along these lines, certain program areas within Justice have similar or complementary functions that are not addressed or could be better discussed in the strategic plan. For example, both the Bureau of Prisons and INS detain individuals, but the plan does not address the interrelationship of their similar functions or prescribe comparable measures for inputs and outcomes. As a second example, the plan does not fully recognize the linkage among Justice’s investigative, prosecutorial, and incarceration responsibilities.

One purpose of the Results Act is to improve the management of federal agencies. Therefore, it is particularly important that agencies develop strategies that address management challenges that threaten their ability to achieve both long-term strategic goals and this purpose of the Act. Over the years, we, as well as others, including the Justice Inspector General and the National Performance Review (NPR), have addressed many management challenges that Justice faces in carrying out its mission.
In addition, recent audits under the Chief Financial Officers Act of 1990 (CFO Act), expanded by the Government Management Reform Act, have revealed internal control and accounting problems. Justice’s draft strategic plan is silent on these issues.

The August plan does, however, contain a new section on “Issues and Challenges in Achieving Our Goals,” which was not in its February plan. This new section discusses Justice’s process for managing its information technology investments, steps taken to provide security over its information systems, and its strategy to ensure that computer systems accommodate dates beyond the year 2000. However, neither this new section nor the “Management” core function addresses some of the specific management problems that have been identified over the years and the status of Justice’s efforts to address them.

In its August draft plan, Justice also added a discussion on “accountability,” which points out that Justice has an internal control process that systematically identifies management weaknesses and vulnerabilities and specifies corrective actions. This section also recognizes the role of Justice’s Inspector General. However, the plan would be more helpful if it included a discussion of corrective actions Justice has planned for internally and externally identified management weaknesses, as well as how it plans to monitor the implementation of such actions. In addition, the plan does not address how Justice will correct significant problems identified during the Inspector General’s fiscal year 1996 financial statement audits, such as inadequate safeguarding and accounting for physical assets and weaknesses in the internal controls over data processing operations.

To efficiently and effectively operate, manage, and oversee its diverse array of law enforcement-related responsibilities, Justice needs reliable data on its results and those of other law enforcement-related organizations. Further, Justice will need to rely on a variety of external data sources (e.g., state and local law enforcement agencies) to assess the impact of its plan. These data are needed so that Justice can effectively measure its progress and monitor, record, account for, summarize, and analyze crime-related data. Justice’s August strategic plan contains little discussion about its capacity to provide performance information for assessing its progress toward its goals and objectives over the next 5 years. Strategies that could strengthen this capacity include (1) developing accurate and reliable budget, accounting, and performance data to support decisionmaking and (2) integrating the planning, reporting, and decisionmaking processes. These strategies could assist Justice in producing results-oriented reports on its financial condition and operating performance.

Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions.
GAO discussed the Department of Justice's August 1997 draft strategic plan developed in compliance with the Government Performance and Results Act of 1993, focusing on the plan's compliance with the Act's requirements and on the extent to which it covered crosscutting program activities, management challenges, and Justice's capacity to provide reliable performance information. GAO noted that: (1) Justice's plan discusses, to some degree, five of the six required elements--mission statement, goals and objectives, key external factors, a program evaluation component, and strategies to achieve the goals and objectives; (2) the plan does not include a required discussion on the relationship between Justice's long-term goals/objectives and its annual performance plans; (3) the draft plan could better address how Justice plans to: (a) coordinate with other federal, state, and local agencies that perform similar law enforcement functions, such as the Defense and State Departments regarding counter-terrorism; (b) address the many management challenges it faces in carrying out its mission, such as internal control and accounting problems; and (c) increase its capacity to provide performance information for assessing its progress in meeting the goals and objectives over the next 5 years.
Iran is a nation of strategic importance due to its central geographic location and huge reserves of fossil fuels. Iran’s neighbors include Iraq and Afghanistan, two countries with ongoing U.S. and coalition military operations, and Pakistan and Turkey, key U.S. allies in the global war on terrorism (see fig. 1). Furthermore, Iran borders both the Persian Gulf and the Strait of Hormuz, through which roughly one-fifth of the global oil supply is exported. According to the Department of Energy, Iran has the third largest proven oil reserves in the world. Iran’s oil export revenues constitute about 80 percent of its total export revenue, and accounted for nearly one-fifth of its gross domestic product (GDP) in 2004. High oil prices in recent years have further boosted Iran’s oil export revenues. U.S.-Iranian relations have often been strained since the early years of the Cold War. Following the U.S.-supported overthrow of Iran’s prime minister in 1953, the United States and others backed the regime of Shah Mohammed Reza Pahlavi for a quarter century. Although it did much to develop the country economically, the Shah’s government repressed political dissent. In 1978, domestic turmoil swept the country as a result of religious and political opposition to the Shah’s rule, culminating in the collapse of the Shah’s government in February 1979 and the establishment of an Islamic republic led by Supreme Leader Ayatollah Khomeini. In November 1979, militant Iranian students occupied the American embassy in Tehran with the support of Khomeini. Shortly thereafter, the United States broke diplomatic relations with Iran, which remain suspended to this day. U.S. sanctions to deter Iran’s proliferation and support for terrorism fall into three categories. First, Treasury leads U.S. government efforts to implement a comprehensive trade and investment ban against Iran. Second, State is responsible for implementing several laws that sanction foreign parties engaging in proliferation or terrorism-related transactions with Iran. Third, Treasury or State can impose financial sanctions, including a freeze on assets and a prohibition on access to U.S. financial institutions, against parties who engage in proliferation or terrorism- related activities with any party, including Iran. (See app. II for more information regarding the timing and nature of U.S. and UN sanctions.) Treasury administers a ban on almost all U.S. trade or investment activity involving Iran. The prohibitions of the trade and investment ban began with a 1987 ban on Iranian imports and were followed by a 1995 ban on U.S. exports to and investment in Iran. These prohibitions apply to U.S. persons, including U.S. companies and their foreign branches, wherever located. U.S. officials stated that the ban does not apply to independent foreign subsidiaries of U.S. companies. Non-U.S. persons are generally exempt from the provisions of the ban. Trade sanctions against Iran were eased in 2000 to allow for the purchase and import from Iran of carpets and food products. Further, the Trade Sanctions Reform and Export Enhancement Act of 2000 lifted, subject to certain exceptions, U.S. sanctions on commercial sales of food, agricultural commodities, and medical products to several sanctioned countries, including Iran. The ban also prohibits U.S. financial institutions from having direct banking relationships with banks in Iran and banks owned or controlled by the government of Iran. 
According to a Treasury official, the trade and investment ban is aimed at making it more difficult for Iran to procure U.S. goods, services, and technology, including those that could be used for terrorism or proliferation. The official further stated that, as with all U.S. economic sanctions programs, the premise of the sanctions is to exact a price on the sanctioned entity, which serves as an inducement to change the behavior that threatens U.S. national security and foreign policy goals. Sanctions also serve to make it more difficult for a sanctioned entity to pursue its threatening conduct. Treasury’s Office of Foreign Assets Control (OFAC) administers the trade and investment ban and is responsible for reviewing and licensing requests to export or re-export goods to Iran, with most items subject to a general policy of denial. OFAC is also responsible for conducting civil investigations of sanctions violations, which can result in warning letters, cease and desist orders, and civil penalties of up to $250,000 (or an amount that is twice the amount of the transaction that is the basis for the violation) imposed administratively. We found that Iran sanctions were involved in 94 out of 425 civil penalty cases that OFAC assessed or settled as a result of sanction violations between 2003 and 2007. In cases where OFAC finds evidence of willful violations of the trade and investment ban, it may refer those cases to other federal law enforcement agencies for criminal investigation. Investigations of potential criminal violations can be conducted by the Department of Commerce’s Bureau of Industry and Security (BIS), DHS’s Immigration and Customs Enforcement (ICE), and the Department of Justice’s Federal Bureau of Investigation (FBI), sometimes acting jointly. Criminal prosecutions are pursued by the Department of Justice. Under recently enacted legislation, criminal penalties for violations of the trade and investment ban can range up to $1,000,000 and (for natural persons) 20 years in jail. According to officials at key U.S. export enforcement agencies, the trade ban may be circumvented by the transshipment of U.S. exports through third countries. Officials identified several locations that serve as common transshipment points for goods destined for Iran. These locations include Germany, Malaysia, Singapore, the United Kingdom, and, according to Commerce officials, the United Arab Emirates (UAE) in particular. Two trends underscore the possibility that U.S. goods are being shipped to Iran through the UAE. First is the considerable growth in U.S. trade flows through the UAE. The United States has become the number one supplier of imports to the UAE and Iran is the UAE’s largest trade partner. Moreover, although trade statistics do not specify the portion of UAE exports to Iran that are of U.S.-origin, the UAE transships a higher proportion of its U.S. imports than other countries do. According to Commerce officials transshipments have been a considerable problem in terms of the effectiveness of sanctions in place against Iran. The second trend is the high rate of unfavorable end-use checks for U.S. items exported to the UAE. The Department of Commerce relies on post- shipment verification (PSV) checks as its primary method of detecting and preventing illegal transfers, including transshipments, of U.S.-origin exports to Iran. 
However, according to Commerce officials, in August 2007, the UAE enacted a comprehensive export, reexport, and transshipment control law to better enable the UAE to control transshipment of sensitive goods through its ports. The law is too new to assess its effectiveness. (Further information is classified.) Congress has taken steps to discourage trade by third-country parties with Iran by enacting sanction laws that have a “secondary boycott” effect. Three U.S. sanction laws discourage foreign parties from engaging in proliferation or terrorism-related activities with Iran (see table 1). State leads efforts to implement these laws and has imposed sanctions under these laws to varying degrees. As table 1 shows, State has imposed sanctions against foreign parties, including bans on U.S. government procurement opportunities and sales of defense-related items, in 111 Iran-specific cases since 2000 under a law currently known as the Iran, North Korea, and Syria Nonproliferation Act (INKSNA). This law targets foreign persons that have transferred goods, services, or technology to Iran that are listed on various multilateral export control lists. According to a State official, entities engaged in conventional arms transfers were the most widely sanctioned, followed by those involved in chemical-biological, missile, and nuclear activities. Since 2000, almost half of the cases (52) involved Chinese parties, with North Korean and Russian parties accounting for 9 and 7 cases, respectively. In 2007, Syrian parties were sanctioned in 8 cases. According to State officials, in most cases, the full range of sanctions authorized under INKSNA is imposed, and sanctions have been typically imposed for a 2- year period. Over 30 percent of all sanction cases involve parties that were sanctioned multiple times under the law—some, primarily Chinese firms, 3 or more times. According to a State official, such instances were the result of new proliferation activities by these firms. Because the law establishes the sanctions that are available, the practical effect of continuing to impose sanctions against the same parties is to extend the length of time the sanctions are imposed and make the public aware of the firms facilitating proliferation with Iran. State officials said that generally no consideration of additional penalties or measures is given to parties who have been sanctioned multiple times, although some of these entities have been sanctioned under other sanction tools. However, State officials emphasized that they raise concerns about the activities of such entities with foreign governments as appropriate. In deciding to sanction an entity under INKSNA, State officials reported that every 6 months they assess as many as 60,000 intelligence reports to identify transfer cases that should be submitted to agencies for review. State decides, on a discretionary basis, which parties to sanction following a meeting chaired by the NSC that solicits input from DOD, Energy, and Treasury and other agencies regarding the disposition of each case. According to a State official, the Deputy Assistant Secretary-level interagency group reviews cases to recommend whether the foreign persons were reportable under the act, and if so, (1) whether there was information establishing that a case was exempt from sanctions under the act, (2) whether to seek from the foreign person additional information concerning the transfer or acquisition as provided for in the act, and (3) whether sanctions under the act should be applied. 
The final decision regarding the disposition of each case is made by the Deputy Secretary of State. One State official noted that there have been several cases in which State decided not to impose sanctions because of positive nonproliferation actions taken by the foreign government responsible for the firm engaging in the proliferation transfer. A foreign government punishing or prosecuting the firm responsible for the transfer is one example of the type of positive action that has resulted in a decision not to impose penalties. Another reason why State may decide not to impose sanctions is a concern that such an action, which is made public, may compromise the intelligence “sources and methods” used to collect information on a particular proliferation case. Once final decisions are made, State then submits a classified report to Congress identifying parties that have engaged in sanctionable activities and parties that will be sanctioned, and ultimately publishes the names of sanctioned entities in the Federal Register. (See appendix III for a detailed listing of these sanction cases.) Under a second law, the Iran-Iraq Arms Nonproliferation Act of 1992 (also shown in table 1), State has imposed sanctions 12 times. Under this act, mandatory sanctions include prohibiting the export to Iran of all goods specified on the Commerce Control List (CCL). State also can impose sanctions against foreign parties, such as a ban on U.S. government procurement opportunities or export licenses that knowingly and materially contribute to Iran’s efforts to acquire destabilizing numbers and types of advanced conventional weapons. As with the Iran, North Korea, and Syria Nonproliferation Act, decisions under this act include interagency input from Commerce, Energy, and DOD, with State in the lead and responsible for deciding which cases warrant imposition of sanctions. In 2002, State imposed sanctions in 10 instances, 9 of which were against Chinese parties. In 2003, sanctions were imposed against two parties, one Jordanian and one Indian. No sanctions have been imposed since 2003 primarily because, according to State officials, it is difficult to establish that transfers were made by parties who knowingly and materially contributed to Iran’s proliferation. Table 1 shows that State has not imposed sanctions against any party under a third law—the Iran Sanctions Act—though State officials noted that the law has been useful in raising U.S. concerns over Iran. The goal of the Iran Sanctions Act (previously known as the Iran-Libya Sanctions Act of 1996, or ILSA) has been to deny Iran the financial resources to support international terrorism or the development of weapons of mass destruction (WMD) by limiting Iran’s ability to find, extract, refine, or transport its oil resources. State considered sanctions on one occasion in 1998; however, the U.S. government granted waivers to the parties involved. In that instance, the U.S. government determined that the investments of three foreign companies—Total (France), Gazprom (Russia), and Petronas (Malaysia)–in the development of Iran’s South Pars gas field were sanctionable under ILSA. However, the Secretary of State determined that it was important to the U.S. national interest to waive the imposition of sanctions against these firms. In making this determination, the Secretary considered factors such as the desire to build an effective multilateral regime to deny Iran the ability to acquire WMD and support acts of international terrorism. 
Further, the European Union (EU) had concerns that the use of the act to impose sanctions would constitute extraterritorial application of U.S. law. The possibility that the EU might take this issue to the World Trade Organization for resolution played a role in convincing the U.S. government to waive sanctions. In addition, a report on the use of ILSA prepared by State and cleared by the NSC noted that the sanctions that could be imposed were unlikely to induce the three companies to abandon their investments because the companies were insulated from any practical negative impact of the sanctions.

The U.S. government has taken actions against Iran using targeted financial sanctions that can be used against any party that engages in certain proliferation or terrorism activities. In June 2005, the President issued Executive Order 13382 to freeze the assets of persons engaged in proliferation of WMD and members of their support networks. This action followed the issuance in September 2001 of Executive Order 13224 to freeze the assets of persons who commit, threaten to commit, or support terrorism. Executive Orders 13382 and 13224 were both issued under the authority of the International Emergency Economic Powers Act (IEEPA). Persons targeted under these financial sanctions are said to be “designated” as either WMD proliferators or global terrorists, depending on which set of sanctions is employed, and any transactions with them by U.S. persons are prohibited. According to Treasury, the goal of this action is to deny sanctioned parties access to the U.S. financial and commercial systems. Treasury or State can make designations under these financial sanctions, which are published in the Federal Register.

As of October 25, 2007, 53 of the 70 parties designated under the nonproliferation financial sanctions were tied to Iranian proliferation activities. Of these 53 parties, 48 were either Iranian entities or overseas subsidiaries of Iranian banks, 4 were Chinese, and 1 was American. Several designations have been made in recent months. For example, in June 2007, Treasury designated four Iranian companies for their role in Iran’s proliferation of WMD. On October 25, 2007, State and Treasury designated 27 entities or individuals under Executive Order 13382, including the Islamic Revolutionary Guard Corps (IRGC) and other companies or individuals affiliated with the IRGC, the Ministry of Defense and Armed Forces Logistics, and two Iranian banks, including Bank Melli—Iran's largest bank.

With regard to the antiterrorism financial sanctions, Treasury was unable to provide us with data on the number of Iran-related designations because it does not compile information about the country or countries with which the designated entities are involved. We were, however, able to identify instances where antiterrorism financial sanctions were imposed. For example, on October 25, 2007, under Executive Order 13224, Treasury designated the IRGC’s Qods Force as a supporter of terrorism. According to Treasury, the Qods Force provides material support to the Taliban, Lebanese Hizbollah, Hamas, and other terrorist groups. Treasury also designated Iran's Bank Saderat, which is already subject to financial restrictions under the trade ban, as a terrorism financier.

U.S. officials and experts report that U.S. sanctions are having specific impacts on Iran; however, the extent of such impacts is difficult to determine, and agencies have not assessed the overall impact of sanctions. First, U.S. officials report that U.S.
sanctions have slowed foreign investment in Iran’s petroleum sector, which hinders Iran’s ability to fund proliferation and terrorism-related activities. Second, financial sanctions deny parties involved in Iran’s proliferation and terrorism activities access to the U.S. financial system and complicate their support for such activities. Third, U.S. officials have identified broad impacts of sanctions, such as providing a clear statement of U.S. concerns about Iran. However, other evidence raises questions about the extent of reported economic impacts. Since 2003, the Iranian government has signed contracts reported at approximately $20 billion with foreign firms to develop its energy resources, though it is uncertain whether these contracts will ultimately be carried out. In addition, sanctioned Iranian banks may be able to turn to other financial sources or fund their activities in currencies other than the U.S. dollar. U.S. and international reports also find that Iran continues proliferation activities and support for terrorism. Finally, U.S. agencies, except for Treasury’s assessments of its financial sanctions under Executive Orders 13382 and 13224, do not assess the impact of sanctions in helping achieve U.S. objectives nor collect data demonstrating the direct results of their sanctioning and enforcement actions. State and Treasury officials report that sanctions have had specific impacts such as delaying foreign investment in Iran’s petroleum sector and reducing Iran’s access to the U.S. financial system. In addition, broad impacts of sanctions, such as their symbolic value, also have been recognized. U.S. officials and experts have stated that U.S. sanctions have played a role in slowing Iran’s progress in developing its oil and gas resources. The Iran Sanctions Act is intended to limit investment in Iran’s petroleum sector, with an expectation that curbing such investment would disrupt the revenue generated by new oil and gas investments and reduce Iran’s ability to pursue policies that the United States deemed unacceptable. A 2004 State Department report noted that the law had, among other things, helped delay investment in Iran’s petroleum sector. According to State Department officials, there have been no new final oil and gas investment deals in Iran since 2004. Other experts have similarly noted a slowdown in investment in Iran’s oil and gas sectors and have cited statements that Iranian oil officials had made to that effect. U.S. officials and experts have also noted that, while the existence of the Iran Sanctions Act and its use as a tool for dialogue with foreign parties may be a contributing factor to a slowdown in foreign investment in Iran, Iran’s own investment policies may be contributing to a reduced flow of investment. On the other hand, the Department of State has raised concerns about possible energy deals between Iran and potential foreign investors, including the reported $16 billion China National Offshore Oil Corporation deal for the development of Iran’s North Pars gas field. Further, the United States has expressed concerns about the estimated $4.3 billion preliminary agreement that Royal Dutch Shell, along with Spain’s Repsol, concluded with the Iranian regime for the construction of a liquefied natural gas plant at South Pars, the world’s largest natural gas field. Also, Indian firms have entered into contracts in recent years for the purchase of Iranian gas and oil. 
The proposed construction of a pipeline to deliver Iranian natural gas to India through Pakistan is a project about which the United States has expressed concerns. We also found that since 2003 the Iranian government has signed contracts reported at approximately $20 billion with foreign firms to develop Iran’s energy resources. It is uncertain whether these contracts will ultimately be carried out, and at least one has already been withdrawn. However, these agreements demonstrate foreign firms’ significant interest in financing or underwriting projects in Iran’s energy sector. (See app. IV for a listing of recent major agreements between Iran and foreign investors in Iran’s energy sector.)

State and Treasury officials have testified that financial sanctions deny designated individuals and entities access to the funds needed to sustain Iran’s proliferation. For example, in January 2007, the U.S. government designated Bank Sepah under Executive Order 13382 as a supporter of WMD proliferation, thereby eliminating its access to the U.S. financial system and reducing its ability to conduct dollar transactions. Further, U.S. financial sanctions have reportedly disrupted Iran’s support for terrorism. U.S. officials report that the United States has disrupted Hizbollah’s financial support network by reducing the ability of Iranian banks to interact with the U.S. financial system. For example, in September 2006, Treasury altered the trade ban regulations to cut off Bank Saderat, Iran’s second largest state-owned bank, from dollar transactions due to its support for terrorism. Treasury officials reported that Iran used Bank Saderat to move millions of dollars to terrorist organizations such as Hizbollah, Hamas, and the Palestinian Islamic Jihad. This action complicated the bank’s financial transactions and alerted the world’s financial community to Bank Saderat’s role in funding terrorism.

However, Iran may be able to find alternative financial sources or fund its activities in currencies other than the dollar. Treasury officials have noted that sanctioned parties often find “workarounds” to lessen the sanctions’ impact, and other financial options can be used. For example, sanctioned Iranian banks may turn to euro or other currency transactions to support Iranian government activities. Further, in 2006, a Treasury official testified that stopping money flows to Iran is particularly challenging because the Iranian government draws upon a large network of state-owned banks and parastatal companies that is difficult to penetrate.

State and Treasury officials further reported that the effects of U.S. financial sanctions have been augmented because several large European banks, responding to U.S. diplomatic efforts, have curtailed their business with sanctioned Iranian entities and are refraining from conducting dollar transactions with Iran. At least 7 of the banks that have limited or ended their dealings with sanctioned Iranian entities rank among the 20 largest European banks. U.S. officials also report that a number of governments, including France, Germany, Italy, and Japan, are beginning to reduce their export credits for goods shipped to Iran. U.S. officials have contended that such developments have made it increasingly difficult for Iran to execute important financial transactions necessary for Iran’s domestic energy and other projects. U.S.
agency officials and experts also have cited the increased costs to Iran of obtaining finance and goods, sometimes resulting in inferior component parts. State officials assert that as more countries limit their financial interactions with Iranian entities and individuals engaging in suspect activities, these parties have been denied access to major financial and commercial systems. U.S. officials and sanction experts state that sanctions have other broad impacts. For example, State officials stressed that U.S. sanctions serve as a clear symbolic statement to the rest of the world of U.S. concerns regarding Iran’s proliferation and terrorism-related activities. State officials also noted that sanction laws can be used as a vehicle for dialogue with foreign companies or countries, and the prospect of sanctions can encourage foreign parties to end their interactions with Iran. Finally, U.S. officials have stated that publicly identifying entities and listing them in the Federal Register may deter other firms from engaging in business with sanctioned entities. The extent of the sanctions’ impact in deterring Iran from proliferation activities, acquiring advanced weapons technology, and support for terrorism is unclear. Although Iran halted its nuclear weapons program, it continues to enrich uranium, acquire advanced weapons, and support terrorism. According to the November 2007 U.S. National Intelligence Estimate, Iran halted its nuclear weapons program in the fall of 2003. According to the estimate, Iranian military entities were working under government direction to develop nuclear weapons. However, Iran halted the program because of international scrutiny and pressure resulting from exposure of Iran’s previously undeclared nuclear activities. (See app. II for a timeline of UN and international actions with regard to Iran’s enrichment activities.) Although it has halted its nuclear weapons program, Iran continues its uranium enrichment program. While enriched uranium can be used for nuclear weapons, Iran has stated that its program is for peaceful civilian purposes. The Director General of the International Atomic Energy Agency (IAEA) stated on September 17, 2007, that Iran had not suspended its enrichment activities and continued to build its heavy water reactor at Arak. This announcement followed a series of IAEA discoveries about Iran’s nuclear program. In 2002, the IAEA was informed of a previously undeclared nuclear enrichment plant in Natanz and a heavy water plant in Arak. Subsequent IAEA inspections revealed that Iran had made significant progress toward mastering the technology to make enriched uranium. Iran also continues to acquire advanced weapons technology, including ballistic missile technology, according to Treasury. According to State officials, Chinese entities supply certain dual-use items to Iran, including some that U.S. officials believe could be used in support of Iran’s WMD, ballistic and cruise missiles, or advanced conventional weapons programs. The U.S. government also reports that Iran continues to support terrorism. We have reported that Iran is one of several countries from which Islamic extremism is currently being propagated. In addition, according to State’s 2006 Country Report on Terrorism, Iran continues to be an active state sponsor of terrorism. The report states that the IRGC and Ministry of Intelligence and Security influence Palestinian groups in Syria and the Lebanese Hizbollah to use terrorism in pursuit of their goals. 
The report also noted that Iran provided guidance and training to select Iraqi Shi’a political groups and weapons and training to Shi’a militant groups to enable anticoalition attacks. In July 2007, officials of U.S. intelligence agencies testified that Iran regards its ability to conduct terrorism operations as a key element of its national security strategy. U.S. agencies do not assess the overall impact of sanctions in deterring Iran’s proliferation, acquisition of advanced weapons technology, or terrorism-related activities, noting the difficulty of isolating the impact of sanctions from all other factors that influence Iran’s behavior. In addition, except for Treasury assessments of financial sanctions, agencies do not possess data on the direct results of sanctions, such as the types of goods seized that violate the trade ban or the subsequent behavior of parties that sell prohibited goods to Iran. State, Treasury, and Commerce officials said that they do not measure the overall impact of sanctions they implement. For example, both State and Treasury officials emphasized that, with one exception regarding one sanction law, they have not attempted to measure the ability of U.S. sanctions to deter Iran’s proliferation or terrorism-related activities. State officials stated it is not possible to isolate the impact of sanctions from all other factors that influence Iran’s behavior, such as the actions of other countries. Further, State officials reported that sanctions are just one component of U.S. efforts to influence Iran’s behavior. Treasury officials conduct classified assessments of entities designated under Executive Orders 13382 and 13224, but report that they do not assess the overall impact of sanctions, stating it can be difficult to differentiate the impact of various U.S. efforts. For example, it is difficult to know where the effects of U.S. diplomacy end and the effects of U.S. sanctions begin. State and Treasury officials noted that, while the goal of sanctions is to change Iran’s behavior, such changes take time, and it is not possible to track how sanctions imposed today might affect overall behavior in the future. Such an exercise would be extremely difficult due to the challenges associated with establishing any causal linkage between U.S. sanctions and Iran’s subsequent behavior. In addition, agency officials noted that the sanctions targeting Iran do not constitute a separate program or line of effort; thus, these activities are not monitored or assessed separately. However, according to Treasury officials, sanctions implemented by OFAC constitute a separate program with its own set of regulations (the Iranian Transaction Regulations) and OFAC does focus specific effort on Iran sanctions. Finally, Treasury and Commerce officials stated that it would be difficult to measure either the deterrent impact of sanctions or, conversely, the extent to which illegal or sanctionable activities continue undetected. In 2004, State completed a review of the Iran Sanctions Act (then known as the Iran-Libya Sanctions Act). The ILSA Extension Act of 2001 required the President to provide Congress with a report describing the extent to which the act had been effective in denying Iran the ability to support acts of international terrorism and fund the acquisition of WMD by limiting Iran’s ability to develop its petroleum resources. This report stated that actions taken pursuant to the act had a “modest positive impact.” The 2004 report is the only formal assessment U.S. 
agencies have completed on the broad impact of sanctions against Iran. In addition, agency officials do not possess data on the direct results of sanctions. For example, regarding the trade ban, officials from DHS’s Customs and Border Protection reported that inspectors are not required to document whether or not a given seizure is related to the ban. As a result, they are unable to provide complete data on the volume or nature of goods seized that violate this ban. Further, although Treasury posts on the Internet its OFAC administrative penalties, it does not compile information regarding the number of cases that involve violations of Iran sanctions (we were able to identify such cases after reviewing more than 400 detailed case descriptions) and the nature of such violations. FBI officials said that, within their counterintelligence division, they classify investigations by country of origin but would not be able to distinguish cases involving Iran sanctions from other Iran-related cases because the bureau’s automated data systems do not include such information. In addition, complete DHS/ICE data on Iran sanctions cases are not available because ICE agents are not required to document the country of destination when opening a case, nor is this information always subsequently added as the case progresses. Further, a Justice official stated that the department prosecutes and organizes its cases by statute and does not classify its cases by the specific country or nationality of the individual involved in its data system. It is thus not possible to identify cases specific to the trade and investment ban with Iran. In addition, although agencies cite transshipment as a key means of evading the trade ban, they do not collect data that would help illustrate the magnitude of the problem. Further, State does not review whether sanctions imposed under the law currently known as the Iran, North Korea, and Syria Nonproliferation Act—the law used most frequently to sanction foreign parties—stop sanctioned parties from engaging in proliferation activities with Iran or are relevant for these parties. The law does not require such a review. State officials said that, while they are aware of instances where proliferation activities ended following the imposition of sanctions on particular firms, such information is primarily collected on an anecdotal basis. There has been no overall or systematic review of whether sanctioned entities ended their proliferation activities, though State officials indicated that they monitor the activities of sanctioned parties as part of their daily responsibilities. Further, these officials emphasized that State must apply the sanctions established by law, such as a prohibition on participating in U.S. government procurement opportunities, regardless of their relevance or potential impact. State officials acknowledged the likelihood that the sanctions established by law may have limited relevance for sanctioned parties, which may be illustrated in cases where the same parties are sanctioned repeatedly for proliferation activities with Iran. In addition, OFAC does not compile data on the value of assets frozen pursuant to targeted financial sanctions. OFAC tracks information on assets frozen in the aggregate, not by the amount of assets frozen for each particular party that is sanctioned. OFAC also did not have information regarding the number of parties sanctioned under Iran-related antiterrorism financial sanctions. 
According to OFAC, systematically tracking these data and information is not a useful measure of the efficacy of sanctions.

Iran’s global trade ties and leading role in energy production make it difficult for the United States to isolate Iran and deter its acquisition of advanced weapons technology and support for terrorism. First, Iran’s trade with the world—both imports and exports—has grown since the U.S. trade ban began in 1987. Although trade has fluctuated from year to year, most of the growth has occurred since 2002, coinciding with the rise in oil prices. This trade includes imports of weapons and nuclear technology. Second, global interest in purchasing and developing Iran’s substantial petroleum reserves has kept Iran active in global commerce. Iran’s integration in the world economy has complicated U.S. efforts to encourage other countries to isolate Iran; however, multilateral efforts targeting Iran have recently begun. Beginning in December 2006, the UNSC adopted sanctions against Iran in response to Iran’s noncompliance with its international obligations. These sanctions are still being implemented.

Over the past 20 years, U.S. trade with Iran has decreased, but Iran’s trade with the rest of the world has increased, in large part due to increases in oil prices between 2002 and 2006. Asian countries, particularly China, are increasing their trade with Iran. Countries such as China and Russia continue to provide Iran with sensitive goods.

U.S. trade with Iran declined sharply immediately following the adoption of both the 1987 U.S. ban on imports from Iran and the 1995 ban on U.S. exports to and investment in Iran. However, U.S. exports to Iran rebounded to some degree when the sanctions were eased in 2000. Before the 1987 U.S. import ban, 16 percent of Iran’s total exports, primarily oil, were shipped to the United States. Following the ban, this share dropped to about 0.1 percent. According to our analysis of U.S. trade data, Iran exported $2 billion in goods to the United States in 1987, about $10 million in 1988, and less than $1 million annually for most of the 1990s. Further, 2 percent of Iran’s imports were from the United States before the export ban in 1995; this dropped to almost zero after the ban. Total U.S. exports to Iran declined from about $282 million in 1995 to less than $400,000 in 1996. By 2000, however, total U.S.-Iran trade had increased to about $218 million, largely as a result of the relaxation of the sanctions in that year to allow for the purchase and import of Iranian carpets. In 2006, total U.S.-Iran trade was $247 million.

Our analysis of U.S. trade data indicates that both the export and import declines coincided with significant changes in the types of goods traded. For example, the top U.S. exports to Iran prior to the 1995 export ban were in the UN trade category “nuclear reactors, boilers, and machinery,” while the top exports immediately following the ban were in the category “printed books and other informational materials.” In 2006, the top U.S. exports to Iran were pharmaceuticals and tobacco products. The top U.S. import from Iran before the 1987 import ban was oil, whereas the top import immediately following the ban—and also in 2006—was carpets and other textile floor coverings.

Despite the ban on Iranian imports to the United States in 1987 and the ban on U.S. exports to Iran in 1995, Iran’s overall trade has grown.
From 1987 through 2006, Iran’s exports grew from $8.5 billion to $70 billion, while Iran’s imports grew from $7 billion to $46 billion (see fig. 2). The annual real growth in Iran’s exports between 1987 and 2006 was nearly 9 percent; however, the export growth rate between 2002 and 2006 was 19 percent, reflecting the steep rise in oil prices since 2002 (see fig. 2). Iran’s imports grew at an average annual rate of about 7 percent between 1987 and 2006. Iran’s exports and imports both fluctuated during this period. For example, Iran’s imports increased significantly following the end of the Iran-Iraq war in 1988, followed by steep declines from 1993 to 1994, following Iran’s major currency devaluation (over 1,800 percent). Likewise, Iran’s exports fluctuated. The growth in Iran’s exports from 1989 to 1993 was followed by a general decline through 1998. Exports grew dramatically from 2002 to 2006, coinciding with the rise in the price of oil from $25 to $65 a barrel. The overall growth in Iran’s trade from 1986 to 2006 demonstrates the limits of the U.S. trade ban to isolate Iran and pressure it to reduce its proliferation activities and support for terrorism. Figure 2 shows that in the year following the 1987 U.S. ban on Iranian imports, Iran’s exports to the world did not decline. In fact, Iran’s exports began growing dramatically in 1989. In the 2 years following the 1995 ban on U.S. exports to Iran, Iran’s imports from the world grew, and have generally continued to grow. Iran has been able to readily replace the loss in U.S. trade through trade with other countries, and the total value of Iranian imports and exports has continued to grow largely uninterrupted. In addition to the overall growth of Iran’s trade since 1987, Iran has extensive global trade ties with Europe and the developing world. In particular, trade with Asian countries has nearly doubled since 1994. Asian countries accounted for 30 percent of Iran’s exports in 2006, up from about 16 percent in 1994. Iran’s exports to China increased from about 1 percent in 1994 to about 13 percent in 2006. Japan and China were the top two recipients of exports from Iran, together accounting for more than one-quarter of Iran’s exports in 2006 (see table 2). Iran’s growing trade with China has played a large role in replacing the declining share of EU countries’ trade with Iran over the past decade and contributing to Iran’s growing trade with Asian countries. In 2006, the EU accounted for nearly one-quarter of Iran’s exports to the world, down from 33 percent in 1994. Germany and the United Kingdom were part of this decline. From 1994 to 2006, Iran’s exports to Germany declined from about 6 percent to less than 1 percent and from about 9 percent to less than 1 percent for the United Kingdom. In 2006, Germany and China were Iran’s largest providers of imports, accounting for 23 percent of Iran’s imports. Although Germany has remained the largest supplier of imports to Iran for over a decade, its share of Iran’s imports has declined from about 19 percent in 1994 to 12 percent in 2006, while Iran’s imports from China increased from about 1 percent in 1994 to about 11 percent in 2006 (see table 3). Iran increased its imports from Middle East countries from about 8 percent to 13 percent, with UAE’s share increasing from over 5 percent in 1994 to about 9 percent in 2006. A regional shift in Iran’s import suppliers also took place between 1994 and 2006. 
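The average annual growth rates cited in the preceding paragraphs can be approximated from the endpoint values with a standard compound-growth calculation. The Python sketch below is illustrative only: it uses the nominal dollar figures reported in this section, whereas the rates in the text are computed from inflation-adjusted (real) values, so the results differ somewhat.

```python
def average_annual_growth(start_value, end_value, years):
    """Compound average annual growth rate between two endpoint values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Nominal endpoint values reported in the text, in billions of dollars.
exports_1987, exports_2006 = 8.5, 70.0
imports_1987, imports_2006 = 7.0, 46.0
years = 2006 - 1987

print(f"Exports, 1987-2006: {average_annual_growth(exports_1987, exports_2006, years):.1%} per year (nominal)")
print(f"Imports, 1987-2006: {average_annual_growth(imports_1987, imports_2006, years):.1%} per year (nominal)")
```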
As part of this shift, the EU’s share of Iran’s imports from the world declined from 50.5 percent in 1994 to slightly over one-third of Iran’s imports in 2006, while Asian countries’ share has tripled, from 9 percent to 27 percent.

Other countries’ exports to Iran include dual-use or sensitive goods, such as arms, aircraft, and nuclear equipment and technology, goods that the United States statutorily prohibits from export to Iran. For example, according to UN trade data, Russia and Spain exported $28.9 million of nuclear reactor parts from 2004 to 2005, over 89 percent from Russia. Iran also acquired spare parts for U.S.-made fighter jets, parts that were sold to other countries as surplus. According to State officials, Chinese entities supply certain dual-use items to Iran, including some that U.S. officials believe could be used in support of Iran’s WMD, ballistic and cruise missile, or advanced conventional weapons programs. According to a CRS report and testimony from U.S. intelligence agencies, Iran is becoming self-sufficient in the production of ballistic missiles, largely with foreign help. Iran is also an important customer for Russia’s weapons and civil nuclear technology. Additional information detailing Iran’s purchases of weapons and nuclear technology is classified.

Demand for Iranian crude oil, coupled with high oil prices, helps support Iran’s economy and limits the effects of the U.S. trade ban. Iran is a prominent world oil producer, and its economy relies heavily on oil export revenues. Iran ranked fourth in terms of world oil production and exports in 2005, exporting about 2.6 million barrels of oil per day. Iran has the third largest proven oil reserves and the second largest reserves of natural gas worldwide, according to the Oil and Gas Journal. Oil export revenues represent nearly 80 percent of Iran’s total merchandise export earnings and accounted for about 19 percent of Iran’s GDP in 2004. In 2005, Japan and China accounted for 27 and 14 percent of Iran’s crude oil exports, respectively, as shown in table 4.

Given the strong demand for Iranian crude oil, bolstered by continuing support for Iran’s non-oil exports, several private sector and U.S. economic experts stated that Iran’s near-term growth prospects look favorable. However, a sharp drop in oil prices is a risk, and, according to the IMF, a further escalation of tensions associated with nuclear issues would adversely affect investment and growth. Another concern is Iran’s growing gasoline consumption, which is heavily subsidized by the government. According to the Department of Energy, Iran is the second largest importer of gasoline in the world after the United States and has a shortage of refining capacity to produce gasoline. In 2006, as part of the Iranian government’s effort to reduce the subsidy on gasoline, the government raised the price of gasoline 25 percent and introduced “smart cards” in an effort to deter gas smuggling, reduce gasoline shortages, and improve the budget situation.

In addition, according to economic experts, Iran has benefited from strong growth in non-oil exports in recent years. Non-oil exports increase the resiliency of Iran’s economy and mitigate its vulnerability to falling oil prices, as well as provide jobs. As part of the government’s policy to move away from crude oil exports, Iran is expanding its petrochemical production capacity and moving toward export of petrochemical products.
Multilateral efforts targeting Iran resulted in the imposition of UN sanctions in 2006 as a result of concerns that Iran’s nuclear program might contain a weapons-related component. In July 2006, UNSC resolution 1696 demanded that Iran suspend its uranium enrichment program by August 2006 or face possible sanctions. Iran did not suspend these activities, and in December 2006, the UNSC unanimously approved UNSC resolution 1737. This resolution prohibits UN member states from supplying Iran with the nuclear and missile-related materials or technology specified in the resolution, as well as any other items that would contribute to proliferation-sensitive nuclear activities or the development of nuclear weapon delivery systems. In addition, UN member states are required to freeze the financial assets and other economic resources of individuals and entities designated by the UNSC as having ties to Iran’s nuclear or ballistic missile programs. Further, the resolution provides for a ban on the provision of financial services related to the supply, sale, manufacture, transfer, or use of prohibited items specified in the resolution. Iran was required to suspend its enrichment-related, reprocessing, and heavy water-related activities and cooperate fully with the IAEA by February 2007 or face possible additional sanctions.

The UNSC imposed further sanctions on Iran after the IAEA found that it did not suspend its enrichment or heavy water-related activities. In March 2007, the UNSC passed resolution 1747, which banned arms exports from Iran; called upon all UN member states to exercise restraint in sales to Iran of certain categories of heavy conventional arms; designated additional individuals and entities, including Bank Sepah and those affiliated with the IRGC, as subject to the asset freeze requirement; and urged UN member states and international financial institutions not to enter into new commitments for financial assistance to the government of Iran, except for humanitarian and developmental purposes. Resolution 1747 reaffirmed Iran’s obligation to suspend its enrichment, reprocessing, and heavy water-related activities and affirmed UNSC intentions to consider additional sanctions should Iran fail to comply by May 2007. The IAEA Director General confirmed Iran’s failure to comply in the agency’s May 2007 report. State officials noted that this report triggered ongoing consultations among six countries regarding next steps, including the possible adoption of additional sanctions.

UNSC resolution 1737 established a sanctions committee charged with monitoring implementation by UN member states of the measures imposed under the resolution, including by reviewing required country compliance reports. The State Department reported that, as of August 2007, the UNSC 1737 Sanctions Committee had received reports from 82 UN member countries (43 percent) on resolution 1737 and reports from 64 UN member countries (33 percent) on resolution 1747. U.S. officials have stated that UN sanctions enhance the international credibility of U.S. sanctions and provide leverage to increase pressure on Iran. State officials have noted that multilateral sanctions enhance the potential effectiveness of U.S. sanctions. Because UN sanctions have been in place for only about a year, it is difficult to assess their impact.

For the past 20 years, U.S. sanctions against Iran have been an important element of U.S. policy to deter Iran from weapons proliferation and support for terrorism.
Congress is considering additional sanctions targeting Iran. UN sanctions may also play an important role in pressuring Iran, but these sanctions have not yet been fully implemented. However, the overall impact of sanctions, and the extent to which these sanctions further U.S. objectives, is unclear. Some evidence, such as foreign firms signing contracts to invest in Iran’s energy sector and Iran’s continued proliferation efforts, raises questions about the extent of the sanctions’ impact. Moreover, U.S. agencies do not systematically collect information on the direct results of the multiple sanctions they implement, or their data do not provide specific information on Iran sanctions. These agencies have not conducted a baseline assessment of the impact of the sanctions. Collecting data on the results of multiple sanctions against Iran and conducting an overall baseline assessment is challenging, given all the agencies involved and the complexities of collecting some of the necessary information. However, without an overall assessment of the sanctions’ impact and subsequent reviews on a periodic basis, the Congress and the Administration will continue to lack important information for developing effective strategies to influence Iran’s behavior.

Congress and the Administration need a better understanding of the impact of U.S. sanctions against Iran and the extent to which sanctions are achieving U.S. foreign policy objectives. The Administration needs to take a series of actions to first improve the disparate data collected on Iran sanctions and then establish baseline information for the continuous monitoring and periodic reporting on what U.S. sanctions have achieved. Accordingly, we recommend that Congress consider requiring the NSC, in collaboration with the Departments of State, the Treasury, Energy, and Commerce; the intelligence community; and U.S. enforcement agencies to take the following actions:

(1) collect, analyze, and improve data on U.S. agencies’ actions to enforce sanctions against Iran and complete an overall baseline assessment of the impact and use of U.S. sanctions, including factors that impair or strengthen them. This assessment should collect information, to the extent feasible, from various U.S. agencies and consider factors such as, but not limited to, the following: the number of goods seized, penalties imposed, and convictions obtained under the trade ban (Homeland Security, Treasury, Commerce, Justice); sensitive items diverted to Iran through transshipment points (Commerce and the intelligence community); the extent to which repeat foreign violators of Iran-specific sanctions laws have ended their sales of sensitive items to Iran (State and the intelligence community); the amount of assets frozen resulting from financial sanctions (Treasury and the intelligence community); and the extent of delays in foreign investment in Iran’s energy sector (State, Energy, and the intelligence community).

(2) develop a framework for assessing the ongoing impact of U.S. sanctions, taking into account any data gaps that were identified as part of the baseline assessment, and the contribution of multilateral sanctions.

(3) report periodically to the Congress on the impact of sanctions against Iran in achieving U.S. foreign policy objectives.

We provided a draft of this report to the Departments of State, the Treasury, Commerce, Defense, Energy, Justice, and Homeland Security.
We also provided a draft to the NSC and the Office of the Director of National Intelligence (ODNI). The Department of the Treasury provided a formal response emphasizing that, as a result of financial pressure, Iran is experiencing increasing isolation from the global community. The department’s response also states that Iran continues to pursue nuclear capabilities and ballistic missile technology and to fund terrorism. This comment reinforces our finding that the overall impact of sanctions is unclear. In addition, the Treasury noted its assessments of the effectiveness of financial sanctions. We revised the report to recognize that Treasury assesses the impact of financial sanctions but maintain that an overall impact assessment of all U.S. sanctions has not been undertaken. Finally, Treasury commented that the amount of assets blocked under available financial sanctions is not a measure of the program’s value. The department also noted that other sanction effects, such as the inability of designated parties to use the U.S. financial system or the reputational harm that stems from a designation, can often be the primary way sanctions have an international impact. We have noted the broad positive benefits of sanctions in our report. Treasury also told us in an earlier communication that it did not disagree with the part of our Matter for Congressional Consideration calling for an assessment of assets frozen using these financial tools. Treasury’s letter can be found in appendix V. The Departments of State, the Treasury, Commerce, and Energy provided written technical comments. We incorporated these comments into the report as appropriate. The Department of Commerce submitted its technical comments in a letter that is included in appendix VI. The NSC provided brief oral comments and ODNI provided a classified response; we considered this information and revised the report as appropriate. The Departments of Defense, Justice, and Homeland Security provided no comments on the draft report, though Homeland Security supported the part of our Matter for Congressional Consideration that specifically involves the department. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to other congressional offices as well as the Departments of State, the Treasury, Commerce, Defense, Energy, Justice, and Homeland Security. Further, we will provide copies to the NSC and the Office of the Director of National Intelligence. We will also make copies available to others on request. In addition, this report will be available on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979 or at christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and major contributors are listed in appendix VII. The Ranking Member of the House Subcommittee on National Security and Foreign Affairs of the Committee on Oversight and Governmental Reform requested that we review U.S. sanctions involving Iran. This report addresses (1) U.S. sanctions targeting Iran and their implementation, (2) the reported impact of the sanctions, and (3) factors that limit the ability of U.S. sanctions to reduce Iran’s proliferation and terrorism-related activities. To identify U.S. sanctions targeting Iran and determine the U.S. 
efforts to implement and assess sanctions against Iran, we first identified, reviewed, and summarized U.S. executive orders and laws that establish sanctions and are targeted at Iran. While we focused on Iran-specific sanctions, we also reviewed targeted financial sanctions that address proliferation and terrorism concerns and can be used against any party, including Iran. In addition, we discussed the sanctions with officials from the Departments of State, Treasury, Commerce, Defense, Energy, Homeland Security (DHS), and Justice, as well as the Central Intelligence Agency. We submitted several requests for specific data to help illustrate U.S. trade ban implementation and enforcement efforts; however, in many cases agencies were not able to fully answer our requests. Due to limitations in how agencies collect and organize their information, we were unable to collect complete data on export licenses issued by Treasury; Customs and Border Protection seizures; Federal Bureau of Investigation (FBI) or Immigration and Customs Enforcement investigations; or Justice criminal convictions related to sanctions against Iran. We could not compile comprehensive data on the number of ongoing FBI investigations because the FBI considers such data sensitive. We were able to collect data on the extent of civil penalties imposed by Treasury, which we assessed to be sufficiently reliable for our purposes of showing the number of Iran-specific sanction violations since 2003. We were also able to collect data on the number of post-shipment verification checks conducted by Commerce in the past 5 years, which GAO has previously assessed as reliable. To determine the use of Iran-specific laws to impose sanctions, we reviewed and compiled publicly available information on the Department of State's Web site (www.state.gov/t/isn/c15231.htm), relevant Federal Register notices, and additional information that was declassified. We determined that such data are sufficiently reliable for our purposes. State officials explained that they do not collect data on direct sanction results, emphasizing that such data fall within the purview of the intelligence community. Regarding the targeted financial sanctions, we were able to collect data on Iran-related designations made under the nonproliferation sanctions, which we determined to be sufficiently reliable. However, Treasury could not provide data on designations made under the antiterrorism sanctions or specify the amount of assets frozen under either set of financial sanctions. To obtain U.S. government views on the impact of sanctions on Iran, we collected publicly available testimonies, speeches, and other remarks made by U.S. officials from the Departments of State, Commerce, Treasury, and DHS from March 2006 through April 2007. We reviewed these documents for statements regarding the U.S. government's position on the impacts of sanctions on Iran, factors that might lessen their impact, UN sanctions, and other issues identified as key to the U.S. foreign policy strategy for Iran. We also interviewed U.S. officials as well as a judgmentally selected group of experts from think tanks and universities and reviewed numerous scholarly articles and testimonies to gain additional perspectives on the impact of sanctions on Iran. After reviewing the literature on Iran sanctions and conducting a Web-based search of universities and other institutions with research projects or issue areas focusing on U.S.
policies toward Iran, we identified a large field of experts. To balance our selection of experts to interview, we identified the institutions with which they were affiliated. These 39 institutions represented a wide variety of perspectives on U.S. foreign policy and, within them, we identified 56 scholars who have written papers and given presentations on Iran sanctions. We then selected six prominent scholars, each from an institution having a different political perspective and with multiple publications on Iran sanctions. After reviewing their publications and speeches, we interviewed them on a set of questions concerning the impact of unilateral and UN sanctions against Iran, factors that might hinder impact, and other issues identified as key to U.S. foreign policy strategy for Iran. To obtain information on the impact of sanctions in deterring investment in Iran's energy sector, FACTS Global Energy provided us with a list of recent major agreements between Iran and foreign firms and governments. FACTS Global Energy explained, in response to our questions concerning its methodology and the value of the contracts, that these are publicly reported figures, though the actual worth of the contracts may be slightly higher or lower. While FACTS reports that some contracts are legally binding, Iran has been involved in several instances in which these contracts have not been fulfilled. We also substantiated many of these reported agreements based on our review of a variety of sources, including expert reports (Congressional Research Service, Economist Intelligence Unit, Global Insight, Energy Information Administration); scholarly articles; testimony of senior U.S. officials; and other experts. Based on our interviews and checks, we determined the data were sufficiently reliable for the purposes of indicating the estimated value of publicly announced binding contracts between foreign companies and Iran. To determine the major factors that affect U.S. sanctions' ability to influence Iran's behavior, we reviewed numerous scholarly articles, professional economists' publications, official U.S. documents, and testimonies of officials and experts. In addition, we read open-source documents, including newspaper and journal articles, both national and global. We also interviewed a selected group of experts on Iran, met with agency officials, and attended conferences on the subject. In addition, we collected and analyzed data from several widely used databases of international trade statistics, including the International Monetary Fund (IMF) Direction of Trade Statistics and International Financial Statistics National Income database, the UN trade database, U.S. Department of Commerce Trade Statistics, Department of Energy statistics, and United Nations Conference on Trade and Development Foreign Direct Investment statistics. We also reviewed and analyzed proprietary private sector data from an internationally recognized consultant on Iran's energy sector. We have determined that these data are sufficiently reliable for the purposes for which they were used in this report. To determine the effect of U.S. sanctions on U.S. trade with Iran, we used 1986 to 2006 U.S. trade statistics from the Department of Commerce, Bureau of the Census Web-based database. We converted these data from nominal dollars to 2006 dollars using Department of Commerce, Bureau of Economic Analysis U.S. export and import commodity price deflators from the online database.
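The deflation step described above is a simple rescaling of nominal values by a price index. The sketch below illustrates the arithmetic under stated assumptions; the deflator and trade values are hypothetical placeholders rather than actual Bureau of Economic Analysis or Census figures, and the code is only a minimal illustration, not part of GAO's methodology.

```python
# Minimal illustration (not GAO's actual data or analysis) of converting a
# nominal dollar value to constant 2006 dollars with a price deflator indexed
# to 2006 = 100. The deflator and trade values below are hypothetical.

nominal_exports_millions = {1995: 277.0, 2000: 168.0, 2006: 93.0}  # hypothetical nominal values
export_price_deflator = {1995: 78.5, 2000: 85.2, 2006: 100.0}      # hypothetical index, 2006 = 100

def to_constant_2006_dollars(nominal: float, deflator: float) -> float:
    """Rescale a nominal dollar value to constant 2006 dollars."""
    return nominal * (100.0 / deflator)

for year in sorted(nominal_exports_millions):
    real_value = to_constant_2006_dollars(nominal_exports_millions[year], export_price_deflator[year])
    print(f"{year}: {real_value:,.1f} million in 2006 dollars")
```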
We also analyzed these data at the 2-digit commodity level to determine what goods the United States exported to and imported from Iran in various years and the relative importance of U.S. trade to Iran for various years encompassing the imposition of the trade bans. We used IMF Direction of Trade Statistics (May 2007 CD-ROM) to analyze trends in Iran's trade, exports and imports, as well as Iran's trade with the world by major country groupings and individual partners, from 1986 to 2006. We determined that the U.S. commodity price deflators noted above were not appropriate deflators for the purpose of analyzing Iran's global trade. Thus, we converted the annual export and import data, which IMF reports in U.S. dollars, into 2006 dollars using the following methodology. We converted annual dollar trade flows to Iranian rials using an exchange rate conversion factor from the World Bank's World Development Indicators Online. This conversion factor, known as the DEC alternative conversion factor, is, as a rule, the official exchange rate reported in the IMF's International Financial Statistics. This alternative conversion factor differs from the official rate when the official exchange rate is judged to diverge by an exceptionally large margin from the rate actually applied in international transactions. In such cases, the World Bank employs a method known as the Atlas method to average exchange rates for a given year and the two preceding years, adjusted for differences in rates of inflation between the country and a specified group of major trading countries. For 1991 and 1992, for which the World Bank does not publish a DEC alternative conversion factor for Iran, we constructed conversion rates by applying rates of change exhibited in a purchasing power parity conversion rate for Iran, from the World Development Indicators Online, from 1990 to 1993. As Iran does not publish separate price indices for exports and imports, in their place we used the Iranian gross domestic product (GDP) deflator from the World Bank's World Development Indicators Online to convert trade flows into 2006 rials. We then used the official 2006 exchange rate (which happens to be the same as the DEC conversion factor) to express these trade flows in constant 2006 dollars. This methodology preserves the real growth rates computed in real Iranian rials. Thus, it reflects how Iran may view its global trade when adjusted for exchange rate anomalies and price inflation. We obtained general information on other countries' trade in sensitive goods (arms, aircraft, and nuclear equipment and technology) from publicly available official sources, including State Department reports and testimonies, Department of Justice data, the unclassified National Intelligence Estimate, and CRS reports and testimonies. To identify countries and the value of their exports to Iran of possibly sensitive items, we used the global shipping company DHL's online interactive product classification tool to identify Harmonized System (HS) trade codes in the export control category 0: Nuclear materials, facilities, equipment and miscellaneous items. We then used the UN trade database to identify countries and their reported value of exports to Iran for these items. The Department of Energy's Energy Information Administration (EIA) provided data on Iran's position in world oil and gas reserves and production, gasoline consumption, and export earnings.
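To make the three-step rial-based conversion described above concrete, the sketch below walks through it with hypothetical figures. The exchange rates, deflator values, and trade flows are illustrative assumptions, not actual World Bank or IMF data, and the code is a minimal sketch of the arithmetic rather than the analysis GAO performed.

```python
# Minimal illustration of the three-step conversion described above, using
# hypothetical figures: (1) convert each year's dollar trade flow to rials at
# that year's conversion factor, (2) deflate to constant 2006 rials with
# Iran's GDP deflator, and (3) convert back to dollars at the 2006 rate.

trade_flows_usd = {1995: 14_000.0, 2000: 28_000.0, 2006: 63_000.0}   # hypothetical, millions of current dollars
rials_per_dollar = {1995: 1_750.0, 2000: 5_900.0, 2006: 9_200.0}     # hypothetical conversion factors
iran_gdp_deflator = {1995: 18.0, 2000: 42.0, 2006: 100.0}            # hypothetical index, 2006 = 100

def to_2006_dollars(year: int) -> float:
    rials_current = trade_flows_usd[year] * rials_per_dollar[year]                     # step 1: current rials
    rials_2006 = rials_current * (iran_gdp_deflator[2006] / iran_gdp_deflator[year])   # step 2: constant 2006 rials
    return rials_2006 / rials_per_dollar[2006]                                         # step 3: constant 2006 dollars

for year in sorted(trade_flows_usd):
    print(f"{year}: {to_2006_dollars(year):,.0f} million in 2006 dollars")
```

Because the deflation is done in rials and only the endpoints are converted at exchange rates, the real growth rates implied by the rial series carry over to the constant-dollar series, which is the property the methodology relies on.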
We calculated Iran’s oil export revenue as a percent of Iran’s GDP using reporting countries’ crude oil import statistics from Iran and GDP data from the most currently available IMF International Financial Statistics CD ROM (December 2006). To determine top Iranian crude oil export destinations and respective country shares, we used UN trade statistics at the 2-digit commodity level (HS2709), for the period 1989 to 2005, and ranked countries by dollar value and country share of crude oil exports from Iran. For the top recipients of Iran’s crude oil, we also calculated each country’s crude oil imports from Iran as a percent of that country’s total crude oil imports to demonstrate the relative importance of Iranian crude oil to these countries. We also used 2-digit commodity level (HS2710 and HS2711) UN trade statistics to determine the major suppliers of refined petroleum products to Iran. We based our assessment of Iran’s near-term growth prospects on a review of economists’ reports on Iran, including IMF’s 2007 Article IV consultation with Iran and country reports on Iran from Economists Intelligence Unit and Global Insight. We also utilized proprietary information obtained from FACTS Global Energy regarding current developments in Iran’s energy sector. We supplemented our review with reports on Iran from other official sources, including CRS and the Department of Energy’s EIA. To determine the development and current status of Iran’s nuclear program, we reviewed documents from the International Atomic Energy Agency, an independent agency affiliated with the United Nations. We also reviewed reports by the CRS specific to Iran’s nuclear program and proliferation concerns. Finally, we reviewed the November 2007 unclassified National Intelligence Estimate on Iran. We also reviewed State and other documents to examine Iran’s broad proliferation efforts. To identify continued behavior by the government of Iran that establishes continued support for terrorism, we reviewed the Department of State’s 2006 Country Report on Terrorism, other unclassified documentation (such as Department of State testimonies and CRS reports) as well as classified information. To trace the development of UN sanctions against Iran for its efforts to enrich uranium and possibly develop nuclear weapon capability, we reviewed UN Security Council (UNSC) resolutions 1696 (2006), 1737 (2006), and 1747 (2007) and reports and documents from the UNSC 1737 Sanctions Committee. We also reviewed documentation from the Department of State and CRS. The State Department’s Bureau for International Organization Affairs declined to meet with us, which precluded direct contact with the United Nations. The Bureau stated that negotiations in the UNSC were ongoing at the time. We conducted our review from November 2006 to November 2007 in accordance with generally accepted government auditing standards. The following table illustrates various major agreements between Iran and foreign firms and governments in Iran’s energy sector. The table is not intended to imply a complete or thorough listing of foreign deals. Because several of these deals are in progress, we are making the conservative assumption that these agreements, at a minimum, express commercial interest between Iran and the foreign party to trade, finance or underwrite a project in Iran’s energy sector. The following are GAO’s comments on the Department of the Treasury’s letter dated December 6, 2007. 1. 
GAO has acknowledged Treasury’s efforts to identify the impact of financial sanctions as appropriate in the report. While Treasury assesses such impact, we maintain that a larger impact assessment of all U.S. sanctions has never been undertaken. 2. Our report acknowledges various broad positive impacts of sanctions. 3. Treasury’s letter included an attachment with numerous technical comments that we incorporated into the report as appropriate. The following are GAO’s comments on the Department of Commerce’s letter dated November 1, 2007. 1. We reviewed Commerce’s classified technical comment and considered it in revising our report. 2. We incorporated this information into the report. In addition to the person named above, Tet Miyabara, Assistant Director; Kathryn Bernet; Lynn Cothern; Aniruddha Dasgupta; Martin De Alteriis; Leslie Holen; Bruce Kutnick; Grace Lui; Roberta Steinman; Anne Stevens; and Eddie Uyekawa made key contributions to this report. | The 2006 U.S. National Security Strategy stated that the United States faces challenges from Iran, including Iran's proliferation efforts and involvement in international terrorism. To address these concerns, the United States employs a range of tools, including diplomatic pressure, a military presence in the Gulf, and sanctions. A U.S. sanction is a unilateral restriction or condition on economic activity imposed by the United States for reasons of foreign policy or national security. We were asked to review (1) U.S. sanctions targeting Iran and their implementation, (2) reported sanction impacts, and (3) factors limiting sanctions. To conduct the review, we assessed trade and sanction data, information on Iran's economy and energy sector, and U.S. and international reports on Iran, and discussed sanctions with U.S. officials and Iran experts. Since 1987, U.S. agencies have implemented numerous sanctions against Iran. First, Treasury oversees a ban on U.S. trade and investment with Iran and filed over 94 civil penalty cases between 2003 and 2007 against companies violating the prohibition. This ban may be circumvented by shipping U.S. goods to Iran through other countries. Second, State administers laws that sanction foreign parties engaging in proliferation or terrorism-related activities with Iran. Under one law, State has imposed sanctions in 111 instances against Chinese, North Korean, Syrian, and Russian entities. Third, Treasury or State can use financial sanctions to freeze the assets of targeted parties and reduce their access to the U.S. financial system. U.S. officials report that U.S. sanctions have slowed foreign investment in Iran's petroleum sector, denied parties involved in Iran's proliferation and terrorism activities access to the U.S. financial system, and provided a clear statement of U.S. concerns to the rest of the world. However, other evidence raises questions about the extent of reported impacts. Since 2003, the Iranian government has signed contracts reported at about $20 billion with foreign firms to develop its energy resources. Further, sanctioned Iranian banks may fund their activities in currencies other than the dollar. Moreover, while Iran halted its nuclear weapons program in 2003, according to the November 2007 National Intelligence Estimate, it continues to enrich uranium, acquire advanced weapons technology, and support terrorism. Finally, U.S. agencies do not systematically collect or analyze data demonstrating the overall impact and results of their sanctioning and enforcement actions. 
Iran's global trade ties and leading role in energy production make it difficult for the United States to isolate Iran and pressure it to reduce proliferation and support for terrorism. For example, Iran's overall trade with the world has grown since the United States imposed sanctions, although this trade has fluctuated. Imports rose sharply following the Iran-Iraq war in 1988 and then declined until 1995; most export growth followed the rise in oil prices beginning in 2002. This trade included imports of weapons and nuclear technology. However, multilateral UN sanctions did not begin until December 2006.
The contracting processes, activities, and challenges associated with rebuilding Iraq can be viewed as similar to, albeit more complicated than, those DOD normally confronts. We and others have already reported on the large and continuing drain on reconstruction dollars to meet unanticipated security needs. Further, multiple players with diffuse and changing responsibilities have had large roles in rebuilding Iraq, complicating lines of authority and accountability. Additionally, rebuilding a nation after decades of neglect and multiple wars is an inherently complex, challenging, and costly undertaking. From May 2003 through June 2004, the Coalition Provisional Authority (CPA), led by the United States and the United Kingdom, was the United Nations recognized authority responsible for the temporary governance of Iraq and for overseeing, directing, and coordinating reconstruction efforts. During 2003, several agencies, most notably the U.S. Agency for International Development (USAID) and the U.S. Army Corps of Engineers, played a role in awarding and managing initial reconstruction contracts. To coordinate and manage the $18.4 billion in reconstruction funding provided in fiscal year 2004, the CPA established a multi-tiered contracting approach for Iraq reconstruction activities. The CPA, through various military organizations, awarded the following contracts: 1 program management support contract to oversee reconstruction efforts; 6 sector program management contracts to coordinate reconstruction efforts specific to each sector; and 12 design-build contracts to execute specific construction tasks. DOD is now emphasizing greater use of local Iraqi firms to perform reconstruction work that was previously intended to be performed by the design-build contractors. With the establishment of Iraq’s interim government in June 2004, the CPA’s responsibilities were transferred to the Iraqi government or to U.S. agencies. The Department of State is now responsible for overseeing U.S. efforts to rebuild Iraq. The Project and Contracting Office (PCO), a temporary DOD organization, was tasked with providing acquisition and project management support. In December 2005, DOD merged the PCO with the U.S. Army Corps of Engineers Gulf Region Division, which now supervises DOD reconstruction activities in Iraq. Additionally, the State Department’s Iraq Reconstruction and Management Office is responsible for strategic planning and for prioritizing requirements, monitoring spending, and coordinating with the military commander. USAID continues to award its own contracts, which are generally associated with economic assistance, education and governance, and certain infrastructure projects. The United States has made some progress in restoring Iraq’s essential services, but as of August 2006, such efforts generally have not met prewar production levels or U.S. goals. Reconstruction activities have focused on restoring essential services, such as refurbishing and repairing oil facilities, increasing electrical generating capacity, and restoring water treatment plants. About one-third of DOD’s construction work remains, and DOD estimates that some work is not planned for completion until late 2008. Continued violence, however, may make it difficult for the United States to achieve its goals. For August 2006, the U.S. embassy reported that the oil, electricity, and water sectors generally performed below the planned U.S. goals. 
Specifically, crude oil production capacity was reported as about 2.4 million barrels per day (mbpd), below the prewar level of 2.6 mbpd and the desired goal of 3 mbpd. In the electricity sector, peak generation capacity was reported at 4,855 megawatts, above the prewar level of 4,300 megawatts, but below the U.S. goal of 6,000 megawatts. Further, the current demand for power continues to outstrip the available supply of electricity as more Iraqis purchase consumer items and devices requiring electricity to operate. In the water sector, new or restored treatment capacity was reported at about 1.44 million cubic meters per day, compared to the U.S. goal of 2.4 million cubic meters. According to senior CPA and State officials responsible for the U.S. strategy, the CPA's 2003 reconstruction plan assumed that (1) creating or restoring basic essential services for the Iraqi people took priority over job creation and the economy and (2) the United States should focus on long-term infrastructure projects because of the expertise the United States could provide. Further, the strategy assumed that reconstruction efforts would take place in a relatively benign environment. The difficult security environment and persistent attacks on U.S.-funded infrastructure, among other challenges, contributed to project delays, increased costs, and cancellations or reductions in the scope of some reconstruction projects. As we reported on September 11, 2006, overall security conditions have grown more complex, as evidenced by increased numbers of attacks and Sunni/Shi'a sectarian strife. The continuing violence may make it difficult for the United States to achieve its goals. The contracting challenges encountered in Iraq are emblematic of systemic issues faced by DOD. A fundamental prerequisite to having good outcomes is a match between well-defined requirements and available resources. At the sector, program, and project levels, the failure to define realistic requirements has had a cascading effect on contracts and made it difficult to take subsequent steps necessary to get to successful outcomes. For example, in the absence of settled requirements, DOD has sometimes relied on what are known as undefinitized contract actions, which were used extensively in Iraq and can leave the government exposed to increased costs. Managing risks when requirements are in flux requires effective oversight, but DOD lacked the capacity to provide a sufficient acquisition workforce, thereby hindering oversight efforts. In Iraq, as elsewhere, we found instances in which DOD improperly used interagency contracts to meet reconstruction needs. Finally, the underlying market discipline offered by competition can help promote better outcomes, but DOD, like other agencies, was challenged, particularly early on, in its ability to realize the benefits of competition. One or more of these factors can contribute to unsatisfactory outcomes on individual projects; the net effect, however, is that many reconstruction projects did not achieve their intended goals and DOD has incurred unanticipated costs and schedule delays. One of the factors that can contribute to poor DOD acquisition outcomes is the mismatch between wants, needs, affordability, and sustainability. This mismatch was evident in the reconstruction efforts in Iraq. U.S. reconstruction goals were based on assumptions about the money and time needed, which have proven unfounded. U.S.
funding was not meant to rebuild Iraq's entire infrastructure, but rather to lay the groundwork for a longer-term reconstruction effort that anticipated significant assistance from international donors. To provide that foundation, the CPA allocated $18.4 billion in fiscal year 2004 reconstruction funds among various projects in each reconstruction sector, such as oil, electricity, and water and sanitation. As noted by the Special Inspector General, almost immediately after the CPA dissolved, the Department of State initiated an examination of the priorities and programs with the objectives of reprioritizing funding for projects that would not begin until mid- to late 2005 and using those funds to target key high-impact projects. By July 2005, the State Department had conducted a series of funding reallocations to address new priorities, including increasing support for security and law enforcement efforts and oil infrastructure enhancements. One of the consequences of these reallocations was to reduce funding for the water and sanitation sector by about 44 percent, from $4.6 billion to $2.6 billion. One reallocation of $1.9 billion in September 2004 led the PCO to cancel some projects, most of which were planned to start in mid-2005. Changes, even those made for good reasons, make it more difficult to manage individual projects to successful outcomes. Further, such changes invariably have a cascading effect on individual contracts. To produce desired outcomes within available funding and required time frames, DOD and its contractors need to have a clear understanding of reconstruction objectives and how they translate into the terms and conditions of a contract: what goods or services are needed, when they are needed, the level of performance or quality desired, and what the cost will be. When such requirements were not clear, DOD often entered into contract arrangements on reconstruction efforts that posed additional risks. For example, in June 2004, we reported that, faced with uncertainty as to the full extent of the rebuilding effort, DOD often authorized contractors to begin work before key terms and conditions, including the work to be performed and its projected costs, were fully defined. The use of undefinitized contract actions, while allowing needed work to begin quickly, can result in additional costs and risks to the government. We found that as of March 2004, about $1.8 billion had been obligated on reconstruction contract actions without DOD and the contractors reaching agreement on the final scope and price of the work. In one case, we found a contract action that had been modified nine times between March and September 2003, increasing estimated costs from $858,503 to about $204.1 million without DOD and the contractor reaching agreement on the scope of work or final price. In September 2005, we reported that difficulties in defining the cost, schedule, and work to be performed associated with projects in the water sector contributed to project delays and reduced scopes of work. We reported that DOD had obligated about $873 million on 24 task orders to rebuild Iraq's water and sanitation infrastructure, including municipal water supplies, sewage collection systems, dams, and a major irrigation project. We found, however, that agreement between the government and the contractors on the final cost, schedule, and scope of 18 of the 24 task orders we reviewed had been delayed. These delays occurred, in part, because Iraqi authorities, U.S.
agencies, and contractors could not agree on scopes of work and construction details. For example, at one wastewater project, local officials wanted a certain type of sewer design that increased that project's cost. Earlier this week, we issued a report on how DOD addressed issues raised by the Defense Contract Audit Agency (DCAA) in audits of Iraq-related contract costs. We again noted that DOD frequently authorized contractors to begin work before reaching agreement on the scope or price of the work. In such cases, we found that DOD contracting officials were less likely to remove costs questioned by DCAA from a contractor's proposal when the contractor had already incurred these costs. For example, of the 18 audit reports we reviewed, DCAA issued 11 reports on contract actions where more than 180 days had elapsed between the beginning of the period of performance and final negotiations. For 9 of these audits, the period of performance DOD initially authorized for each contract action concluded before final negotiations took place. In one case, DCAA questioned $84 million in its audit of a task order proposal for an oil mission. In this case, the contractor did not submit a proposal until a year after the work was authorized, and DOD and the contractor did not negotiate the final terms of the task order until more than a year after the contractor had completed work (see fig. 1). In the final negotiation documentation, the DOD contracting official stated that the payment of incurred costs is required for cost-type contracts, absent unusual circumstances. In contrast, in the few audit reports we reviewed where the government negotiated prior to starting work, we found that the portion of questioned costs removed from the proposal was substantial. Instability—such as when wants, needs, and contract requirements are in a state of flux—requires greater attention to oversight, which in turn relies on a capable government workforce. Managing the attendant risks in unstable situations grows in both importance and difficulty. Unfortunately, attention to oversight and a capable government workforce have not always been evident during the reconstruction effort. Such workforce challenges are not unique to Iraq. DOD's civilian workforce shrank by about 38 percent between fiscal years 1989 and 2002, but DOD performed this downsizing without ensuring that remaining staff had the specific skills and competencies needed to accomplish future DOD missions. In other cases, contractors have taken over support positions that were traditionally filled by government personnel. For example, a contractor began providing intelligence support to the Army in Germany in 1999 and deployed with the Army to Iraq in 2003. The Army, however, found itself unprepared for the volume of Iraqi detainees and the need for interrogation and other intelligence and logistics services. We and others have reported on the impact of the lack of adequate acquisition personnel and high turnover rates on reconstruction efforts. For example, among the lessons learned identified by the Special Inspector General was that one of the CPA's critical personnel shortcomings was the inadequate link between position requirements and necessary skills. In this case, gaps existed in the experience levels of those hired, as well as in the quality and depth of their experiences relative to their assigned jobs. Similarly, in January 2004, an interagency assessment team was sent to Iraq to review the CPA's contracting capability.
The team found that existing contracting personnel were insufficient to handle the increased workload that was expected with the influx of fiscal year 2004 reconstruction funding and that the CPA needed more individuals with acquisition expertise who could help the programmatic side of the operation. In part, the CPA's decision to award seven contracts in early 2004 to help better coordinate and manage the fiscal year 2004 reconstruction efforts was in recognition of this shortfall. As a result, DOD finds itself in the position of relying on contractors to help manage and oversee the work of other contractors. At the contract level, having personnel who are trained to conduct oversight, assigned at or prior to contract award, and held accountable for their oversight responsibilities is essential for effective oversight. Our work has shown that if oversight is not conducted, is insufficient, or is not well documented, DOD and other reconstruction agencies risk not identifying and correcting poor contractor performance in a timely manner and paying contractors more than the value of the services they perform. For example, our June 2004 report found that early contract administration challenges were caused, in part, by the lack of sufficient personnel. We found that, due to the lack of government personnel to provide oversight, one contractor may have purchased $7 million in equipment and services that were not specifically authorized under the contract. Similarly, on another contract, to provide subject matter experts to the CPA and Iraqi ministries, DOD officials stated that some experts failed to report for duty or, when they did, did not perform as expected. DOD officials attributed such performance issues to the lack of personnel to provide oversight when the experts arrived in Iraq. In July 2005, we noted that USAID obligated an additional $33 million on one of its contracts to pay for unanticipated increases in security costs, which left it short of funds to pay for construction oversight and quality assurance efforts, as well as to fund administrative costs. Our September 2005 report on water and sanitation efforts found that frequent staff turnover affected both the definitization process and the overall pace and cost of reconstruction efforts. For example, new contracting officers had to be brought up to speed and would sometimes ask the contractor to resubmit information in formats different from those previously required. A PCO official also noted that the contracting office in Iraq lacked sufficient staff and equipment and that some of the staff assigned as contracting officers lacked experience with the type of projects the PCO managed. Another area in which workforce shortfalls proved problematic was DOD's use and management of interagency contracting vehicles. We identified management of interagency contracting as a high-risk area in January 2005. In recent years, federal agencies have been making a major shift in the way they procure many goods and services. Rather than developing and awarding their own contracts, agencies are making greater use of contracts already awarded by other agencies, referred to as interagency contracting. This practice offers the benefits of improved efficiency and timeliness. Such contracts, however, need to be effectively managed, and their use demands a higher-than-usual degree of business acumen and flexibility on the part of the acquisition workforce.
Our work and that of some agency inspectors general found instances of improper use of interagency contracting, resulting from increasing demands on the acquisition workforce, insufficient training, inadequate guidance, an inordinate focus on meeting customer demands at the expense of complying with sound contracting policy and required procedures, and the lack of clear lines of responsibility and accountability. During the initial stages of reconstruction, we and the DOD Inspector General found instances in which DOD improperly used interagency contracts for many of the same reasons. For example, in March 2004, the DOD Inspector General reported that a review of 24 contract actions awarded by a DOD component on behalf of the CPA revealed that DOD circumvented contracting rules, including improperly using General Services Administration federal supply schedule contracts and improperly contracting for personal services. The Inspector General attributed this condition to the need to quickly award contracts and to DOD's failure to plan for the acquisition support the CPA needed to perform its mission. In June 2004, we noted that a task order awarded by the Air Force to provide logistical support and equipment to support USAID's mission in Baghdad and at other sites in Iraq was, in part, outside the scope of the contract. The Air Force indicated that it was issuing additional guidance to ensure that future task orders were within the scope of the contract. In April 2005, we reported that a lack of effective management controls—in particular, insufficient management oversight and a lack of adequate training—led to breakdowns in the issuance and administration of task orders for interrogation and other services by the Department of the Interior on behalf of DOD. These breakdowns included issuing 10 out of 11 task orders that were beyond the scope of underlying contracts, in violation of competition rules; not complying with additional DOD competition requirements when issuing task orders for services on existing contracts; not properly justifying the decision to use interagency contracting; not complying with ordering procedures meant to ensure best value for the government; and not adequately monitoring contractor performance. Because officials at Interior and the Army responsible for the orders did not fully carry out their roles and responsibilities, the contractor was allowed to play a role in the procurement process normally performed by the government. Further, the Army officials responsible for overseeing the contractor, for the most part, lacked knowledge of contracting issues and were not aware of their basic duties and responsibilities. Finally, one tool that can help mitigate acquisition risks is to rely on the discipline provided by market forces when contracts are awarded under full and open competition—that is, when all responsible prospective contractors are afforded the opportunity to compete. During the initial stages of reconstruction, we found that agencies were unable to take full advantage of competition, in part because of the relatively short time—often only weeks—to award the first contracts. Our June 2004 report found that agencies generally complied with applicable requirements for competition when awarding new contracts but did not always do so when issuing task orders against existing contracts. We found that 7 of the 11 task orders we reviewed were for work that was, in whole or in part, outside the scope of the existing contracts.
In each of these cases, the out-of-scope work should have been awarded using competitive procedures or supported with a justification and approval for using other than full and open competition in accordance with legal requirements. Given the urgent need for reconstruction efforts, we noted that the authorities under the competition laws provided agencies ample latitude to justify their approach. Such latitude presupposes that the rationale for such actions is valid; if not, then the loss of the benefits from competition cannot be easily justified. For example, in November 2005, we sustained a protest of a sole-source contract awarded by the Air Force in December 2004 for bilingual-bicultural advisers that was placed under an environmental services contract, which, on its face, did not include within its scope the bilingual-bicultural adviser requirement. We concluded that the agency's efforts were so fundamentally flawed as to indicate an unreasonable lack of advance planning. In the same decision, we sustained a protest of a second, follow-on sole-source contract awarded by the Air Force in July 2005 to the same company, in which the justification and approval prepared in support of the contract was premised on the conclusion that the contractor was the only responsible source, yet the capabilities of other firms were not in fact considered. The lack of advance planning, the failure to meaningfully consider other sources, and the attempts to justify the use of sole-source contracts originated, in large part, from the desire and pressure to meet the customer's needs in a short time frame. At the time of our decision, the initial contract was substantially complete, but we recommended that the agency promptly obtain competition for the requirement or prepare a properly documented and supported justification and approval for the second contract. Overall, the Special Inspector General has reported that competition has improved for Iraq reconstruction projects since the early reconstruction efforts. Next month we will issue a congressionally mandated report that will provide an assessment of competition for actions subsequent to our June 2004 report. The reconstruction contracting problems we and others have reported on over the last several years are emblematic of contracting problems we have identified in numerous other situations but with more dramatic consequences for failure, as the nature of the task for the United States is so large and so costly. While some of the factors I discussed today—mismatches between needs, wants, affordability, and sustainability; oversight and workforce challenges; improper use of contracting approaches; and competition issues—were more prevalent in the initial stages of reconstruction, the risks posed by others have not yet been fully mitigated. Understanding not just where we are today, but why, is important to enable DOD to make corrections and prevent repeating mistakes. Just as multiple factors contribute to success or failure, multiple actors play a role in achieving successful acquisition outcomes, including policy makers, program managers, contracting officers, and the contractors themselves. Looking to the future, about one-third of DOD's planned construction work remains to be completed, including some work that is not planned for completion until the end of 2008. It is not too late for DOD to learn from its past difficulties and provide adequate oversight on these remaining projects.
Delivering these projects on time and within cost is essential if we are to maximize the return on this investment, make a difference in the daily lives of the Iraqi people, and help to provide the services they need—safe streets, clean water, reliable electricity, and affordable health care. - - - - - Mr. Chairman and members of the committee, this concludes my prepared statement. I will be happy to answer any questions you may have. In preparing this testimony, we relied primarily on our completed and ongoing reviews of efforts to rebuild Iraq that we have undertaken since 2003, as well as our work related to selected DOD contract management issues. We conducted these reviews in accordance with generally accepted government auditing standards. We also reviewed audit reports and lessons learned reports issued by the Special Inspector General for Iraq Reconstruction and work completed by the Inspector General, Department of Defense. We conducted this work in accordance with generally accepted government auditing standards in September 2006. For questions regarding this testimony, please call Katherine V. Schinasi at (202) 512-4841 or contact her at schinasik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the back page of this statement. Key contributors to this statement were Daniel Chen, Lily Chin, Tim DiNapoli, Kate France, Dave Groves, John Hutton, Chris Kunitz, Steve Lord, Micah McMillan, Kate Monahan, Mary Moutsos, Ken Patton, Jose Ramos, and Bill Woods. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The United States, along with its coalition partners and various international organizations, has undertaken a challenging, complex, and costly effort to stabilize and rebuild Iraq. The Department of Defense (DOD) has responsibility for a significant portion of the reconstruction effort. Amid signs of progress, the coalition faces numerous political, security, and economic challenges in rebuilding Iraq. Within this environment, many reconstruction projects have fallen short of expectations, resulting in increased costs, schedule delays, reduced scopes of work, and in some cases project cancellations. This testimony (1) discusses the overall progress that has been made in rebuilding Iraq and (2) describes challenges faced by DOD in achieving successful outcomes on individual projects. This testimony reflects our reviews of reconstruction and DOD contract management issues, as well as work of the Special Inspector General for Iraq Reconstruction. In our previous reports, we have made several recommendations to improve outcomes in Iraq. DOD generally agreed with our recommendations. Overall, the United States generally has not met its goals for reconstruction activities in Iraq with respect to the oil, electricity, and water sectors. As of August 2006, oil production is below the prewar level, and the restoration of electricity and new or restored water treatment capacity remain below stated goals. One-third of DOD's planned construction work still needs to be completed and some work is not planned for completion until late 2008.
Continuing violence in the region is one of the reasons that DOD is having difficulty achieving its goals. The contracting challenges encountered in Iraq are emblematic of systemic issues faced by DOD. When setting requirements for work to be done, DOD made assumptions about funding and time frames that later proved to be unfounded. The failure to define realistic requirements has had a cascading effect on contracts and has made it difficult to take subsequent steps to get successful outcomes. For example, in the absence of settled requirements, agencies sometimes rely on what are known as undefinitized contract actions, which can leave the government exposed to increased costs. Further, DOD lacked the capacity to provide effective oversight and manage risks. We also found that DOD, at times, improperly used interagency contracts and was not able to take advantage of full and open competition during the initial stages of reconstruction. Just as multiple factors contribute to success or failure, multiple actors play a role in achieving successful acquisition outcomes, including policy makers, program managers, contracting officers, and the contractors themselves. |
The nearly 12,000 federally insured banks and thrifts in the United States, which hold more than $5 trillion in assets, are regulated and supervised by four federal agencies with similar and sometimes overlapping regulatory and supervisory responsibilities. Although many industry representatives, legislators, and regulators have in the past recognized the need for consolidation and modernization of federal bank oversight, major reform proposals changing the structure of bank and thrift oversight have not been adopted. This report was prepared in response to a request from Congressman Charles E. Schumer that we provide information to help evaluate efforts to modernize the U.S. system of financial industry oversight and identify potential avenues for such modernization. Much of the information in this report is based on our studies of the structures and operations of bank regulation and supervision (oversight) activities in Canada, France, Japan, Germany, and the United Kingdom. This report focuses on the oversight of two major categories of depository institutions: commercial banks and thrifts. Commercial banks and thrifts originally served very different purposes and markets. Commercial banks issued debt payable on demand, which was backed by short-term commercial loans. The customers of commercial banks tended to be businesses and wealthy individuals seeking liquid deposit accounts. Savings and loan associations, however, used deposits to fund home mortgages of their members. But, because of the long terms of mortgages, members were restricted in their ability to withdraw their funds. Savings banks were initially designed to provide a range of financial services to the small saver. Their asset portfolios were generally more diversified than those of savings and loan associations to enable them to provide more flexible deposit terms. Despite the historical differences between these institutions, the powers and services of banks and thrifts have converged over time with few practical differences remaining in their authorities, except that these institutions continue to be subject to different regulatory schemes. (See app. I for more information on the history of U.S. bank and thrift oversight.) At the end of 1995, the United States had nearly 12,000 banking institutions. In this report, we refer to commercial banks and thrifts collectively as banking institutions. These institutions held about $5.3 trillion in loans and other assets (see table 1.1). As shown in table 1.1, the 9,941 commercial banks held 81 percent of total bank and thrift assets at the end of 1995. The 2,029 thrifts held 19 percent. Holding companies, which are established for a variety of business, regulatory, and tax reasons, are the dominant form of banking structure in the United States. In fact, 96 percent of the assets of all U.S. commercial banks are in banks that are part of a holding company. As of December 31, 1995, about 6,122 bank holding companies and 895 thrift holding companies were operating in the United States. Of those, 4,494 bank holding companies and 833 thrift holding companies each held only 1 bank or thrift. Holding companies may consist of a parent company, banking subsidiaries, nonbanking subsidiaries, and even other holding companies—each of which may have its own banking or nonbanking subsidiaries. Figure 1.1 is a simplified illustration of a hypothetical holding company with wholly owned banking and nonbanking subsidiaries. 
Parent companies own or control subsidiaries through the ownership of voting stock and generally are "shell" corporations—that is, they do not have operations of their own. Banking subsidiaries are separately chartered banks subject to the same regulation and capital requirements that apply to other banking institutions. Nonbanking subsidiaries are companies that may be engaged in a variety of businesses other than banking; however, any nonbanking activities of a bank holding company subsidiary must be closely related to the business of banking and produce a public benefit. Thrift holding companies may be owned by or own any type of financial services or other business. Many bank holding companies have established nonbank subsidiaries engaged in consumer finance, trust services, leasing, mortgage lending, electronic data processing, insurance underwriting, management consulting services, and securities brokerage services. Holding companies in the United States may also have multiple tiers. For example, as we mentioned above, holding companies may have subsidiary holding companies that have their own banking or nonbanking subsidiaries. Banking subsidiaries may also have their own subsidiaries. However, the activities of these bank subsidiaries are limited to those allowable for their parent institution. The largest holding companies in the United States often have very complex, multitiered structures. Bank and thrift holding companies are particular to the U.S. financial system. In many other countries, nonbanking activities may be conducted either in a bank or in subsidiaries of a bank rather than in subsidiaries of a parent company. The structure of the U.S. banking industry has changed substantially over the past 10 years. The industry is consolidating in response to the removal of legal barriers to geographic expansion, advancing technologies, and the globalization of wholesale banking, among other things. Between 1985 and 1995, the number of banks and thrifts in the United States fell by about 34 percent due to consolidation through mergers as well as bank and thrift failures. The number of banks decreased by 4,476—from 14,417 to 9,941. The number of thrifts decreased by 1,597—from 3,626 to 2,029. Industry consolidation has been characterized by a greater concentration of deposits among the largest banking companies in the country. For example, the 10 largest bank holding companies controlled 17.4 percent of bank deposits in 1984; they increased this share to 25.6 percent in 1994. Similarly, the 10 largest thrift institutions increased their share of deposits from 12.4 percent to 17 percent. Although nationwide concentration has been increasing over the past 10 years, increases in local market concentration have been much more modest relative to the changes at the national level. According to industry analysts, this has occurred because banking institutions not located in the same local market have merged, and constraints imposed by antitrust laws have helped to prevent increases in concentration at the local level. The nature of the activities that banking institutions engage in has also changed drastically over the past several decades. Although traditional lending still dominates banking institutions' balance sheets, banking institutions have been moving toward more nontraditional products, such as mutual funds, securities, and derivatives and other off-balance sheet products.
Banking institutions, with about $5.3 trillion in assets at the end of 1995, constitute the largest single segment of the financial services industry. However, banking institutions' share of the financial services industry shrank from about 45 percent in 1985 to about 30 percent in 1995. This decrease has been attributed to greater competition in the financial services industry. Consumers can now choose from a variety of providers in obtaining financial services once offered only by commercial banks and thrifts. For example, money market mutual funds, securities firms, and insurance companies all now offer interest-bearing transaction accounts. Further, although banks and thrifts were long regarded as the primary providers of consumer credit, such credit is now routinely provided by finance companies as well as by a wide variety of retail firms through credit cards and other means. The federal system of oversight of banking institutions in the United States is highly complex. Federal responsibilities for bank authorization, regulation, and supervision are assigned to three bank regulators and one thrift regulator that have jurisdiction over specific segments of the banking industry (see table 1.2). Although Treasury plays no formal role in bank oversight, it has some related responsibilities. The Office of the Comptroller of the Currency (OCC) currently has primary responsibility for regulating and supervising national banks—that is, banks with a federal charter. OCC also has primary responsibility for regulating and supervising federal branches and agencies of foreign banks operating in the United States. As of December 31, 1995, OCC was the primary federal supervisor of 2,861 of the 11,970 banking institutions in the United States. Those banks held about 45 percent of total U.S. banking assets. The Federal Reserve System (FRS) is the federal regulator and supervisor for bank holding companies and their nonbank subsidiaries, and it is the primary federal regulator for state-chartered banks that are members of FRS. It is also a federal regulator for foreign banking organizations operating in the United States. In addition, it regulates foreign activities and investments of FRS member banks (national and state), Edge corporations, and holding companies. As of December 31, 1995, FRS had primary supervisory responsibility for 1,041 of the 11,970 banking institutions in the United States. The assets of these banks represented about 18 percent of total U.S. banking assets. As of December 31, 1995, FRS also had responsibility for regulating 6,122 bank holding companies, 393 foreign branches, and 153 foreign agencies operating in the United States. The Federal Deposit Insurance Corporation (FDIC) is the primary federal regulator and supervisor for federally insured state-chartered banks that are not members of FRS and for state savings banks whose deposits are federally insured. FDIC is also responsible for administering the Bank Insurance Fund (BIF) and the Savings Association Insurance Fund (SAIF). Additionally, FDIC is responsible for resolving failed banks and for the disposition of assets from failed banking institutions. At the end of 1995, FDIC was the primary federal regulator and supervisor for 6,632 of the 11,970 insured banking institutions. These banking institutions represented 22 percent of total U.S. banking assets.
The Office of Thrift Supervision (OTS) is the primary regulator of all federally and state-chartered thrifts whose deposits are federally insured, as well as their holding companies. At the end of 1995, it was the primary federal regulator of 1,436 institutions, whose assets represented 14 percent of the total assets held by banking institutions.

The Department of the Treasury (Treasury) is one of 14 executive departments that make up the Cabinet. It is headed by the Secretary of the Treasury and performs four basic functions, of which formulating and recommending economic, financial, tax, and fiscal policies is the one most directly related to bank oversight. Ultimately, Treasury is responsible for financially backing up the U.S. guarantee of the deposit insurance funds and may also approve special resolution options for financial institutions whose failure “could threaten the entire financial system.” In addition, Treasury is a principal player in the development of legislation and policies affecting the financial services industries. Treasury also shares responsibility for managing any systemic financial crises, coordinating financial market regulation, and representing the United States on international financial market issues.

A primary objective of banking institution regulators is to ensure the safe and sound practices and operations of individual banking institutions through regulation, supervision, and examination. The intent of regulators under this objective is primarily to protect depositors and taxpayers from loss, not to prevent banking institutions from failing. To help accomplish this goal, the government has chosen to protect deposits through federal deposit insurance, which provides a safety net to depositors.

Financial market stability is also considered a primary goal of banking institution regulators. Because banking institutions play an important role as financial intermediaries that borrow and lend funds, public confidence in banking institutions is critical to economic stability at local and national levels. In support of market stability, regulators seek to resolve problems of financially troubled institutions in ways that maintain confidence in banking institutions and thus prevent depositor runs that could jeopardize the stability of financial markets.

Regulators are also aware that the stability of the banking industry depends both on the ability of banking institutions to compete in an increasingly competitive environment and on maintaining competition within the industry. Regulators recognize that although their supervisory oversight should be sufficient to ensure safe and sound bank operations and practices, it should not be so onerous as to stifle the industry and impair banks’ ability to remain competitive with financial institutions in other industries and in other countries. Bank regulators also seek to maintain competition by assessing compliance with antitrust laws.

Fairness in, and equal access to, banking services is also an important goal of banking institution regulators. Bank regulators seek to ensure access by assessing institutions’ compliance with consumer protection laws. This goal of the banking regulators is unique to the U.S. bank regulatory structure. While the four federal banking regulators share many oversight responsibilities, some of the principal responsibilities of FDIC and FRS fall outside direct regulation and supervision but are related to the goals of bank oversight.
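As a rough consistency check on the end-of-1995 supervisory statistics cited above, the following Python sketch tabulates each federal regulator's count of primary-supervised banking institutions and its approximate share of total U.S. banking assets. The counts and rounded percentages are taken directly from the text; because the shares are rounded, they sum to roughly, rather than exactly, 100 percent.

```python
# End-of-1995 supervisory statistics cited above: institutions for which each
# agency was the primary federal supervisor, and rounded asset shares.

regulators = {
    # agency: (institutions supervised, approximate share of banking assets, %)
    "OCC":  (2_861, 45),
    "FRS":  (1_041, 18),
    "FDIC": (6_632, 22),
    "OTS":  (1_436, 14),
}

total_institutions = sum(count for count, _ in regulators.values())
total_share = sum(share for _, share in regulators.values())

for agency, (count, share) in regulators.items():
    print(f"{agency:<5} {count:>6,} institutions  {share:>3}% of assets")

print(f"Total {total_institutions:>6,} institutions  {total_share:>3}% of assets")
# Prints a total of 11,970 institutions, matching the industry count cited
# above; the asset shares sum to 99 percent because each share is rounded.
```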
For FDIC, these other responsibilities include administration of the federal deposit insurance funds, resolution of failing and failed banks, and disposition of failed bank assets. For FRS, they include monetary policy development and implementation, liquidity lending, and payments and settlements systems operation and oversight. In addition, all four federal regulators may play a role in the management of financial crises, depending on the nature of the crisis.

Although FDIC supervises a large number of banking institutions, its primary function is to insure banking institutions’ deposits up to $100,000. FDIC administers BIF—which predominantly protects depositors of commercial banks—and SAIF—which predominantly protects depositors of thrift institutions. FDIC receives no appropriated government funding. BIF is funded wholly through premiums paid on the deposits of member institutions and with some borrowing authority from the government under prescribed conditions, such as liquidity needs of the insurance funds. SAIF is primarily funded through premiums paid on the deposits of thrift institutions and has similar borrowing authority. Both BIF and SAIF are required by statute to have a minimum reserve ratio of 1.25 percent of insured deposits. According to FDIC, as of December 31, 1995, BIF’s fund balance exceeded the required ratio, at 1.30 percent of insured deposits, but SAIF was not fully capitalized.

FDIC relies on primary regulators to verify that institutions outside its direct supervisory jurisdiction are operating in a safe and sound manner. Examinations are to be done by the institution’s primary regulators on all the institutions FDIC insures, and FDIC is to receive copies of all examination reports and enforcement actions. However, FDIC may also protect its interest as the deposit insurer through its backup authority. This allows FDIC to examine potentially troubled banking institutions and take enforcement actions, even when FDIC is not the institution’s primary regulator.

Regardless of an institution’s primary regulator, only its chartering authority—the state banking commission, OCC, or OTS—has the formal authority to declare that the banking institution is insolvent. Once the chartering authority becomes aware that one of its institutions has deteriorated to the point of insolvency or imminent insolvency, it is to notify FDIC, which is responsible for arranging an orderly resolution. FDIC is generally required by law to select the resolution alternative it determines to be the least costly to BIF and SAIF. To make this least-cost determination, FDIC must (1) consider and evaluate all possible resolution alternatives by computing and comparing their costs on a present-value basis, using realistic discount rates; and (2) select the least costly alternative on the basis of that evaluation. If, however, the least-cost resolution would create a systemic problem, as determined by FDIC’s Board of Directors with the concurrence of the Federal Reserve Board and the Secretary of the Treasury, then, under the Federal Deposit Insurance Corporation Improvement Act (FDICIA), another resolution alternative could be selected. As of June 30, 1996, no systemic problem had been raised by FDIC in making its resolution decisions. Typically—and particularly in the case of large institutions known to be troubled—active communication has taken place among the chartering authorities, primary regulators, FDIC, and FRS as liquidity provider.
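To illustrate the least-cost determination described above, the following Python sketch compares hypothetical resolution alternatives on a present-value basis and selects the cheapest. The alternatives, cash flows, and discount rate are invented solely for illustration; FDIC's actual cost estimates and procedures are far more detailed than this simplified example suggests.

```python
# Simplified illustration of the least-cost test described above: estimate each
# resolution alternative's cost as the present value of its projected net
# outlays and select the cheapest. All figures are hypothetical.

def present_value(outlays, rate):
    """Discount a list of annual net outlays (year 1, year 2, ...) to today."""
    return sum(flow / (1 + rate) ** year for year, flow in enumerate(outlays, start=1))

DISCOUNT_RATE = 0.06  # a "realistic" rate would be chosen in practice

# Hypothetical net outlays to the insurance fund, in millions of dollars;
# negative amounts represent later recoveries on failed-bank assets.
alternatives = {
    "deposit payoff and liquidation": [180, 40, -60],
    "purchase and assumption":        [120, 20, 0],
    "open-bank assistance":           [90, 50, 30],
}

costs = {name: present_value(flows, DISCOUNT_RATE) for name, flows in alternatives.items()}

for name, cost in sorted(costs.items(), key=lambda item: item[1]):
    print(f"{name:<32} PV cost: ${cost:,.1f} million")

least_cost = min(costs, key=costs.get)
print(f"Least-cost alternative: {least_cost}")
# Under FDICIA, FDIC must generally select this alternative unless the systemic
# risk exception described above is invoked.
```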
As noted above, this interaction and coordination typically include the sharing of examination findings, supervisory strategies, and economic information. Such communication most commonly takes place when the primary regulator considers failure likely, so that all regulatory parties can discharge their responsibilities in an orderly manner. When banks fail, FDIC is appointed receiver, directly pays insured claims to depositors or the acquiring bank, and liquidates the remaining assets and liabilities not assumed by the acquiring bank.

One of the principal responsibilities of FRS is conducting monetary policy. As stated in the Federal Reserve Act, FRS is “to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates.” FRS conducts monetary policy by (1) using open market operations, the primary tool of monetary policy; (2) determining the reserve requirements that banking institutions must hold against deposits; and (3) determining the discount rate charged banking institutions when they borrow from FRS. FRS is to act independently in conducting its monetary policy.

FRS also is to act as lender-of-last-resort to ensure that a temporary liquidity problem at a banking institution does not threaten the viability of the institution or the financial system. Using the discount window, FRS may lend to institutions that are experiencing liquidity problems—for example, when these institutions cannot meet deposit withdrawals. However, when acting in this capacity, FRS requires the lending to be collateralized, and it is to be assured by the banking institution’s primary regulator that the institution is solvent. According to FRS officials, institutions generally do not approach FRS for liquidity loans unless they have no alternative. Liquidity lending may be perceived as a sign that an institution is in trouble, even though FRS is prohibited from lending to nonviable institutions.

In addition, FRS has broad responsibility in the nation’s payments and settlements systems. It is mandated by Congress to act as an intermediary in clearing and settling interbank payments by maintaining reserve or clearing accounts for the majority of banking institutions. As a result, it settles payment transactions by debiting and crediting the appropriate accounts of banking institutions making payments. FRS also collects checks, processes electronic fund transfers, and provides net settlement services to private clearing arrangements.

Depending on the nature of the situation, federal regulators may play a role in financial system crisis management. FRS, for example, often has a significant role in crisis management as a major participant in financial markets through its liquidity lending, payments and settlements, and other responsibilities. A key role of any central bank is to supply sufficient liquidity to the financial system in a crisis. For example, during the 1987 stock market crash, FRS provided liquidity support to the financial system, encouraged major banks to lend to solvent securities firms, coordinated with Treasury, and encouraged officials to keep the New York Stock Exchange open. During the Ohio Savings and Loan crisis in 1985, FRS intervened with liquidity support until a permanent solution to the instability could be developed. Treasury is also involved in resolving major financial crises, while OCC, OTS, and FDIC have played significant roles in responding to large bank or thrift failures.
Many nonbank subsidiaries of banks and bank holding companies are engaged in securities, futures, or insurance activities. These activities are subject to the oversight of the Securities and Exchange Commission (SEC), the Commodity Futures Trading Commission (CFTC), and state insurance regulators, respectively. These regulators may provide information to the Federal Reserve about nonbank subsidiaries of bank holding companies. They may also provide information about nonbank subsidiaries of banks to the responsible primary federal regulator of the parent bank. The primary goals of SEC and CFTC are to maintain fair and orderly markets and public confidence in the financial markets by protecting investors against manipulation, fraud, or other irresponsible practices. The aftermath of the stock market crash of 1929 created a demand for federal oversight of securities and futures activities. The Securities Exchange Act of 1934 created SEC with powers to oversee the securities market exchanges—also called self-regulatory organizations—and to intervene if the exchanges did not carry out their responsibilities for protecting investors. The Commodity Exchange Act of 1936, as amended, governs the trading of commodity futures contracts and options. The Commodity Futures Trading Commission Act of 1974 created the current regulatory structure, consisting of industry self-regulation with government oversight by CFTC. Securities broker-dealers must register with SEC and comply with its requirements for regulatory reporting, minimum capital, and examinations. They must also comply with requirements of the self-regulatory organizations, such as the New York Stock Exchange and the National Association of Securities Dealers. SEC is to monitor broker-dealer capital levels through periodic reporting requirements and regular examinations. CFTC is to review exchange rules, ensure consistent enforcement, and monitor the positions of large traders. CFTC also regulates the activities of various market participants, including futures commission merchants—which must comply with CFTC’s requirements for regulatory reporting, minimum capital, and examinations. In addition, they must comply with the rules imposed by the various exchanges, such as the Chicago Mercantile Exchange and the Chicago Board of Trade as well as the National Futures Association, all of which act as self-regulatory organizations. Regulation of the insurance industry and administration of insurance company receiverships and liquidations are primarily state responsibilities. In general, state legislatures set the rules under which insurance companies must operate. Among their other responsibilities, state insurance departments are to monitor the financial condition of insurers. States use a number of basic methods to assess the financial strength of insurance companies, including reviewing and analyzing annual financial statements, doing periodic on-site financial examinations, and monitoring key financial ratios. State insurance departments are generally responsible for taking action in the case of a financially troubled insurance company. If the insurance company is based in another state, the insurance department can suspend or revoke its license to sell insurance in the department’s state. If a home-based company is failing, the department can put it under state supervision or, in cases of irreversible insolvency, place a company in liquidation. State insurance regulators have established a central structure to help coordinate their activities. 
The National Association of Insurance Commissioners (NAIC) consists of the heads of the insurance departments of the 50 states, the District of Columbia, and 4 U.S. territories. NAIC’s basic purpose is to encourage uniformity and cooperation among the various states and territories as they individually regulate the insurance industry. To that end, NAIC promulgates model insurance laws and regulations for state consideration and provides a framework for multistate “zone” examinations of insurance companies.

Congressman Charles E. Schumer asked us to provide information to help Congress evaluate efforts to modernize the U.S. system of federal oversight of banks and thrifts. Our objectives were to (1) discuss previously reported problems with the bank oversight structure in the United States, (2) summarize those characteristics of the five countries’ regulatory structures that might be useful for Congress to consider in any U.S. modernization efforts, and (3) identify potential avenues for modernizing the U.S. banking oversight structure. This report does not address oversight by the National Credit Union Administration (NCUA) of credit unions, which are also classified as depository institutions. Credit unions hold only a small percentage of all depository institution assets—about 5.5 percent. Also, although the legal and practical distinctions between thrifts and banks have all but disappeared in recent years, the core of credit union business remains traditional consumer lending activities. Finally, the most recent proposals to modernize oversight of financial institutions have not included oversight of credit unions within their scope.

To address the objectives of this report, we conducted interviews with senior supervisory officials from the Board of Governors of FRS, the Federal Reserve Bank of New York, FDIC, OCC, OTS, and SEC. These officials also provided us with various documents and statistics, including bank and thrift examination manuals, guidance to examiners and banking industry officials, and statistics on the banking industry. In addition to our interviews with U.S. supervisory officials, we met with officials representing the banking industry, including officials from the American Bankers Association, the Independent Bankers Association of America, and the Conference of State Bank Supervisors. We also met with officials from the accounting profession, including officials from the American Institute of Certified Public Accountants.

In conducting our work, we also gathered information from many other sources. These include studies of the history of the banking industry; records from congressional hearings related to regulatory restructuring; and professional literature concerned with industry structure, regulation, and external audits. We also reviewed relevant banking acts and regulations. This review does not constitute a formal legal opinion on the requirements of the laws.

Much of this report was based on our reports on the structures and operations of bank regulation and supervision in Canada, France, Japan, Germany, and the United Kingdom. When preparing these reports, we interviewed regulatory and industry officials in each country and reviewed relevant banking laws, regulations, industry statistics, and other industry studies. These reports did not assess the effectiveness or efficiency of bank oversight in the countries studied.
This report also draws on extensive work that we have done over the past several years on depository institutions, the deposit insurance program, the securities and insurance industries, international competitiveness, and other aspects of the financial services system in the United States. A comprehensive list of our products addressing issues related to the financial services industry is included at the end of this report. (See Related GAO Products.) We conducted our work from July 1995 through June 1996 in accordance with generally accepted government auditing standards. We provided a draft of this report for comment to the heads of FRS, FDIC, OCC, OTS, and Treasury. FRS, FDIC, OCC, and OTS provided written comments, which are discussed at the end of chapter 4 and reprinted in appendixes IV to VII. Treasury did not provide written comments. Each agency also provided technical comments, which we incorporated where appropriate.

All four federal oversight agencies share several supervisory and regulatory responsibilities, including developing and implementing regulations, taking enforcement actions, conducting examinations, and performing off-site monitoring. Chartering is the responsibility of two federal agencies, as well as all 50 states. This structure of shared responsibilities has been characterized by some observers as inherently inefficient. Furthermore, our work has shown that despite good-faith efforts to coordinate their policies and procedures, the four federal bank oversight agencies have often differed on important issues of bank supervision and regulation. The division of primary oversight responsibilities among the four oversight agencies is not based on specific areas of expertise, functions, or activities, either of the regulators or of the banks for which they are responsible; rather, it is based on institution type (thrift or bank), charter type (national or state), and FRS membership. Consequently, the four oversight agencies share responsibility for developing and implementing regulations, taking enforcement actions, and conducting examinations and off-site monitoring.

Regulations are the primary vehicle through which regulators elaborate on what the laws mean, clarify provisions of the laws, and provide guidance on how the laws are to be implemented. Regulations typically have the force of law—that is, they can be enforced through a court of law. Regulators have, in some cases, issued guidelines rather than regulations because guidelines provide them greater flexibility to change or update as experience dictates. Guidelines, however, are not directly enforceable in court. In most cases, each regulator is responsible for issuing its own regulations for the banking institutions under its jurisdiction. This may result in four sets of regulations implementing essentially the same provision of the law. Unless regulatory coordination in developing regulations is mandated by law, the regulators may develop regulations independently. Even if the regulators develop regulations jointly, on an interagency basis, they each still issue similar individual regulations under their own legal authority. In some instances, the law designates a specific regulator to write the regulation for all banking institutions. For instance, FRS has sole rulemaking responsibility for many consumer protection laws. Each regulator has the authority to take enforcement actions against financial institutions under its jurisdiction.
Regulators may initiate informal or formal enforcement actions to get bank management to correct unsafe and unsound practices or conditions identified during the banking institution examination. Regulators have broad discretion in deciding which, if any, regulatory action to choose, and they typically make such decisions on a case-by-case basis. Regulators have said that they prefer to work with cooperative banking institution managers to bring about necessary corrective actions rather than pursuing formal actions. However, bank regulatory officials have also said that they may take more stringent action when the circumstances warrant it.

Under agency guidelines, the regulators are to use informal actions for banking institutions if (1) the institution’s overall strength and financial condition make failure a remote possibility and (2) management has demonstrated a willingness to address supervisory concerns. Informal actions generally include meeting with banking institution officers or the board of directors to obtain agreement on improvements needed in the safety and soundness of the institution’s operations, requiring banks to issue board resolutions or commitment letters to the regulators specifying corrective actions to be taken, and initiating memorandums of understanding between regulators and banking institution officers on actions that are to be taken. Informal actions typically are used to advise banking institutions of noted weaknesses, supervisory concerns, and the need for corrective action. The regulators assume that banking institutions understand that if they do not comply with informal actions, regulators may take stronger enforcement actions.

Under agency guidelines, the regulators use formal enforcement actions that are authorized in banking laws when informal actions have not been successful in getting management to address supervisory concerns, management is uncooperative, or the institution’s financial and operating weaknesses are serious and failure is more than a remote possibility. Formal enforcement actions generally include such actions as formal written agreements between regulators and bankers; orders to cease and desist from unsafe practices or violations; assessments of civil money penalties; and orders for removal, suspension, or prohibition of individuals from banking institution operations. In addition, OCC and OTS may revoke federally chartered institutions’ charters and place institutions in conservatorship; FDIC may remove an institution’s deposit insurance.

Under FDICIA, all insured banking institutions are to be examined once every 12 months by federal regulators. These examinations are to be conducted by the regulators with primary jurisdiction over the banking institutions. In addition, FDIC may conduct backup examinations of any bank, if necessary, for the purpose of protecting BIF. The full-scope examinations required under law are usually called safety-and-soundness examinations because their primary purpose is to assess the safety and soundness of a banking institution’s practices and operations. The objectives of these on-site examinations are to test and reach conclusions about the reliability of banking institutions’ systems, controls, and reports; investigate changes or anomalies disclosed by off-site monitoring and analysis; and evaluate those aspects of the institution’s operations for which portfolio managers cannot rely on the banks’ own systems and controls. Examinations have historically been extensive reviews of loan portfolios.
Currently, according to officials with whom we spoke, regulators are moving toward a risk-management approach and concentrating on institutions’ risk profiles and internal controls. Examiners rate five critical areas of operations—capital adequacy (C), asset quality (A), management (M), earnings (E), and liquidity (L)—to determine an overall rating (CAMEL). They use a five-point scale (with one as the best rating and five as the worst) to determine a CAMEL rating that describes the condition of the institution. As a part of the examination process, regulators are to meet with banking institution officials after every examination. In addition, regulators are to hold separate meetings with the bank’s audit committee and management after each examination to discuss the results of the examination.

FRS and OTS also conduct holding company inspections. Holding company inspections differ from bank examinations in that the focus of the inspection is to ascertain whether the strength of a bank holding company is being maintained and to determine the consequences of transactions between the parent company, its nonbanking subsidiaries, and the subsidiary banks. According to FRS and OTS guidelines, the major components of an inspection include an assessment of the financial condition of the parent company, its banking subsidiaries, and any nonbanking subsidiaries; a review of intercompany transactions and relationships; an evaluation of the current performance of the company; and a check of the company’s compliance with applicable laws and regulations. Examiners are to rate five critical areas of the bank holding company—bank subsidiaries (B), other nonbank subsidiaries (O), parent company (P), earnings (E), and capital adequacy (C)—to determine an overall rating referred to as BOPEC. Examiners use a five-point rating scale, similar to that used for CAMEL ratings on banks and thrifts. They also rate management separately as satisfactory, fair, or unsatisfactory.

In addition to safety and soundness examinations, regulators are to conduct examinations of banking institutions focusing on compliance with various consumer protection laws and the Community Reinvestment Act (CRA). A consumer compliance examination results in a rating of an institution’s overall compliance with consumer protection laws; the aim of such oversight is to ensure that the provision of banking services is consistent with legal and ethical standards of fairness, corporate citizenship, and the public interest. A compliance rating is to be given to the institution on a numerical scale ranging from 1 for top-rated institutions to 5 for the lowest-rated institutions. Although the regulators may do a CRA compliance examination separately from a consumer compliance examination, officials from all four regulators said that they generally do both examinations at the same time. The purpose of the CRA examination is to evaluate the institution’s technical compliance with a set of specific rules and to qualitatively evaluate the institution’s efforts and performance in serving the credit needs of its entire community.
The CRA examination rating consists of a four-part descriptive scale including “outstanding,” “satisfactory,” “needs to improve,” and “substantial noncompliance.” Under the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), CRA was amended to require that the regulator’s examination rating and a written evaluation of each assessment factor be made publicly available—unlike the safety and soundness or compliance examination ratings, which are not made public by regulators.

In addition to on-site examinations of banking institutions, each of the regulators engages in off-site monitoring activities. These activities—which generally consist of a review and analysis of bank-submitted data, including call reports, and discussions with bank management—are to help the regulators identify trends, areas of concern, and accounting questions; monitor compliance with requirements of enforcement actions; and formulate supervisory strategies, especially plans for on-site bank examinations. According to examination guidance issued by the regulators, off-site monitoring involves review and analysis of, among other things, quarterly financial reports that banks prepare for and submit to regulators and reports and management letters prepared for banking organizations by external auditors of banks. In general, meetings are not regularly held with banking institution management as part of normal off-site monitoring activities. If off-site monitoring reveals significant changes or issues that could have an impact on the bank, then examiners may meet with management or contact management by telephone to discuss relevant issues.

Oversight agencies are focusing more on risk assessment in their off-site monitoring efforts. FDIC officials said that their off-site monitoring programs, such as quarterly reports and off-site reviews, help provide an early indication of a change in an institution’s risk profile. They also said that FDIC has developed new initiatives to improve the identification and monitoring of risk. One initiative is the development of decision flowcharts that aid examiners in identifying risks in an institution as well as possible approaches to address them. Another initiative has been to increase the use of technology by developing an automated examination package and expanding examiners’ access to internal and external databases; providing relevant data to examiners before on-site examinations enables them to identify specific risk areas.

External auditors’ reports, originally prepared to ensure the accuracy of information provided to a banking organization’s shareholders, attest to the fairness of the presentation of the institution’s financial statements and, in the case of large institutions, to management’s assertions about the institution’s financial reporting controls and compliance with certain laws and regulations. Management letters describe important, but less significant, areas in which the banking institution’s management may need to improve controls to ensure reliable financial reporting. Supervisors generally require banking institutions that have an audit—regardless of the scope of the audit—to send the reports, including management letters and certain other correspondence, to the supervisor within a specified time period.
Reviews of this information could lead examiners to focus on-site examinations on specific aspects of an institution—such as parts of an institution’s internal control system—or even to eliminate some procedures from the examination plan. The purposes of external audits and safety and soundness examinations differ in important respects and are guided by different standards, methodologies, and assumptions. Even so, external auditors and examiners may review much of the same information. To the extent that examiners could avoid duplicating work done by external auditors, examinations could be more efficient and less burdensome for financial institutions. Supervisors’ actual use of external auditors’ work has varied by agency as well as by individual examiner, according to supervisory officials we interviewed. Primary factors limiting use, according to some officials we interviewed, include skepticism among examiners about the usefulness of the work of external auditors and concerns that the findings of an external audit could be outdated by the time the financial institution is examined by its federal supervisor.

OCC and FDIC recently have undertaken initiatives to improve cooperation between external auditors and examiners and potentially to identify areas in which examiners could better use the work of external auditors. One impetus for improvement efforts was a 1995 report by the Group of Thirty—“Defining the Roles of Accountants, Bankers and Regulators in the United States.” This report recommended, among other things, joint identification by the accounting profession and regulators of areas of reliance on one another’s work; actions by independent audit committees to encourage interaction among regulators, external auditors, and banking institution management; routine use by examiners of audit workpapers; and a permanent board consisting of representatives from each of the federal banking agencies, SEC, the accounting profession, and the banking industry to recommend improvements in the relationship between regulators and external auditors. Regulatory officials we interviewed disagreed with some of the recommendations set out by the Group of Thirty report, and some officials said the report did not give sufficient credit to regulators’ past efforts to work with external auditors. However, regulators generally agreed that this report helped provide some needed momentum for their initiatives.

In November 1995, OCC announced plans for a 1-year pilot program to promote greater cooperation between examiners and external auditors and reduce wasteful duplication and oversight burden. The program, which is to involve at least 10 large regional and multinational banks, is expected to result in nonmandatory guidelines on how and under what circumstances examiners and external auditors should work together and use each other’s work. Officials said that certain process-oriented functions where external auditors and examiners are tabulating or verifying the same information—such as documenting and flow-charting internal controls or confirming the existence and proper valuation of bank assets—may be areas where examiners could use the work of external auditors. FRS is also in the process of trying to establish procedures for cooperating more closely with external auditors.
As of June 1996, FRS staff had prepared a draft recommendation for the FRS Board to explore opportunities to share information and analytic techniques with external auditors and to seek opportunities to benefit from the work of external auditors. According to FDIC officials, representatives of FDIC have regular meetings with external auditors, and examiners have also recently begun reviewing selected external auditors’ workpapers. Examiners we spoke with told us that information found in the workpapers can be useful because information considered immaterial for financial accounting purposes (which is therefore not discussed in the audit report) can be useful for regulatory purposes. They further found the auditors’ work useful for identifying issues needing management’s attention and providing indicators of management willingness or ability to address those issues. Finally, one of the most important benefits of this workpaper review, according to examiners, is that these reviews promoted expanded communication and interaction between examiners and external auditors and helped acquaint examiners and auditors with each other’s techniques, policies, procedures, and objectives. FDIC officials told us they plan to issue examiner guidance to implement procedures to expand their review of internal and external audit workpapers of institutions that have substantial exposure to higher risk activities, such as trading activities. Officials also said examiners will be expected to contact an institution’s auditor to solicit information that the auditor may have gained from his or her work at the institution since the last examination. Finally, they said that this guidance will require that all Division of Supervision Regional Offices institute a program whereby annual meetings are held between regulators and local accountants to informally discuss accounting, supervisory, and examination policy issues. According to industry officials, OTS—and its predecessor the Federal Home Loan Bank Board (FHLBB)—has had a long-standing history of working with external auditors, and its examiners frequently use the work of external auditors to adjust the scope of examinations. (See app. II for additional information on the use of external auditors in bank supervision.) Banking institutions have a choice of three chartering authorities: (1) state banking authorities, which charter state banks and thrifts and license state branches and agencies of foreign banks; (2) OTS, which charters national thrifts; and (3) OCC, which charters national banks and licenses federal branches and agencies of foreign banks. FRS and FDIC have no chartering authority. However, according to FDIC, all deposit-taking institutions are required to apply to FDIC for federal deposit insurance before they are chartered. Thus, FDIC may have a powerful influence over chartering decisions. Although the authority to charter is limited, each regulator has responsibility for approving mergers, branching, and change-of-control applications. FRS, FDIC, and OTS share their authority to approve branching and mergers of banking institutions under their jurisdictions with state authorities, while OCC alone reviews national bank branch and merger applications. FRS is responsible for approving bank holding company mergers even though the major banking institutions in the merging holding companies may be supervised by OTS, OCC, or FDIC. Likewise, OTS approves thrift holding company mergers. 
As described in chapter 1, in addition to their primary bank oversight functions, FRS and FDIC have other major responsibilities. For FDIC, these include administration of the federal deposit insurance funds, resolution of failed or failing banks, and disposition of failed bank assets; for FRS, they include monetary policy development and implementation, payments and settlements systems operation and oversight, and liquidity lending. FRS and FDIC officials told us that to fulfill these duties, they rely on information obtained under their respective supervisory authorities. FRS officials said that to carry out their responsibilities effectively, they must have hands-on supervisory involvement with a broad cross-section of banks. FRS officials also said that the successful handling of financial crises often depends upon a combination of the insights and expertise gained through banking supervision and those gained from the pursuit of macroeconomic stability. Experience suggests that in times of financial stress, such as the 1987 stock market crash, FRS needs to work closely with the Department of the Treasury and others to maintain market stability. As we have pointed out in the past, the extent to which FRS needs to be a formal supervisor of financial institutions to obtain the requisite knowledge and influence for carrying out its role is an important question that involves policy judgments that only Congress can make. Nevertheless, past experience, as well as evidence from the five foreign oversight structures we studied (see ch. 3 for further discussion), provides support for the need for FRS to obtain direct access to supervisory information. In its comment letter, FRS stated that it needs active supervisory involvement in the largest U.S. banking organizations and a cross-section of others to carry out its key central banking functions.

FDIC officials said that their formal supervisory responsibility enables them to maintain staff that can supervise and assess risk. In their view, this gives FDIC the expertise it requires when it needs to intervene to investigate a problem institution. In addition, FDIC officials said that the agency’s supervision of healthy institutions is useful because it increases their awareness of emerging and systemic issues, enabling them to be proactive in carrying out FDIC’s insurance responsibilities. In its comment letter, FDIC reiterated its need for information on the ongoing health and operations of financial institutions and stated that periodic on-site examination remains one of the essential tools by which such information may be obtained.

Under FDICIA, FDIC was given backup examination and enforcement authority over all banks. On the basis of an examination by FDIC or the appropriate federal banking agency or “other information,” FDIC may recommend that the appropriate agency take enforcement action with respect to an insured depository institution. FDIC may take action itself if the appropriate federal banking agency does not take the recommended action or provide an acceptable plan for responding to FDIC’s concerns and if FDIC determines that (1) the institution is in an unsafe or unsound condition; (2) the institution is engaging in unsafe or unsound practices, and the action will prevent it from continuing those practices; or (3) the institution’s conduct or threatened conduct poses a risk to the deposit insurance fund or may prejudice the interests of depositors.
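The conditions under which FDIC may act on its own backup enforcement authority, as described above, can be restated schematically. The short Python sketch below is offered only as a reading aid for the and/or structure of those conditions; the flag names are invented for illustration, and the sketch is not a statement of the law.

```python
# Schematic restatement of the backup enforcement conditions described above.
# A reading aid only, not legal guidance; the flag names are illustrative.

def fdic_may_act_itself(primary_agency_took_action: bool,
                        acceptable_plan_provided: bool,
                        unsafe_or_unsound_condition: bool,
                        unsafe_practices_action_would_stop: bool,
                        risk_to_fund_or_depositors: bool) -> bool:
    """FDIC may take enforcement action itself only if the primary agency has
    neither taken the recommended action nor provided an acceptable plan, and
    at least one of the three substantive findings has been made."""
    agency_failed_to_respond = not (primary_agency_took_action or acceptable_plan_provided)
    substantive_finding = (unsafe_or_unsound_condition
                           or unsafe_practices_action_would_stop
                           or risk_to_fund_or_depositors)
    return agency_failed_to_respond and substantive_finding

# Example: the primary regulator neither acts nor offers a plan, and FDIC finds
# that the institution's conduct poses a risk to the deposit insurance fund.
print(fdic_may_act_itself(primary_agency_took_action=False,
                          acceptable_plan_provided=False,
                          unsafe_or_unsound_condition=False,
                          unsafe_practices_action_would_stop=False,
                          risk_to_fund_or_depositors=True))  # True
```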
We are on record as favoring a strong, independent deposit insurance function to protect the taxpayers’ interest in insuring more than $2.5 trillion in deposits. Previous work we have done suggests that a strong deposit insurance function can be ensured by providing FDIC with (1) the ability to go into any problem institution on its own, without having to obtain prior approval from another regulatory agency; (2) the capability to assess the quality of bank and thrift examinations, generally; and (3) backup enforcement authority.

As described in chapter 1, Treasury also has several responsibilities related to bank oversight, including being the final decisionmaker in approving an exception to FDIC’s least-cost rule and a principal participant in the development of financial institution legislation and policies. These responsibilities require that Treasury regularly obtain information about the financial and banking industries and, at certain times, institution-specific information. According to Treasury officials, Treasury’s current level of involvement is sufficient for it to carry out these responsibilities; that involvement includes its housing of OCC and OTS, those agencies’ participation on the FDIC Board of Directors, and the information Treasury receives as needed from other agencies, such as FDIC and FRS. For example, according to Treasury, officials at OCC and OTS meet regularly with senior Treasury officials to discuss general policy issues and market conditions. In addition, the Secretary of the Treasury meets regularly with the FRS Chairman, and other senior Treasury officials meet regularly with members of the FRS Board. Furthermore, Treasury officials are in frequent contact with FDIC officials about issues relevant to both organizations.

Analysts, legislators, banking institution officials, and numerous past and present regulatory agency officials have identified weaknesses and strengths in the structure of the federal bank oversight system. Some representatives of these groups have broadly characterized the federal system as redundant, inconsistent, and inefficient. Some banking institution officials have also raised concerns about negative effects of the structure on supervisory effectiveness. At the same time, some agency and institution officials have credited the current structure with encouraging financial innovations and providing checks and balances to guard against arbitrary oversight decisions or actions.

A principal concern associated with four regulators essentially conducting the same oversight functions for various segments of the industry is that the system is inefficient in numerous respects. For example, each agency has its own internal support and administrative functions, such as facilities, data processing, and training, to support the basic regulatory and supervisory tasks it shares with three other agencies. Concerns about inefficiency have also been raised by banking industry officials and analysts because a number of federal regulatory agencies may oversee the banking and nonbanking subsidiaries in a bank holding company. Inefficiencies could result to the extent that the regulator responsible for supervision of the holding company itself, FRS, might duplicate work done by the primary regulator of the holding company subsidiaries—that is, OTS, OCC, or FDIC. According to SEC officials, another area of potential inefficiency is the lack of uniform regulations of bank securities activities.
For example, banking institutions that are not part of a holding company are exempted from SEC filing requirements, such as registering their securities offerings and making periodic filings with SEC. This means that there is a duplication of expertise that both SEC and the federal banking institutions’ regulators must develop and maintain to oversee securities offerings and related activities. Overlapping authority and responsibility for examination of subsidiaries could also have the effect of clouding accountability to Congress in cases of weaknesses in oversight of such subsidiaries. According to testimony by the Comptroller of the Currency in 1994, “it is never entirely clear which agency is responsible for problems created by a faulty, or overly burdensome, or late regulation.” Regulators have also raised concerns about FDIC’s backup examination authority. The backup authority remains open to interpretation and, according to regulatory officials, gives FDIC the authority to examine banking institutions regardless of the examination coverage or conclusions of the primary regulator. Regulatory officials said that they were concerned about FDIC’s backup authority because of the possible duplication of effort and the resulting regulatory burden on the affected banks. FDIC’s Board of Directors has worked with FDIC officials in efforts to establish a policy statement clarifying how this authority will be applied in order to avoid inefficiency or undue burden while allowing FDIC to safeguard deposit insurance funds. Regulators, banking officials, and analysts alike assert that the multiplicity of regulators has resulted in inconsistent treatment of banking institutions in examinations, enforcement actions, and regulatory decisions, despite interagency efforts at coordination. For example, in previous studies, we have identified significant inconsistencies in examination policies and practices among FDIC, OCC, OTS, and FRS, including differences in examination scope, frequency, documentation, loan quality and loss reserve evaluations, bank and thrift rating systems, and examination guidance and regulations. To address some of these problems, the federal agencies have operated under a joint policy statement since June 1993 designed to improve coordination and minimize duplication in bank examination and bank holding company inspections. According to OTS, the oversight agencies have adopted a common examination rating system and have improved coordination of examinations, and some conduct joint examinations when feasible. Some of the differences among banking institution regulators result from differences in the way they interpret and apply regulations. Banking officials told us that the agencies sometimes apply different rules to similar situations and sometimes apply the same rules differently. A 1993 Congressional Budget Office (CBO) study cited frequent disagreements between OCC and FRS on the interpretation of laws governing the permissible activities of national banks. These disagreements resulted in a failed attempt by FRS to prevent one national bank from conducting OCC-approved activities in a bank subsidiary. The CBO study also detailed historical differences between the two agencies in other areas, such as merger approvals. In addition to interpreting regulations differently, the regulatory agencies sometimes enforced them differently as well. 
For example, we observed that regulatory agencies have given different priority to enforcing consumer protection and community lending legislation. Similarly, in our examination of regulatory impediments to small business lending we also found that the agencies had given conflicting advice to their institutions about the procedures for taking real estate as collateral to support traditional small business working capital and equipment loans. Inconsistency among the regulators in examinations as well as in interpreting, implementing, and enforcing regulations may encourage institutions to choose one charter over another to take advantage of these differences. For example, a merger of banking institutions with differing charters may be purposefully structured to place the application decision with the agency deemed most likely to approve the merger and expand permissible activities. According to some former agency officials, a regulatory agency’s desire to maintain or increase the number of institutions under its jurisdiction could inhibit the agency from taking the most appropriate enforcement action against an institution because that action could prompt a charter switch. Although the statutory mandates that define responsibilities of federal regulators help produce a common understanding of the principal goals of bank regulation, bank regulators may prioritize these goals differently, according to the mission of the particular regulatory agency, among other factors. As a result, a banking organization overseen by more than one of the regulators can have different, and sometimes conflicting, priorities placed on its institutions. Various functions within an agency may also differ in the priority they assign oversight goals. For instance, safety-and-soundness examiners from one agency focus on the goals of safety and soundness and the stability of the system and may emphasize high credit standards that could conflict with community development and investment goals. Other examiners from the same agency focus on consumer protection and community reinvestment performance of banking institutions. According to industry officials, the two types of examiners may have different priorities when assessing banking institution activities, even though each represents the same regulatory agency. As a result, industry officials have said that they are sometimes confused about how consistently the goals are applied to individual institutions as well as across the industry. Coordination among regulators to ensure consistent regulation and supervisory policies has been encouraged by Congress in FIRREA and FDICIA and, according to agency officials, has taken place through the Federal Financial Institutions Examination Council (FFIEC), various interagency committees or subcommittees, interagency task forces or study groups, or through agency officials working together. Many joint policies and regulations have been developed in this way. Currently, for example, according to several of the oversight agencies, the federal agencies are working to develop consistent regulations and guidelines that implement common statutory or supervisory policies, pursuant to Section 303 of the Riegle Community Development and Regulatory Improvement Act. How they are to coordinate and the degree to which coordination takes place is to be decided on a case-by-case basis. 
Although acknowledging the need for agency coordination, bank oversight officials have said that efforts to develop uniform policies and procedures—regardless of the coordination means used—can take months, involve scores of people, and still fail to result in uniformity. Further, they said the coordination process has often caused long delays in decisions on important policy issues. Implementation of FDICIA was such a case. Numerous staff from each of the regulatory agencies were involved over an extended period. Despite this effort, however, the agencies missed by several months the statutory deadline for the noncapital tripwire provision (section 132 of the act), which authorizes closure of banking institutions even when they still have positive capital levels. In addition, banking institution officials have stated that efforts to coordinate have usually led to what too often becomes the “least common denominator” agreement rather than more explicit uniform regulatory guidance.

Certain aspects of the U.S. banking oversight structure may also negatively affect regulatory effectiveness. According to FRS testimony, as of April 30, 1996, about 60 percent of the nation’s bank and thrift organizations were supervised by at least two different federal banking agencies. Some holding companies may be subject to oversight by three or all four of the federal oversight entities (see fig. 2.2). The overlapping authority in bank holding company supervision has sometimes been a problem, according to regulatory officials, because each regulator examines only a segment of the holding company and so must rely upon other regulators for information about the remaining segments. Banking officials have said this not only results in a fragmented approach to supervising and examining institutions but also ignores how the banking organization operates and hinders regulators from obtaining a complete picture of what is going on in the organization. According to these officials, the regulatory structure may result in potential blind spots in supervisory oversight and, therefore, may not be the most effective way to guard against risk to banking institutions or the banking system as a whole. Work that we have done supports these assessments.

Although banking officials have acknowledged weaknesses in the structure of the U.S. bank oversight system, they have also found strengths. For example, some regulatory officials believe that regulatory monopolies or single regulators run the risk of being inflexible and myopic; are slow to respond to changes in the marketplace; and, in the long term, are averse to risktaking and innovation by banking institutions. These officials have stated that having multiple federal regulators in the U.S. system has resulted in the diversity, inventiveness, and flexibility in the banking system that is important for responding to changes in market share and in technology. These officials consider the present system to be flexible enough to allow market-driven changes and innovations. The same officials have said that the present system of multiple regulators—with the ability of banking institutions to change charters—provides checks and balances against arbitrary actions and rigid and inflexible policies that could stifle healthy growth in the banking industry.
On the basis of the extensive work we have done in areas such as bank supervision, enforcement, failure resolution, and innovative financial activities—such as derivatives—we have previously identified four fundamental principles that we believe Congress could use when considering the best approach for modernizing the current regulatory structure. We believe that the federal bank oversight structure should include consolidated and comprehensive oversight of companies owning federally insured banks and thrifts, with coordinated functional regulation and supervision of individual components; independence from undue political pressure, balanced by appropriate accountability and adequate congressional oversight; consistent rules, consistently applied for similar activities; and enhanced efficiency and as low a regulatory burden as possible consistent with maintaining safety and soundness. Aspects of bank oversight systems in Canada, France, Germany, Japan, and the United Kingdom (U.K.) may be useful to consider when addressing bank oversight modernization. All of the foreign systems had fewer total entities overseeing banking institutions than did the U.S. system of bank oversight—ranging from one (U.K.) to three (France). No more than two oversight entities in the foreign countries were responsible for any single major oversight activity—chartering, regulation, supervision, or enforcement. In all five countries we studied, banking organizations typically were subject to consolidated oversight, with one oversight entity being legally responsible and accountable for the entire banking organization, including its banking and nonbanking subsidiaries. The oversight systems in the countries we reviewed generally included roles for both central banks and finance ministries. This reflects a close relationship of traditional central bank responsibilities with oversight of commercial banks as well as the national government’s ultimate responsibility to maintain public confidence and stability in the financial system. At the same time, most of the foreign countries incorporated checks and balances to guard against undue political influence and to ensure sound supervisory decisionmaking. The other countries’ deposit insurers had narrower roles than that of FDIC and often were not government entities. Finally, foreign systems incorporated a variety of mechanisms and procedures to ensure consistent oversight and improve efficiency. Compared to the U.S. bank oversight structure, with four federal agencies performing many of the same oversight functions, the other countries’ structures looked less complex (see table 3.1 for a brief overview of the other countries’ oversight systems). The total number of bank oversight entities in each of the countries we studied ranged from one (U.K.) to three (France). At one end of the spectrum was the Bank of England, which performed all bank oversight functions. At the other end, in France, were the three independent decisionmaking committees—chartering, regulating, and supervising—all of which were supported by central bank staff. The foreign systems also had fewer oversight entities engaged in chartering, regulation, supervision, and enforcement activities compared to the U.S. system. Although all four U.S. agencies issue rules, conduct examinations, and take enforcement actions—OCC and OTS are the only federal chartering authorities in the United States—the foreign systems had authorized no more than two agencies to perform each of those functions. 
In each of the countries we studied, chartering of commercial banking institutions was the responsibility of only one entity. This differs markedly from the U.S. system, in which banking institutions may be chartered by state banking commissions, OTS, or OCC. As in the United States, the chartering entities in the other countries assessed applications on the basis of several factors. The most universal of the factors considered were the adequacy of capital resources and the expertise and character of financial institution management. In the United States, as noted in chapter 1, a banking institution’s federal oversight agency is largely determined by the institution’s charter, and under most circumstances an institution may switch its charter in order to come under the jurisdiction of an agency it may favor. Such switching of regulators is not a possibility in the countries we studied. In contrast to the U.S. system, in which each of the four banking institution oversight entities is generally authorized to issue its own regulations or regulatory guidelines, responsibility for issuing regulations in the countries we studied was usually limited to one entity. In France, this responsibility was assigned to the Bank Regulatory Committee; in Germany, to the Federal Bank Supervisory Office (FBSO); in Japan, to the Ministry of Finance; and in the U.K., to the Bank of England. In Canada, however, the bank supervisor and the deposit insurer were both authorized to issue regulations or standards. The insurer had the authority to issue standards pertaining to its operations and functions and those of its members. To guard against monolithic decisionmaking, the regulatory processes in all five countries were designed to include the views of other agencies involved in bank, securities and insurance oversight, and those of the regulated industry. The single-regulator approach in four of the foreign countries we studied and the coordination of regulation between the federal regulator and the deposit insurer in Canada meant that in all five countries, all banking institutions conducting the same lines of business were subject to the same safety and soundness standards, including rules related to permissible activities. This contrasts with the four regulator system in the United States, as discussed in chapter 2. In the countries we studied, major supervisory activities were never shared by more than two entities. For purposes of our analysis, we defined these activities as (1) monitoring banks’ financial condition and operations through on-site examinations or inspections, (2) monitoring through the collection and analysis of data in reports filed by banks and through meetings with bank officials and others, and (3) enforcing laws and regulations through formal or informal actions. In Canada, both the bank supervisor and the deposit insurer performed supervisory duties. In France, the supervisory duties were performed by the committee called the Banking Commission; in Germany, by the federal bank supervisor and the central bank; in Japan, by the Ministry of Finance and the Bank of Japan; and in the U.K., by the Bank of England. In four of the five countries we studied, the responsibility for taking formal enforcement actions was limited to one supervisor. For instance, in the U.K., the Bank of England was solely responsible for formal enforcement actions. 
In Germany, the federal supervisor was responsible for enforcement actions; in France, the Banking Commission; and, in Japan, the Ministry of Finance. Canada’s deposit insurer could take specific, narrowly defined enforcement actions to protect the deposit insurance fund, such as levying a premium surcharge on individual members or terminating an insured institution’s deposit insurance. Most of the other countries’ bank supervisors said they conducted on-site examinations less frequently than U.S. bank supervisors, and they said that the examinations conducted were often narrower in scope than U.S. examinations. In France, on-site examinations were conducted, on average, less often than once every 4 years, depending on the institutions being examined. In Japan, examinations were conducted approximately every 1 to 3 years. Canada’s frequency of on-site examinations, like that of U.S. supervisors, was to be once a year. Supervisors in Germany and the U.K. said they relied on information collected for them by external auditors rather than conducting their own regularly scheduled on-site examinations. In the three countries that conducted regular on-site examinations, the examinations were to primarily assess the safety and soundness of bank operations and verify the accuracy of data submitted for off-site monitoring purposes. Special purpose examinations, in Canada and elsewhere, were also to be conducted across the industry to determine how specific issues—such as corporate governance—were being handled across the banking system. In monitoring the financial conditions and operations of banks, most of the supervisory entities in other countries said they generally relied more extensively than supervisors in the United States on off-site information, primarily information in periodic reports submitted by banking institutions. Reporting by banks included information on assets, liabilities, and income, as is the case in the United States, as well as more detailed information. In France, for example, the Banking Commission had implemented a new reporting system for credit institutions for the purpose of collecting and analyzing information for prudential, monetary, and balance of payments purposes. The system was intended to provide an early warning of potential problems in individual banks or in the banking system as a whole. Indicators of potential safety and soundness problems were typically to be discussed with bank officials, whether in meetings or correspondence, and could trigger an on-site examination. Banks in several of the countries were also required to submit information on their major credit exposures, which the regulators could analyze for excessive growth or concentrations that might indicate safety and soundness problems for either the individual bank or the banking system. Other important sources of information included meetings with bank management. For example, supervisors said they often met with management to follow up on information collected through their off-site monitoring. Such meetings could include questions about potential informational discrepancies and any business implications, or they could provide an opportunity for discussions about the institution’s operations. In three countries (Canada, Germany, and the U.K.), work performed by banks’ external auditors also contributed significantly to supervisory information (see discussion below on the contribution of external auditors to bank supervision). As discussed in chapter 2, U.S.
federal bank supervisors also monitor the condition of banks using information contained in periodic reports and discussions with bank management. However, U.S. regulators do not collect some of the information that is used for risk assessment purposes overseas, such as the reporting of large credit exposures. As has often been true in the United States, supervisors in each of the countries we reviewed said they preferred to rely principally on informal enforcement actions, such as warnings or persuasion and encouragement. Informal actions generally were regarded by supervisors as easier and faster to put into effect and sufficiently flexible to ensure that the institutions took timely corrective actions. Supervisors also told us that banking institutions understood that if they did not comply with informal actions and recommendations, formal actions were sure to follow. While authorization to take formal actions in most of the foreign countries was limited to the primary supervisor, informal actions sometimes could be taken by more than one oversight entity. In Germany, for example, the central bank could suggest to banks remedies for perceived shortcomings and recommend enforcement actions to the federal supervisor. In Japan, the Bank of Japan also could recommend informal enforcement actions, such as suggested remedies to perceived problems. In Canada, the deposit insurer could recommend enforcement actions to the supervisor as well as take some limited enforcement actions on its own if the insurance fund was considered at risk. The financial services industries in the five countries have, over time, experienced serious failures, control problems, or other financial difficulties that have resulted in significant changes or at least the consideration of such changes to bank oversight structures. These changes include a strengthened on-site examination capability and an increased formality in the supervisory process and use of enforcement actions in several countries. In the five countries we studied, banking organizations typically were subject to consolidated oversight, with an oversight entity responsible and accountable for an entire banking organization, including banking and nonbanking subsidiaries. For instance, if a bank had nonbank subsidiaries regulated by securities or insurance regulators, bank regulators nonetheless were responsible for supervisory oversight of the bank as a whole. The bank regulators would generally rely on the nonbank regulators’ expertise in overseeing the bank’s subsidiaries. For example, in France, the Banking Commission was responsible for the supervision of the parent bank and the consolidated entity, even though securities or insurance activities in bank subsidiaries were the responsibility of other regulators in those areas. In Canada, the federal supervisor was responsible for all federally incorporated financial institutions, such as banks, insurance companies, and trust companies. Securities subsidiaries of banks were the responsibility of provincial securities regulators who shared information with the bank regulator for purposes of consolidated oversight. Regulators in the U.K. also operated under the consolidated oversight approach. For a bank that owned nonbank subsidiaries, the Bank of England remained the lead regulator and had responsibility for the entity as a whole. However, it relied on the expertise of securities and insurance supervisors to provide information on subsidiaries conducting such activities. 
If the major top-level entity was a securities firm that owned a bank, then the securities regulator was the lead regulator of the entire entity and would rely on the bank regulator for information about the bank. If banks conducted securities or other activities within the bank department rather than in a nonbank subsidiary, then the bank regulator retained supervisory responsibility. In Germany, for example, where universal banks were able to conduct an array of activities, from deposits to securities, within the banking institution, the federal supervisor was responsible for all bank and nonbank activities conducted within a bank. The oversight systems in the countries we reviewed generally included roles for both central banks and finance ministries, reflecting the close relationship of traditional central bank responsibilities with oversight of commercial banks as well as the national government’s ultimate responsibility to maintain public confidence and stability in the financial system. Central banks generally played significant roles in supervision and regulatory decisionmaking in the countries we studied, largely based on the premise that central bank responsibilities for monetary policy and other functions, such as crisis intervention, oversight of clearance and settlements systems, and liquidity lending, are interrelated with bank oversight. Although no two countries had identical structures for including central banks in bank oversight, they each accorded their central banks roles that ensured access to, and certain influence over, the banking industry. The central bank’s role was most direct in the U.K., where the Bank of England had sole responsibility for the authorization, regulation, and supervision of banks. Canada had a far less direct role for its central bank in supervision and regulation. Even so, the Bank of Canada influenced supervisory and regulatory decisionmaking as a member of (1) the deposit insurance board; (2) the Financial Institutions Supervisory Committee, an organization established to enhance communication among participants in financial institution regulation and supervision; and (3) the Senior Advisory Committee, which was to meet to discuss major policy changes or legislative proposals affecting bank oversight. However, it had no direct authority over supervisory or regulatory decisionmaking. In France, Germany, and Japan, the central bank was one of two principal oversight agencies, but the countries had different structures for involving the central banks in bank oversight. In Germany, the primary supervisor, not the central bank, was authorized to issue banking regulations and, with few exceptions, issue or revoke bank licenses and take enforcement actions against banks. However, a sharp contrast existed between the legally assigned responsibilities of the central bank and its de facto sharing of oversight responsibilities with the federal bank supervisor. The central bank and the federal bank supervisor worked closely together and were considered partners in the formulation of regulatory and supervisory policies. The supervisor was to consult the central bank about all regulations; the central bank was substantively involved in the development of most of the regulations and could veto some. It also had the most active role in the day-to-day supervision of banks and was very influential in determining the enforcement actions to be taken by the federal bank supervisor.
The German central bank’s influence in bank oversight arose from its detailed knowledge of the banks, certain legal requirements that it be consulted before supervisory or regulatory action was taken, and the general perception that its nonoversight responsibilities were closely linked with bank oversight. The central bank of France was also very involved in bank oversight, but the structural basis for its involvement differed significantly from that in Germany. The decisionmaking responsibilities for supervision and regulation of banking institutions in France were divided among three different but interrelated oversight committees: one for chartering, one for regulation, and one for supervision. The Bank of France was a member of each of these committees. Its influence over bank oversight stemmed from its chairmanship of two of the three oversight committees—the committee for chartering and the committee for supervision (the Banking Commission); the fact that it staffed all three oversight committees and the examination teams; its authority in financial crises; and its importance in and influence over French financial markets. The Japanese central bank also had some oversight responsibilities derived principally from the contractual agreements it made with financial institutions that opened accounts with the Bank of Japan—including all commercial banks. As a result, it examined these banks on a rotational basis with the Ministry of Finance and also met regularly with bank management. Although only the Ministry had the legal authority to take formal enforcement actions, the central bank provided guidance that banks usually interpreted as binding. In all of the countries we reviewed, finance ministries were included in oversight structures, although their roles varied. In some countries, the bank supervisors reported to the finance ministries and the finance ministries had final approval authority for regulations or enforcement actions. In other cases, the finance ministry acted as the principal supervisor or a representative of the finance ministry participated as a member of a decisionmaking committee. In most countries, the finance ministries received industrywide information to assist in discharging fiscal policy and other responsibilities. They often did not receive bank-specific information unless the regulator believed an institution to be a potential threat to system stability. In such situations, the finance ministry was to be apprised for crisis management and information purposes, as were the central bank and deposit insurer, in order to ensure that each could effectively carry out its respective responsibilities. In Canada and Germany, the principal bank supervisor reported to the Minister of Finance. The oversight entities that reported to the finance ministries said that on day-to-day issues they had a significant amount of independence—the government was generally informed only of key regulatory or supervisory decisions. However, the agreement of the finance ministry was usually necessary for these decisions to be carried out. In France, the Ministry of Economic Affairs was represented on each of the three independent oversight committees and chaired one of them.
According to oversight and banking officials with whom we spoke, its influence over bank oversight was derived primarily from its chairmanship of the bank regulatory committee and its membership on the chartering and oversight committees, as well as from its position of power in the French cabinet, including its powers of final approval with regard to bank regulations. In Japan, the Ministry of Finance was the formal supervisor of banking institutions. It was solely responsible for chartering banking institutions, taking formal enforcement actions, and developing and issuing regulations. It also examined banks and conducted off-site monitoring. In the U.K., the Bank of England reports to the Chancellor of the Exchequer, who heads the Treasury. The Treasury has no formal role in banking supervision, although it would expect to be consulted on any major regulatory or supervisory decision. The Chancellor does have the power to issue directions to the Bank of England after consultation with the Governor of the Bank, ensuring that the government would have the final say in the event of a disagreement. Historically, the Bank of England has been accorded a high degree of independence in bank regulation and supervision. The other countries’ systems of bank oversight incorporated various checks to guard against undue political influence and to ensure sound decisionmaking. These checks included shared responsibilities and decisionmaking as well as the involvement of banking institutions in the development of bank oversight policies and other decisions. According to Canadian officials, a degree of overlapping authority of the federal supervisor and the deposit insurer (whose governing board is to include four directors from the private sector) plays a useful role in ensuring integrity in bank oversight. For example, the independent assessments of the deposit insurer could provide a constructive second look at the bank supervisor’s oversight practices. Similarly, the interactions of the supervisor with banking institutions could help the insurer assess risks of particular banking practices. Finally, the federal supervisor is required to consult extensively with banking industry representatives in developing regulations and guidelines. In Canada, the large size and small number of banks enabled banks to be influential players in the financial system, according to supervisory and central bank staff. The large banks believed they had a special responsibility for helping to ensure the stability of the financial system, as well as a self-interest in that stability. We were told by management of some of the major banks that they often related concerns and offered comments about other banks or financial institutions to the federal supervisor or the central bank. In France, a rationale for the committee oversight structure—with the Bank of France and the Ministry of Economic Affairs participating jointly on the committees—was to ensure that no single individual or agency could dominate or dictate oversight decisionmaking, according to Bank of France officials. In addition, the committee structure ensures that the interests of banks are represented. Each of the three bank oversight committees includes four members drawn from outside the Bank of France and the Ministry of Economic Affairs, including representatives of the banking industry.
In Germany, the decisionmaking power of the politically accountable federal bank supervisor was checked by the participation in bank oversight of the very independent central bank. Without the central bank’s accord, very few, if any, important supervisory or regulatory actions would be taken. The central bank’s express approval was legally required for certain regulations, such as those affecting liquidity and capital requirements, to take effect. In addition, the federal supervisor was required by law to consult with banking associations when changes to banking law or regulations were being considered and before banking licenses were issued. In Japan, the Ministry of Finance typically developed policy by consensus, according to Ministry officials—a process that usually involved the input of many parties, such as the central bank, other government agencies, industry groups, and governmental policy councils. In addition, the Japanese central bank’s participation in bank oversight could provide a second opinion on some oversight issues. In the U.K., the Banking Act of 1987 formally established an independent body, known as the Board of Banking Supervision, to bring independent commercial banking experience to bear on banking supervisory decisions at the highest level. In addition to three ex officio members from the Bank of England, the Board’s members are to include six independent members who are to advise the ex officio members on policymaking and enforcement issues. If the Bank decides not to accept the advice of the independent members of the Board, then the ex officio members are to give written notice of that fact to the Chancellor of the Exchequer. Deposit insurers in the countries we studied generally had narrower roles than FDIC. This less substantial oversight role may be attributable to the fact that national governments provided no explicit guarantees of deposit insurance and that deposit insurers were often industry-administered. The foreign deposit insurers we studied did not have a role in bank oversight as substantial as FDIC’s. As discussed in chapter 1, FDIC is the administrator of federal deposit insurance, the primary federal regulator and supervisor for state-chartered banks that are not members of FRS, and the entity with primary responsibility for determining the least costly resolution of failed banks. In most countries, by contrast, deposit insurers were viewed primarily as a source of funds to help resolve bank failures—either by covering insured deposits or by helping to finance acquisitions of failed or failing institutions by healthy institutions. Supervisory information was generally not shared with these deposit insurers, and resolution decisions for failed or failing banks were commonly made by the primary bank oversight entities, with the insurer frequently involved only when its funds were needed to help finance resolutions. The broader role of FDIC as compared to deposit insurers in other countries may be attributable in part to the fact that deposit insurance is federally guaranteed in the United States. For example, FDIC’s involvement in bank resolutions—particularly its responsibility to determine the least costly resolution method—helps protect the interests of both the industry and potentially of taxpayers when a bank fails. None of the governments of the other countries we studied provided such an explicit guarantee. Four of the five deposit protection programs—Germany is the exception—also provide less coverage than does the U.S. system.
In Germany and France, deposit protection systems were administered by banking associations, with no direct government involvement. The German commercial banking association administered Germany’s deposit protection plan for commercial banks. The association obtained independent information about its members through external audits conducted by an accounting firm affiliate. It also could play a significant role in resolving troubled institutions. It had the power to intervene and attempt to resolve a member bank’s difficulties and could be pressured by the central bank or bank supervisor to do so. Thus, the German banking industry generally resolved its own problems. In France, the deposit protection system—a loss-sharing agreement among member banks—was administered by the French Bank Association. The French Bank Association itself played a relatively minor role in resolving bank problems. Instead, the Banking Commission was responsible for resolving troubled institutions. In the U.K. and Japan, the responsibility for the administration of deposit insurance was shared by government and the banking industry. Deposit insurers were independent bodies whose boards of directors were headed by government officials and included members from the banking industries. In these countries, the government, not the banking associations, resolved banking institutions’ problems. Canada’s oversight system was most similar to that of the United States. The Canadian deposit insurer did not act as a primary supervisor for any banking institutions; however, like FDIC, it had examination and rulemaking authority—although its powers were more limited than those of FDIC’s. It could take limited enforcement action and was represented on two of Canada’s oversight-related committees. The Canadian deposit insurer generally relied on the primary banking supervisor for examination information it needed to safeguard insurance funds. Until a financially troubled institution was declared insolvent and was placed in liquidation, the bank supervisor had the lead role in resolving that institution. However, the supervisor was to continuously inform the deposit insurer of the institution’s status. The deposit insurer could order a special examination to determine its exposure and possible resolution options if the institution failed. In the case of a failure, the deposit insurer was responsible for developing resolution alternatives and for implementing the chosen resolution plan. Most of the foreign structures with multiple oversight entities incorporated mechanisms and procedures that could ensure consistent and efficient oversight. Some countries relied on the work of external auditors, at least in part, for purposes of efficiency. Unlike in the United States, bank oversight in these countries generally did not include consumer protection or social policy issues. Coordination mechanisms designed to ensure consistency and efficiency in oversight in the countries we studied included oversight committees or commissions with interlocking boards, shared staff, and mandates or mechanisms to share information and avoid duplication of effort. In Canada, the federal bank supervisor, central bank, and finance ministry each had a seat on the deposit insurer’s board of directors and participated with the deposit insurer on various advisory committees. 
Also, the Canadian deposit insurer, which had backup supervisory authority to request or undertake special examinations of high-risk institutions, was required to rely for much of its information on the primary supervisor, whose examiners conducted all routine bank examinations and engaged in other data collection activities. In France, central bank employees staffed all three committees charged with oversight responsibilities for chartering, rulemaking, and supervision. In addition, the central bank and the Ministry of Economic Affairs were represented on each of the three oversight committees. In Germany, the central bank and the federal bank supervisor used the same data collection instruments. They were also legally required to share information that could be significant in the performance of their duties. Bank supervisors in three of the five countries whose systems we reviewed used the work of the banks’ external auditors as an important source of supervisory information. In the most striking contrast with the United States’ system, supervisors in Germany and the U.K. used external auditors as the primary source of monitoring information. In Canada, as in the United States, the primary supervisor conducted examinations; information from the banks’ external auditors was to be used to supplement and guide these examinations. Supervisors in all three countries recognized that auditors’ objectives for reviewing a bank’s activities could differ from those of a supervisor, and they also recognized that a degree of conflict could exist between the external auditors’ responsibilities to report to both their bank clients and to the bank supervisory authorities. However, they generally believed that their authority over auditors’ engagements was sufficient to ensure that the external auditors properly discharged their responsibilities and openly communicated with both their bank clients and the oversight authorities. In both Germany and the U.K., supervisors’ use of external auditors’ work was adopted at least in part for purposes of efficiency. In Germany, the use was part of an explicit plan to minimize agency staffing and duplication of effort between examiners and auditors. In the U.K., the use was seen as the most efficient way of introducing the necessary checks on systems controls and as a method compatible with the Bank of England’s traditional approach of supervising banks “based on dialogue, prudential returns, and trust,” according to Bank of England officials. Canada, Germany, and the U.K. differed from the United States in three other important ways. First, all banking institutions in the three foreign countries were required to have external audits. As discussed in chapter 2, large U.S. banks are required by U.S. oversight agencies to have external audits, and others are encouraged to do so. Second, bank supervisors in the three foreign countries had more control than U.S. bank supervisors over the work performed by external auditors. In Germany and the U.K., external audits were conducted using specific guidelines developed by the bank regulators, and the scope of individual audits could be expanded by all three regulators, or special audits ordered, to address issues of regulatory concern. By contrast, U.S. supervisors have more limited authority over the scope of external audits. Third, external auditors in the three foreign countries had affirmative obligations to report findings of concern to supervisors.
In Canada, external auditors are required to report simultaneously to the institution’s CEO and the bank supervisor anything discovered that might affect the viability of the financial institution. In Germany, external auditors are required by law to immediately report to the bank supervisor information that might result in qualification of the report or a finding of a significant problem. In the U.K., external auditors are required to report to the central bank any breaches in the minimum authorization criteria as well as expectations of a qualified or adverse report. In the United States, however, external auditors are required merely to notify the appropriate banking agency if they withdraw from an engagement. External auditors are required to withdraw from an audit engagement if identified problems are not resolved or if bank management refuses to accept their audit report. Further detail about the role of external audits in U.S. bank supervision is provided in appendix II. Bank oversight in the countries we studied was focused almost exclusively on ensuring the safety and soundness of banking institutions and the stability of financial markets and generally did not include consumer protection or social policy issues. The national governments of the countries we studied used other mechanisms to address these issues or to promote these goals. Consumer protection and antidiscrimination concerns were addressed in many of the other countries by industry associations and government entities other than bank regulators and supervisors. In addition, some of the policy mechanisms used to encourage credit and other services in low- and moderate-income areas in these countries included the chartering of specialized financial institutions and direct government subsidies for programs to benefit such areas. In Canada, for example, the banking industry developed voluntary guidelines related to consumer and small business lending, partly to prevent the need for legislated solutions to perceived problems. Similarly, the banking industries in France and the U.K. also developed industry guidelines on issues such as consumer protection. Bank supervisors in Canada and the U.K. were not responsible for enforcing compliance with these guidelines and best practices, but the bank supervisor in France did have such responsibility. In addition, bank supervisors in the countries we studied were not expressly responsible for assessing compliance with other consumer protection laws, like those involving discrimination or antitrust; but they were responsible, in some countries, for advising their Justice Department equivalents of potential violations identified in carrying out their bank oversight duties. Officials in these countries suggested that concern about and attention to various consumer issues were increasing, but they did not anticipate that bank regulators would assume any new responsibilities in this area. The division of responsibilities among the four federal bank oversight agencies in the United States—FDIC, FRS, OCC, and OTS—is not based on specific areas of expertise, functions, or activities, either of the regulators or of the banks for which they are responsible. Rather, it is based on institution type (bank or thrift), charter type (national or state), and whether banks are members of FRS. Consequently, the four oversight agencies share responsibility for developing and implementing regulations, taking enforcement actions, and conducting examinations and off-site monitoring.
Analysts, legislators, banking institution officials, and numerous past and present agency officials have identified weaknesses and strengths in this oversight structure. Some representatives of these groups have broadly characterized the federal system as redundant, inconsistent, and inefficient. Some banking institution officials have also raised concerns about negative effects of the structure on supervisory effectiveness. Some regulators, banking institutions, and analysts alike have asserted that the multiplicity of regulators has resulted in inconsistent treatment of banking institutions in examinations, enforcement actions, and regulatory decisions, despite interagency efforts at coordination. We have cited significant inconsistencies in examination policies and practices among FDIC, OCC, OTS, and FRS, including differences in examination scope, frequency, documentation, loan quality and loss reserve evaluations, bank and thrift rating systems, and examination guidance and regulations. At the same time, some agency and institution officials have credited the current structure with encouraging financial innovations and providing checks and balances to guard against arbitrary oversight decisions or actions. As a result of concerns about the current oversight structure, many proposals have been made to restructure the multiagency system of bank regulation and supervision. These proposals have not been implemented, partly as a result of assertions by FRS and FDIC officials that they rely on information obtained under their respective supervisory authorities to fulfill their nonoversight duties: monetary policy development and implementation, liquidity lending, and operation and oversight of the nation’s payment and clearance systems for FRS; administration of the deposit insurance funds, resolution of failing or failed banks, and disposition of failed bank assets for FDIC. As we have pointed out in the past, the extent to which FRS needs to be a formal supervisor of financial institutions to obtain the requisite knowledge and influence for carrying out its role is an important question that involves policy judgments that only Congress and the President can make. Nevertheless, past experience, as well as evidence from the five foreign oversight structures we studied (see below for further discussion) provides support for the need for FRS to obtain direct access to supervisory information. We have also favored a strong, independent deposit insurance function to protect the taxpayers’ interest in insuring more than $2.5 trillion in deposits. Nonetheless, previous work we have done suggests that a strong deposit insurance function can be ensured by providing FDIC with (1) the ability to go into any problem institution on its own, without having to obtain prior approval from another regulatory agency; (2) the capability to assess the quality of bank and thrift examinations, generally; and (3) backup enforcement authority. Treasury also has several responsibilities related to bank oversight, including being the final decisionmaker in approving an exception to FDIC’s least-cost rule. In addition, Treasury plays a major role in developing legislative and other policy initiatives with regard to financial institutions. Such responsibilities require that Treasury regularly obtain information about the financial and banking industries and, at certain times, institution-specific information. 
According to Treasury officials, Treasury’s current level of involvement, through its housing of OCC and OTS and their involvement on the FDIC Board of Directors, and the information it receives from the other agencies as needed, is sufficient for it to carry out these responsibilities. On the basis of the work we have done in areas such as bank supervision, enforcement, failure resolution, and innovative financial activities—such as derivatives—we have previously identified four fundamental principles that we believe Congress could use when considering the best approach for modernizing the current regulatory structure. We believe that the federal bank oversight structure should include: (1) clearly defined responsibility for consolidated and comprehensive oversight of entire banking organizations, with coordinated functional regulation and supervision of individual components; (2) independence from undue political pressure, balanced by appropriate accountability and adequate congressional oversight; (3) consistent rules, consistently applied for similar activities; and (4) enhanced efficiency and reduced regulatory burden, consistent with maintaining safety and soundness. In five recent reports, we reviewed the structure and operations of bank regulation and supervision activities in Canada, France, Germany, Japan, and the U.K. Each of the oversight structures of these five countries reflects a unique history, culture, and banking industry, and, as a result, no two of the five oversight structures are identical. Also, all of the countries we reviewed had more concentrated banking industries than does the United States, and all but Japan had authorized their banks to conduct broad securities and insurance activities in some manner. Nevertheless, certain aspects of these structures may be useful to consider in future efforts to modernize banking oversight in the United States, even though no structure as a whole likely would be appropriate to adopt in the United States. In the five countries we studied, banking organizations typically were subject to consolidated oversight, with an oversight entity being legally responsible and accountable for the entire banking organization, including its subsidiaries. If securities, insurance, or other nontraditional banking activities were permissible in bank subsidiaries, functional regulation of those subsidiaries was generally to be provided by the supervisory authority with the requisite expertise. Bank supervisors generally relied on those functional regulators for information but remained responsible for ascertaining the safety and soundness of the consolidated banking organization as a whole. The number of national bank oversight entities in the countries we studied was fewer than in the United States, ranging from one in the U.K. to three in France. Furthermore, in all five countries no more than two national agencies were ever significantly involved in any one major aspect of bank oversight, such as chartering, regulation, supervision, or enforcement. Commercial bank chartering, for example, was the direct responsibility of only one entity in each country. In those countries where two entities were involved in the same aspect of oversight, the division of oversight responsibilities generally was based on whichever entity had the required expertise. 
The central banks in the countries we studied generally had significant roles in supervisory and regulatory decisionmaking; that is, with the exception of the Canadian central bank, their staffs were directly involved in aspects of bank oversight, and all central banks had the ability to formally or informally influence bank behavior. In large part, central bank involvement was based on the premise that traditional central bank responsibilities for monetary policy, payment systems, liquidity lending, and crisis intervention are closely interrelated with oversight of commercial banks. While no two countries had identical oversight roles for their central banks, each country had an oversight structure that ensured that its central bank had access to information about, and certain influence over, the banking industry. In each of the five countries, the national government recognized that it had the ultimate responsibility to maintain public confidence and stability in the financial system. Thus, each of the bank oversight structures that we reviewed also provided the Ministry of Finance, or its equivalent, with some degree of influence over bank oversight and access to information. Although each country included its finance ministry in some capacity in its oversight structure, most also recognized the need to guard against undue political influence by incorporating checks and balances unique to each country. While central banks and finance ministries generally had substantial roles in bank oversight, deposit insurers, with the exception of the Canada Deposit Insurance Corporation, did not. Their less substantial oversight role may be attributable to the fact that national governments provided no explicit guarantees of deposit insurance and that deposit insurers were often industry-administered. Thus, in most of these countries, deposit insurers were viewed primarily as a source of funds to help resolve bank failures—either by covering insured deposits or by helping to finance acquisitions of failed or failing institutions by healthy institutions. Supervisory information was generally not shared with these deposit insurers, and resolution decisions for failed or failing banks were commonly made by the primary bank oversight entities. Most of the foreign structures with multiple oversight entities incorporated mechanisms and procedures that could ensure consistent and efficient oversight. As a result, banking institutions that were conducting the same lines of business were generally subject to a single set of rules, standards, or guidelines. Coordination mechanisms included having oversight committees or commissions with interlocking boards, shared staff, or mandates to share information. Some countries relied on the work of external auditors, at least in part, for purposes of efficiency. Bank oversight in these countries generally did not include consumer protection or social policy issues. There are many practical problems associated with creating a new agency or consolidating existing functions. Although such issues were beyond the scope of this report, it remains important that transition and implementation issues be thoroughly considered in deliberations about any modernization of bank oversight. GAO’s work on the five foreign oversight systems showed that there are a number of different ways to simplify bank oversight in the United States in accordance with the four principles of consolidated oversight, independence, consistency, and enhanced efficiency and reduced burden. 
GAO recognizes that only Congress can make the ultimate policy judgments in deciding whether, and how, to restructure the existing system. If Congress does decide to modernize the U.S. system, GAO recommends that Congress:

Reduce the number of federal agencies with primary responsibilities for bank oversight. GAO believes that a logical step would be to consolidate OTS, OCC, and FDIC’s primary supervisory responsibilities into a new, independent federal banking agency or commission. Congress could provide for this new agency’s independence in a variety of ways, including making it organizationally independent like FDIC or FRS. This new independent agency, together with FRS, could be assigned responsibility for consolidated, comprehensive supervision of those banking organizations under its purview, with appropriate functional supervision of individual components.

Continue to include both FRS and Treasury in bank oversight. To carry out its primary responsibilities effectively, FRS should have direct access to supervisory information as well as influence over supervisory decisionmaking and the banking industry. The foreign oversight structures GAO reviewed showed that this could be accomplished by having FRS be either a direct or indirect participant in bank oversight. For example, FRS could maintain its current direct oversight responsibilities for state-chartered member banks or be given new responsibility for some segment of the banking industry, such as the largest banking organizations. Alternatively, FRS could be represented on the board of directors of a new consolidated banking agency or on FDIC’s board of directors. Under this alternative, FRS’ staff could help support some of the examination or other activities of a consolidated banking agency to better ensure that FRS receives firsthand information about, and access to, the banking industry. To carry out its mission effectively, Treasury also needs access to supervisory information about the condition of the banking industry as well as the safety and soundness of banking institutions that could affect the stability of the financial system. GAO’s reviews of foreign regulatory structures provided several examples of how Treasury might obtain access to such information, such as having Treasury represented on the board of the new banking agency or commission and perhaps on the board of FDIC as well.

Continue to provide FDIC with the necessary authority to protect the deposit insurance funds. Under any restructuring, GAO believes FDIC should still have an explicit backup supervisory authority to enable it to effectively discharge its responsibility for protecting the deposit insurance funds. Such authority should require coordination with other responsible regulators, but should also allow FDIC to go into any problem institution on its own without the prior approval of any other regulatory agency. FDIC also needs backup enforcement power, access to bank examinations, and the capability to independently assess the quality of those examinations.

Incorporate mechanisms to help ensure consistent oversight and reduce regulatory burden. Reducing the number of federal bank oversight agencies from the current four should help improve the consistency of oversight and reduce regulatory burden.
Should Congress decide to continue having more than one primary federal bank regulator, GAO believes that Congress should incorporate mechanisms into the oversight system to enhance cooperation and coordination between the regulators and reduce regulatory burden. Although GAO does not recommend any particular action, such mechanisms—which could be adopted even if Congress decides not to restructure the existing system—could include (1) expanding the current mandate of FFIEC to help ensure consistency in rulemaking for similar activities in addition to consistency in examinations; (2) assigning specific rulemaking authority in statute to a single agency, as has been done in the past when Congress gave FRS statutory authority to issue rules for several consumer protection laws that are enforced by all of the bank regulators; (3) requiring enhanced cooperation between examiners and banks’ external auditors (while GAO strongly supports requirements for annual full-scope, on-site examinations for large banks, GAO believes that examiners could take better advantage of the work already being done by external auditors to better plan and target their examinations); and (4) requiring enhanced off-site monitoring to better plan and target examinations as well as to identify and raise supervisory concerns at an earlier stage. FRS, FDIC, OCC, and OTS provided written comments on a draft of this report, which are described below and reprinted in appendixes IV through VII. Treasury also reviewed a draft and provided oral technical comments, which we incorporated where appropriate. FRS agreed that it is useful to consider the experience of other countries in making policy determinations. It also agreed that there are different ways to accommodate the policy goal of modernizing the U.S. supervisory structure. FRS reiterated its opinion that the purpose of bank supervision is to enhance the capability of the banking system to contribute to long-term national economic growth and stability. FRS agreed with our description of the direct involvement of central bank staff in bank oversight in the countries we studied and our recommendation that FRS continue to be included in bank oversight. However, it felt that we should be more specific in stating that FRS needs “active supervisory involvement in the largest U.S. banking organizations and a cross-section of other banking institutions” to carry out its key central banking functions. To clarify what was meant by this statement, a senior FRS official advised us that FRS’ present regulatory authority gives it the access and influence it needs. But if the regulatory structure were changed so that there was only one federal regulator for each banking organization—holding company and all bank subsidiaries—then FRS believes that it would have to be the regulator for the largest banking organizations and a cross-section of others in order to carry out its key central banking functions. We agree that FRS needs to have direct access to supervisory information as well as the ability to influence supervisory decisionmaking and the banking industry if the oversight structure is changed. However, in our studies of foreign oversight structures we found that direct central bank involvement in bank oversight, and access to and influence over the banking industry, could be accomplished in several ways.
These could include giving the central bank a formal role as bank supervisor, having it participate on oversight boards with staff involvement in examinations and other areas of supervision, or having it serve in informal yet influential roles in which central bank staff participated in oversight. FRS also noted that 88 percent of U.S. banks are part of banking organizations that are actively supervised by no more than two oversight agencies. The portion of activities supervised by the third or fourth agency in holding companies where more than two agencies are involved in oversight is generally small. We acknowledge that most U.S. banks are supervised by no more than two federal banking supervisory agencies. Nevertheless, as the table provided by FRS shows (see app. IV), more than 50 percent of bank assets are held in companies that are supervised by three or four of these agencies. Furthermore, it is the larger, more complex banking institutions—whose failure could pose the greatest danger to the financial system—that are likely to be subject to oversight by more than two agencies, with the potential attendant oversight problems described in our report. In addition, the percentage of assets supervised by additional agencies—which may be relatively small—does not indicate their importance or potential risk to the banking organization. FDIC provided four fundamental principles for an effective bank regulatory structure, which are generally consistent with the principles and recommendations that we advocate. These principles include providing FDIC with an explicit backup supervisory authority, backup enforcement power, and the capability to assess the quality of bank and thrift examinations. We also support providing FDIC with such backup authority. FDIC also noted that the broader regulatory responsibilities related to the role of the deposit insurer require current and sufficient information on the ongoing health and operations of financial institutions. In FDIC’s judgment, periodic on-site examination remains one of the essential tools by which such information may be obtained. FDIC commented on the mechanisms we described that Congress might consider to enhance regulators’ cooperation and coordination and reduce regulatory burden, noting that the current processes for coordinating regulation allow for the consideration of the unique regulatory perspectives of each agency. We agree that the present practice of cooperation, coordination, and communication among the agencies in rulemaking allows the unique viewpoints of each of the oversight agencies to be considered. The assignment of rulemaking authority to a single agency would not preclude incorporating other viewpoints, as evidenced by the current rulemaking process with regard to some consumer protection regulations, where a single agency has been assigned such authority. We believe assigning rulemaking authority for safety and soundness regulations to a single agency could be one way to attain a more efficient regulatory process. OCC described our report as comprehensive and conveying more about the foreign regulatory structures than has been available to the public, albeit not exhaustive. OCC agreed with us that the foreign structures are not readily adaptable to the United States and described some of its observations about the differences among the five countries’ regulatory structures. Consequently, OCC suggested that Congress consider our suggestions very carefully in making any changes to the oversight structure in the United States.
We agree that Congress should be cautious in any consideration it gives to changing the regulatory structure. OTS generally concurred with our principal recommendations and restated its position that consolidation will make the bank oversight system more efficient and effective. It added that reducing the number of federal oversight agencies should be done in a way that preserves a strong and stable regulatory environment and protects agency employees. We agree that the consolidation of any oversight agencies should be done in a way that preserves a strong and stable regulatory environment that is effective, efficient, and responsive to the needs and risks of the supervised institutions. FRS, FDIC, and OTS also noted several regulatory actions and other initiatives underway that are designed to improve coordination—including joint or coordinated examinations—and reduce regulatory and supervisory redundancy and overlap. We believe such efforts are important to the consistency and efficiency of the regulatory structure and have incorporated this information into our report where appropriate. The comment letters from FRS, FDIC, and OTS attest to the unique perspectives of each of the oversight agencies, which we believe provide valuable insights to Congress. As we describe in our report, there is a range of ways to address our recommendations and to capture these perspectives in any congressional consideration of changing the current U.S. bank oversight structure. Therefore, we have incorporated the agencies’ insights in the report where appropriate. In addition, we have included descriptions of the interagency efforts discussed in the agencies’ responses to improve coordination and cooperation and reduce regulatory burden.

Pursuant to a congressional request, GAO reviewed its previous work on the structure and operations of bank oversight in five countries, focusing on: (1) aspects of those systems that may be useful for Congress to consider in any future modernization efforts; (2) perceived problems with federal bank oversight in the United States; and (3) principles for modernizing the U.S. federal bank oversight structure. GAO found that: (1) the five foreign banking systems reviewed had less complex and more streamlined oversight structures than the United States; (2) in all five countries, fewer national agencies were involved with bank regulation and supervision than in the United States; (3) in all but one of these countries, both the central bank and the ministry of finance had some role in bank oversight, and several of these countries relied on the work of the banks' external auditors to perform certain oversight functions; (4) in all cases, there was one entity that was clearly responsible and accountable for consolidated oversight of banking organizations as a whole; (5) the bank oversight structure in the United States is relatively complex, with four different federal agencies having the same basic oversight responsibilities for those banks under their respective purview; (6) industry representatives and expert observers have contended that multiple examinations and reporting requirements resulting from the shared oversight responsibilities of four different regulators contribute to banks' regulatory burden, and that the federal oversight structure is inherently inefficient; (7) having one agency responsible for examining all U.S.
bank holding companies, with a different agency or agencies responsible for examining the holding companies' principal banks, could result in overlap and a lack of clear responsibility and accountability for consolidated oversight of U.S. banking operations; and (8) any modernized banking structure should provide for clearly defined responsibility and accountability for consolidated and comprehensive oversight, independence from undue political pressure, consistent rules, consistently applied for similar activities, and enhanced efficiency and reduced regulatory burden.
Civil settlements are one of several enforcement tools used by some federal agencies to help ensure that individuals and companies comply with the laws and regulations they enforce. For purposes of this report, civil settlements involve negotiations by federal agencies with companies to resolve issues about their compliance with laws and regulations. The negotiation process can involve discussions between agency officials and a company about each party’s proposals to address the compliance problem and can end with a written agreement that reflects the terms reached by the settling parties. In such cases, the civil settlements generally require a company to agree to perform certain activities or stop engaging in certain activities. Some settlements also require that monetary payments be made to the government and to others. When determining settlement amounts, agencies consider various factors, including thresholds for fines and penalties set by federal statutes for violations and the severity of the violation. While some agencies have administrative authority to enter into civil settlements, some cases are required to be referred to DOJ for resolution. For these cases, DOJ may settle with the defendant or take the defendant to court. Of the four agencies we contacted, DOJ is responsible for certain environmental settlements on behalf of EPA and certain civil health care fraud cases on behalf of HHS. Section 162 of the IRC provides a deduction for all ordinary and necessary business expenses, including settlements and similar payments. This provision is subject to an exception in IRC § 162(f) that denies a deduction for any fine or similar penalty paid to the government for the violation of any law. The definition of “fine or similar” penalty includes an amount paid in the settlement of the taxpayer’s actual or potential liability for a fine or penalty (civil or criminal). Furthermore, Treasury regulations provide that payments made as compensatory damages paid to a government do not constitute a fine or penalty. In general, IRS views punitive payments as being nondeductible and compensatory payments as being deductible. Although the terms used to describe a payment required as part of a civil settlement may provide an indication of whether the amount is deductible or not, according to IRS, often it is necessary to look to the intent of the law requiring the payment or the facts and circumstances of the settlement to determine whether a payment is deductible. Civil settlement agreements we reviewed use terms other than “compensatory” or “punitive” to describe settlement payments. For instance, some agencies use terms like restitution or disgorgement for payments that are intended to compensate the government or others. Even when a term used to describe a payment may seem to indicate that a payment is not deductible, in fact, the opposite may be the case. For example, a payment labeled as a civil penalty and that seems not deductible may be deductible if it is imposed as a remedial measure to compensate the government or other party. Or, payments that will be used for remedial or compensatory purposes and seem deductible may not be so if the law requiring the payment indicates the payment is to have a punitive or deterrent effect. IRS and courts look to the purpose of the statute, including the legislative history and administrative and judicial interpretation, to determine whether a payment serves a punitive or compensatory purpose. 
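To make the distinction concrete, the general reasoning described above can be summarized in a simplified sketch. The following fragment is only an illustration of the principles discussed in this report, not a statement of tax law or of IRS procedure; the function, its categories, and the example payments are hypothetical.

```python
# Simplified illustration of the deductibility reasoning described above.
# This is not tax advice or a statement of law; categories and examples are hypothetical.

def is_deductible(paid_to_government: bool, purpose: str) -> str:
    """Rough sketch of the IRC section 162(f) analysis described in the report.

    purpose: 'punitive', 'compensatory', or 'unclear'
    """
    if paid_to_government and purpose == "punitive":
        # Fines or similar penalties paid to a government are nondeductible.
        return "nondeductible"
    if purpose == "compensatory":
        # Compensatory (remedial) payments are generally deductible
        # as ordinary and necessary business expenses under section 162.
        return "generally deductible"
    # Labels alone do not control; if the statute serves both purposes or is
    # silent, the purpose of the law and the facts and circumstances of the
    # settlement must be examined.
    return "examine statute's purpose and facts and circumstances"

# Hypothetical examples
print(is_deductible(paid_to_government=True, purpose="punitive"))      # nondeductible
print(is_deductible(paid_to_government=True, purpose="compensatory"))  # generally deductible
print(is_deductible(paid_to_government=True, purpose="unclear"))       # examine further
```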
If the law is unclear, or if the statute serves both punitive and compensatory purposes, the facts and circumstances of the specific settlement payment, including the terms of the settlement agreement, often need to be examined to determine the purpose the parties intended the payment to serve. Until recently, IRS did not have a tax form that could be used to identify whether a fine or penalty had been deducted for tax purposes. Effective for any tax year ending on or after December 31, 2004, corporations with consolidated assets of $10 million or more that are required to file IRS Form 1120, the corporate income tax return, must also file Schedule M-3. Schedule M-3 requires companies to reconcile financial accounting net income (or loss) with taxable net income and expense and deduction items. The 2004 Schedule M-3 line items for reconciliation include fines, penalties, and punitive damages. In fiscal years 2001 and 2002, EPA, SEC, HHS, and DOJ negotiated some of the largest civil settlements in the federal government. The civil settlements we examined ranged in size from about $870 thousand to over $1 billion. (See table 1.) For example, a 2001 EPA judicial settlement related to the Clean Air Act required a utility company to significantly reduce harmful air pollution from its power plants at an estimated cost of over $1 billion and pay a $3.5 million fine. The cumulative value for the 20 largest settlements for fiscal year 2001 and the 20 largest settlements for fiscal year 2002 at the four agencies—a total of 160 settlements—exceeded $9 billion. Officials in the four agencies said that they do not take tax consequences into account during negotiations with settling parties, that is, they do not negotiate with companies about the deductibility of settlement amounts. They said they generally do not have tax expertise and that determining deductibility of settlement amounts is IRS’s role. When negotiating, officials said they look to the relevant laws and regulations and the facts and circumstances of the case, including the severity of the violation and the strength of the evidence against the violator to determine the settlement amount to seek. In preparing for negotiations, two agencies— EPA and DOJ—consider certain tax issues in calculating the amounts they propose to seek in negotiating environmental settlements. This calculation estimates a company’s financial gain from not complying with the law, that is, their economic benefit. The agencies factor in whether the company would have incurred tax deductible expenses to stay in compliance and apply the violator’s year-specific combined state and federal marginal tax rates to the costs of complying on time and complying late. Except for some settlement agreements stating that civil penalties are not deductible, the agencies’ written civil settlement agreements we reviewed generally did not specify the deductibility of settlement amounts. As an exception to this general practice, we found that some DOJ environmental settlements with civil penalties included language indicating that the penalties would not be deducted for federal income tax purposes. DOJ Environmental and Natural Resources (ENR) Division officials explained that when a settlement agreement includes civil penalties, their attorneys have discretion about whether to include such language in an agreement. 
The officials emphasized that the law is generally clear that civil penalties paid to a government are not deductible and stating so in the agreement is essentially restating the law and is not necessary. In addition, in 2003, subsequent to the time frame of the settlements we reviewed, SEC adopted a policy of requiring settlement agreements with civil penalties to include language stating that the settling parties would not deduct civil penalties for tax purposes. Table 2 describes the four agencies’ practices regarding how they consider tax issues during their settlement negotiation processes, including drafting the terms of their settlement agreements. The settlement agreements we reviewed were consistent with the practices described to us by the agencies’ officials. These practices are current as of June 2005. Because each settlement agreement is unique, settlements negotiated by these agencies can have some exceptions to the practices listed in the table. As table 2 shows, the selected agencies do not negotiate with companies about whether they can deduct any portion of their settlement from their income taxes. In determining their negotiating position and any changes to agree to during negotiations, officials generally look to factors such as the relevant laws and regulations and the facts and circumstances of the case, including the severity of the violation and the strength of evidence against the violator. Officials in the four agencies said that determining deductibility is IRS’s role, and they generally do not have the expertise to address the deductibility of payments during negotiations or to specify the tax consequences of amounts in the settlements. IRS staff agreed and said that if agencies were to specify whether a settlement amount is deductible, there could be a risk that the agencies might concede tax consequences in order to reach a settlement. The following information summarizes the policies, procedures, and views of the agencies on taking taxes into account during negotiations and specifying the tax deductibility of settlement payments in the agreements. EPA’s mission is to protect the environment and address related human health impacts. EPA can reach civil administrative and judicial enforcement settlements against violators of environmental laws, and its priorities in negotiating settlements are to ensure that violators come into compliance with the law, punish past violations and deter future violations, obtain restoration of environmental damage resulting from violations, and impose civil penalties sufficient to recover any economic benefit gained as a result of the violator’s noncompliance and deter future violations. EPA negotiated the civil administrative settlements under its own authority without a judicial process. Cases that are brought and settled by DOJ on behalf of EPA are referred to as civil judicial enforcement settlements. DOJ’s policies, procedures, and officials’ views for these cases are discussed in the DOJ section of this report. All EPA civil settlements we reviewed included payments labeled as civil penalties for violations of environmental laws or regulations. In addition, the value of the settlements sometimes included estimated amounts a company may incur to achieve and maintain compliance with the environmental laws and regulations, such as installing a new pollution control device to reduce air pollution or prevent emissions of a pollutant. 
Also, some settlements included SEPs, which are projects a company agrees to undertake in addition to complying actions. IRS is currently reviewing the deductibility of SEPs. Civil penalties in EPA settlements are generally composed of two parts: economic benefit and gravity. Economic benefit represents the financial gains that a violator accrues by delaying expenditures necessary to comply with environmental regulations, avoiding them, or both. Under EPA’s civil penalty policy, the goal of recovering the economic benefit of noncompliance is to place the violator in the same position as if compliance had been achieved from the start. The amount EPA includes in a civil penalty to account for the seriousness of the violation is referred to as the gravity portion of the penalty. EPA includes the gravity portion of the penalty to provide deterrence against future noncompliance. When calculating the gravity portion of the initial civil penalty amount, EPA adjusts the gravity-based penalty on various case-specific factors, including the strength of evidence against the company and the company’s degree of cooperation and history of noncompliance. When calculating the economic benefit portion of civil penalties, EPA uses an economic computer model to estimate any financial advantage a company gained from not complying with environmental laws. EPA’s economic computer model takes into account whether a company would have incurred tax deductible costs if it had complied with the law, such as a one-time nondepreciable expenditure, in estimating the economic benefit a company gained by not complying with environmental laws or regulations. The computer model applies the appropriate year-specific combined state and federal marginal tax rates of the violator in calculating economic benefit along with standard financial cash flow and net present value analysis techniques to calculate the costs of complying on time and of complying late. When calculating the gravity portion of civil penalties, EPA officials consider the facts surrounding each violation, including factors such as the actual or possible harm caused by the violation, the size of the violation, and the goals of the specific environmental program. EPA officials acknowledged that they negotiate with violators about the size of the gravity portion of the penalty, but said in doing so they consider factors such as the strength of their position and not whether the violator may be able to claim a tax deduction. When EPA settlements include civil penalty payments, EPA’s practice is to explicitly label these payments as civil penalties. In some settlements with civil penalties, the settlement agreements also reference IRC § 162(f), which states that penalties payable to a government are nondeductible. Officials noted that including language referencing IRC § 162(f) is not EPA’s usual practice. EPA officials said that they believe the law is clear that civil penalties payable to a government are generally nondeductible, so they do not see inclusion of such language in settlement agreements as necessary. As part of some settlements, companies perform SEPs, which are projects not required by law, that are voluntarily undertaken by a respondent in exchange for possible penalty mitigation. EPA may mitigate the civil penalty ultimately assessed as part of the settlement, when a respondent agrees to undertake a SEP. 
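The economic benefit concept described above lends itself to a simple numerical illustration. The following sketch is not EPA's economic computer model; the discount rate, marginal tax rate, compliance cost, delay period, and gravity amount are hypothetical, and the calculation only approximates the present value comparison of complying on time versus complying late.

```python
# Simplified, hypothetical illustration of the "economic benefit of noncompliance"
# idea described above. This is not EPA's model; all figures are made up.

def present_value(amount: float, years: float, discount_rate: float) -> float:
    """Discount a future amount back to today at a constant annual rate."""
    return amount / (1.0 + discount_rate) ** years

def economic_benefit(compliance_cost: float, years_delayed: float,
                     marginal_tax_rate: float, discount_rate: float) -> float:
    """Gain from delaying a one-time, tax-deductible compliance expenditure.

    The after-tax cost of complying on time is compared with the present value
    of incurring the same after-tax cost years later; the difference is the
    violator's approximate financial gain from the delay.
    """
    after_tax_cost = compliance_cost * (1.0 - marginal_tax_rate)
    on_time = after_tax_cost                                    # paid today
    delayed = present_value(after_tax_cost, years_delayed, discount_rate)
    return on_time - delayed

# Hypothetical case: $2 million expenditure delayed 4 years,
# 35 percent combined marginal tax rate, 9 percent discount rate.
benefit = economic_benefit(2_000_000, 4, 0.35, 0.09)
gravity = 500_000                 # hypothetical seriousness-based component
print(f"Economic benefit: ${benefit:,.0f}")
print(f"Illustrative civil penalty (economic benefit + gravity): ${benefit + gravity:,.0f}")
```

In this hypothetical, the delay is worth roughly $380,000 to the violator, so a penalty of at least that amount would be needed to remove the gain from noncompliance, with the gravity component added to deter future violations.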
EPA still collects a civil penalty as part of the settlement in accordance with its 1998 SEP policy, which calls for collecting the greater of 25 percent of the gravity component of the penalty, or 10 percent of the gravity, plus economic benefit. To determine the value of SEPs, EPA uses an economic computer model, and if a company tells EPA that it plans to deduct the SEP costs, EPA factors the company’s decision into valuing the SEP through the model. EPA officials said that they are not involved in a violator’s decision to deduct the SEP costs and that they take the violator’s decision at face value. SEC is responsible for administering and enforcing federal securities laws and regulations and fostering fair and efficient markets for the trading of securities. SEC enforcement officials told us that in enforcing the securities laws, they aim to protect investors and punish violators. In performing its enforcement role, SEC may, among other actions, negotiate civil settlements with those who violate securities laws. When appropriate, SEC provides that violators make monetary payments that generally include amounts for civil penalties and disgorgement. The SEC settlement agreements we reviewed included penalties for violations of the securities laws. These settlements also included disgorgement, in which SEC attempts to ensure that violators of securities laws or regulations do not profit from their illegal activity, and when appropriate, these disgorged profits are returned to investors. The IRC does not specifically address the deductibility of disgorgement. Although IRS looks at the individual facts and circumstances of a case to determine deductibility, it has generally regarded disgorgement payments as compensatory, and therefore tax deductible. As previously discussed, Treasury regulations provide that in civil actions, compensatory damages paid to a government do not constitute a fine or a penalty. SEC’s Chief Counsel for Enforcement emphasized that SEC’s decision on how much of a settlement payment is penalty versus disgorgement is based solely on the facts and circumstances of the case, including the law violated, the degree of harm, and the seriousness of the violation. However, the official further said that although SEC does not negotiate with settling parties about the deductibility of settlement payments, settling parties may initiate negotiations with SEC about how the settlement payment is to be allocated between penalty and disgorgement. Although settling parties may seek a larger disgorgement amount because it is generally tax deductible, SEC staff make recommendations for disgorgement and penalties based on their analysis. In 2003, SEC implemented a policy requiring all civil settlement agreements with penalties to include language that expressly prohibits the settling party from taking a tax deduction or seeking to recover from an insurance carrier the penalty portions of the settlement payment. SEC adopted standardized language prohibiting deductions as a result of the Global Research settlement, in which 10 Wall Street companies settled for a combined $875 million in civil penalties and disgorgement. There were reports that some of the settling companies were planning to take deductions for the civil penalty portion of the settlement payments that would be placed into funds for investors who were harmed by the companies’ violations. 
The Sarbanes-Oxley Act of 2002 allows SEC, in appropriate cases, to add penalties to the disgorgement fund for the benefit of harmed investors, pursuant to the “fair fund” provisions of the act. SEC provides in its standardized settlement language that such amounts are to be treated as penalties for tax purposes. SEC’s settlement agreements are silent on the tax deductibility of disgorgement. Senior SEC officials noted that in their view, decisions about the deductibility of disgorgement should be left to IRS. HHS is the principal federal agency responsible for protecting the health of American citizens and providing essential human services. HHS’s largest civil settlements are generally FCA cases relating to civil health care fraud. FCA generally provides that anyone who knowingly submits false claims to the government is liable for damages up to three times the amount of the damages sustained by the government plus penalties from $5,500 to $11,000 for each false claim submitted. Although many FCA cases involve civil health care fraud against the Medicare and Medicaid programs that HHS administers, the act is also used in settling other types of fraud perpetrated against the federal government, such as defense contractor fraud. A civil health care FCA case, for example, could involve a health care provider who grossly overcharged for medical services rendered and then filed claims for reimbursement at the overcharged rates. Usually, civil health care fraud cases are based on referrals from federal and state investigative agencies and private persons. DOJ is responsible for representing the United States in FCA cases and therefore negotiates the FCA settlements. DOJ’s Civil Division carries out those responsibilities along with U.S. Attorneys’ Offices located across the country. Accordingly, DOJ sets the overall policy for civil health care fraud FCA settlements. For health care settlements, HHS’s Office of Inspector General (OIG) provides DOJ assistance in several ways, including investigating individuals and companies that may have abused the HHS health care programs, and sometimes works with DOJ to determine the amount of single damages, that is, the amount of loss sustained by the government due to the violator’s actions. DOJ negotiates settlement agreements on behalf of other federal agencies, including some cases involving HHS and EPA. The DOJ settlement agreements we reviewed were limited to FCA settlements negotiated by DOJ’s Civil Division and judicial environmental settlements negotiated by DOJ’s ENR Division. The FCA cases negotiated by DOJ that we reviewed contained a single payment labeled as a settlement amount, which does not characterize the extent to which payments are for single or multiple damages or civil penalties. All of the DOJ-led environmental settlement agreements that we reviewed included amounts labeled as penalties and some included SEPs. In negotiating FCA civil settlement agreements, DOJ Civil Division officials said that they do not consider or discuss any aspects of taxes. In calculating the settlement amount for FCA cases, DOJ first assesses the amount of damages the violation cost the government and seeks to recover the full amount. It also considers the severity of the violation in determining whether the settling company should pay a multiple of the assessed damages and civil penalties. DOJ Civil Division officials stated that they do not include language on the deductibility of payments in their written FCA settlement agreements. 
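The FCA liability structure described above, up to three times the government's damages plus a per-claim penalty, can be illustrated with a short calculation. The figures below are hypothetical, and, as the following paragraphs discuss, actual FCA settlements are negotiated as lump sums that the agreements generally do not break out in this way.

```python
# Hypothetical illustration of the False Claims Act exposure described above:
# up to three times the government's damages plus $5,500 to $11,000 per false claim.
# Figures are invented; real settlements are negotiated lump sums.

def fca_maximum_exposure(single_damages: float, claim_count: int,
                         per_claim_penalty: float = 11_000.0,
                         damages_multiplier: float = 3.0) -> dict:
    """Return the components of a maximum statutory exposure estimate."""
    trebled = single_damages * damages_multiplier
    penalties = claim_count * per_claim_penalty
    return {
        "single_damages": single_damages,                         # generally viewed as compensatory
        "damages_multiplier_portion": trebled - single_damages,   # generally viewed as punitive
        "per_claim_penalties": penalties,                         # generally viewed as punitive
        "maximum_exposure": trebled + penalties,
    }

# Hypothetical case: $10 million in overcharges spread across 400 false claims.
exposure = fca_maximum_exposure(10_000_000, 400)
for label, amount in exposure.items():
    print(f"{label}: ${amount:,.0f}")
```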
In fact, according to the officials, all FCA settlements contain DOJ’s standard settlement agreement language, which states that nothing in the agreement characterizes the payments for federal income tax purposes. DOJ Civil Division officials said that this language supports the agency’s policy of not addressing the tax treatment of settlement payments in settlements agreements. DOJ Civil Division and IRS officials told us that the agencies came to a mutual agreement that DOJ’s tax-neutral practices on the deductibility of civil settlement payments are appropriate. Furthermore, officials added that the settlement agreements refer to the payments as a settlement amount because the negotiations with the settling party usually involved agreeing on a lump sum amount without characterizing the payment into categories such as single, double, or treble damages and civil penalties. Officials said they do not categorize the payments more specifically because doing so would add complexity to the negotiation process by adding additional factors on which to obtain agreement between the parties. Thus, the agreement does not characterize the extent to which the settlement payment is punitive or compensatory. According to IRS staff, single damages are generally considered compensatory and therefore tax deductible, and any multiple damages and civil penalties are generally considered punitive and therefore nondeductible. Officials in DOJ’s Civil Division and HHS’s OIG said that even though FCA allows for the assessment of penalties in addition to multiple damages, penalties are not always sought. The HHS officials said that penalties are not generally sought in FCA settlements because collecting a multiplier of damages is sufficient to compensate the government and provide a deterrence. DOJ also negotiates environmental cases on behalf of EPA. EPA refers to cases it sends to DOJ to settle as judicial cases since they are not resolved under EPA’s administrative authority. EPA staff assist DOJ staff in building these cases and EPA’s civil penalty policies generally apply to DOJ environmental settlements. However, DOJ—not EPA—has primary settlement authority for these cases, and DOJ is not bound by EPA’s penalty policies. Like EPA, in preparing for negotiations and determining the amount to seek at settlement, DOJ considers aspects of taxes in calculating the economic benefit a violator received from not complying with environmental laws. However, DOJ ENR Division officials told us that their position is to be neutral on tax issues. DOJ sometimes uses the EPA economic benefit computer model to calculate economic benefit amounts but may also obtain outside experts. Similar to EPA’s administrative settlements, some DOJ-negotiated environmental settlements may involve SEPs, which can be used to offset a portion of the civil penalty that DOJ would otherwise seek. The officials reiterated that they do not negotiate with the violator about the deductibility of the SEP costs, but would factor in the violator’s stated intentions about deducting the SEP costs in establishing its value as part of the settlement. As with EPA civil administrative settlements, when DOJ-negotiated environmental settlements include civil penalties, the practice is to explicitly label these payments as civil penalties. Also, in some settlements with civil penalties, DOJ-negotiated environmental settlement agreements reference IRC § 162(f), which states that fines or similar penalties payable to a government are nondeductible. 
DOJ ENR Division officials said that having settlement agreements reference IRC § 162(f) is not standard practice and would be at the discretion of officials involved in the settlement negotiations. According to these officials, the law is generally clear that civil penalties payable to the government are nondeductible and stating so in agreements is merely restating the law. The officials said they do not negotiate with the settling companies about whether the amounts are deductible. We observed that one large settlement agreement negotiated by DOJ’s ENR Division contained language stating that the settling company was not allowed to take a deduction for funding of remediation work and that its chief financial officer must submit a certification that deductions were not taken. DOJ’s ENR Division officials told us that a case such as this one likely involved particular negotiating circumstances and strategies. They emphasized that this was an exception rather than their usual practice of not specifying the tax treatment of settlement amounts in the settlement agreement. In responding to our survey, companies that paid some of the largest civil settlement payments at the four agencies we reviewed generally reported that they deducted civil settlement payments when the settlement agreements did not label the payments as penalties. Conversely, when the settlement agreements labeled the payments as penalties, the companies generally reported that they did not deduct the payments. Overall, for 20 of the 34 settlements for which we received survey responses, companies stated that they deducted some or all of their civil settlement payments. The total value of settlement amounts of the 34 settlements for which we received responses was over $1 billion. Table 3 summarizes the overall responses from the companies, and table 4 provides survey results on deductions categorized according to how the settlement agreements labeled the settlement payments. As shown in table 4, for 15 of the 16 DOJ FCA settlements, companies reported deducting their payments. Of these 15 settlements, 12 survey responses showed that companies deducted the full amount of the payment, while 3 responses showed they deducted a percentage of the full amount—ranging from 43 to 89 percent. Consistent with DOJ’s usual practice for FCA civil settlements, these FCA settlement agreements referred to the settlement payment as the settlement amount, which does not characterize whether the settlement amount included a penalty or was punitive or compensatory in nature. In addition, of the 15 settlements for which companies settled DOJ FCA cases and deducted payments, companies in 7 settlements told us that they deducted payments because, in their view, the settlement amounts were restitution or compensatory in nature. However, minutes of a healthcare fraud settlements meeting between IRS and DOJ show that IRS believes FCA settlement payments usually include a punitive portion to punish violators and to deter future violations. Also, according to DOJ’s technical comments on the draft of this report, in most FCA settlements (apart from those that recover strictly penalties), some of the amounts paid are in the nature of compensatory reimbursement and may be deductible. Five companies we surveyed reported that a sentence in their FCA settlement agreements indicating that the settlement was not punitive in purpose or effect was a basis for them taking deductions. 
The settlement amounts deducted by these five companies totaled over $100 million. According to a director in DOJ's Civil Division, DOJ does not intend for the language in FCA settlement agreements that the companies mentioned to refer to tax treatment. The DOJ official said that this sentence is not intended to imply that the settlement amounts are compensatory for tax purposes, but rather to ensure that the amounts are not punitive for double jeopardy purposes or prohibitions on excessive fines. The DOJ official added that a subsequent statement that is standard in all FCA settlement agreements articulates DOJ's position on deductibility, that is, that the agreement does not characterize the payment for federal income tax purposes. Based on our discussions with DOJ and our survey evidence showing that some companies cited this sentence in support of their tax deductions, DOJ revised the relevant portions of the FCA settlement agreement model language. Effective June 2005, the new language removes references to the settlement not being punitive in purpose or effect. Furthermore, three companies that deducted FCA settlement payments reported that they did so in whole or in part because their settlement agreements contained language stating that the company denied wrongdoing. Their deductions totaled about $15.5 million. Two of these three companies also cited the sentence discussed in the prior paragraph as another reason for deducting the amounts. Also, as shown in table 4, three other companies reported deducting settlement payments even though they were labeled as civil penalties. Two of these companies reported that our survey made them aware that their deductions were improperly taken, and they plan to file amended tax returns. These deductions totaled about $1.9 million. The other company reported that it deducted the civil penalty because it was paid to a self-regulatory organization, which the company believed was not a government agency. This settlement agreement contained language indicating that the self-regulatory organization settled with the company on behalf of a federal agency. Ten companies that responded to our survey had environmental settlement agreements negotiated by DOJ that contained SEPs. Our analysis of the settlement agreements for the 10 companies showed that four agreements contained language stating that the SEP costs are not deductible. Two companies with settlements that contained this language reported to us that they did not deduct the costs, and the other two companies did not respond to the survey question. Of the six companies with SEPs for which the settlement agreements did not state whether the costs were deductible, two companies reported deducting the SEP costs and the other four companies did not indicate whether they deducted SEP costs. Some of the companies that reported not deducting any settlement payments gave us varying reasons for not taking deductions. The reasons included references to IRC § 162(f), which states, in part, that penalties paid to a government are not deductible, and provisions in their settlement agreements specifying that they would not deduct the settlement payments. The four federal agencies do not systematically provide IRS with civil settlement information that would be useful to IRS for compliance purposes, although the agencies do provide such information on a case-by-case basis at IRS's request, such as for audits of companies with settlement agreements.
The agencies told us they were willing to work with IRS to develop a permanent system for routinely providing appropriate information. DOJ Civil Division and EPA have established means for providing IRS with information on civil settlement agreements as part of IRS’s temporary compliance research projects. In 2004, IRS introduced Schedule M-3, which could potentially help IRS identify corporations with some settlements because it captures information on fines, penalties, and punitive damages from companies with assets of $10 million or more. In general, the four federal agencies do not routinely notify IRS when a civil settlement has been reached or provide other settlement-related information that IRS would find useful, although they provide IRS with settlement information on a case-by-case basis. To identify settlements that have been reached, IRS officials search agency Web sites and press releases. DOJ ENR Division, EPA, and SEC officials said that their Web sites generally post most of their civil settlement agreements. IRS usually contacts the agencies on a case-by-case basis to obtain information to use during audits in assessing whether companies properly treated their settlement payments on their income tax returns. For example, to determine the facts and circumstances of a settlement, IRS contacts DOJ officials to obtain information on FCA settlements, including written exchanges between the agency and the company and the tracking forms that are used by DOJ to allocate settlement amounts to various government accounts. According to IRS staff, the tracking form and the other information it obtains from DOJ about a settlement can provide leads for determining nondeductible punitive damages in FCA cases. The agencies have expressed willingness to notify IRS when a settlement has been reached and to work with IRS on providing other appropriate information. Some steps in this direction have already been taken. For example, EPA has designated staff to work with IRS to provide specific settlement information. IRS officials said that it would help IRS’s compliance efforts if agencies systematically notified IRS that a settlement has been reached and provided additional information, such as their intent regarding the breakdown of the settlement payment by category (i.e., punitive versus compensatory). According to an IRS Director in the Large and Mid-Size Business Division, such information could play a role in determining which firms to audit and, when an audit occurs, whether a settlement should be covered. Further, the IRS Director said that in some cases IRS would like to offer pre-filing agreements to settling companies, which would resolve the tax treatment of settlement payments before tax returns are filed. The Director focused on large settlements for which IRS enforcement action was more likely than on smaller settlements. IRS is collecting information on certain settlements through two compliance projects. IRS uses compliance projects to collect information and conduct research in order to target audits in particular issue areas. It intends to use the project results on the degree to which companies incorrectly deduct civil settlement payments to make data-driven business decisions on how to correct the noncompliance. In 2003, IRS initiated a fraud settlements compliance project focusing on the deductibility of payments made in the settlements involving fraud, primarily FCA settlements. 
The fraud settlements compliance project targets multimillion-dollar settlements where at least part of the settlement payment may be punitive although the agreements may not specify punitive damages. During February 2005 discussions between IRS and DOJ, DOJ officials agreed to notify IRS promptly of FCA settlements they reach of $10 million and more and provide a list of smaller dollar FCA settlement agreements annually for the duration of the project. DOJ officials told us they would be willing to continue providing IRS with this information after the completion of this compliance project. IRS officials said that this information would be useful to them in targeting and conducting audits. According to the compliance project description, IRS staff have found that for settlements involving Medicare fraud, companies are claiming deductions for the full amount of the settlement. However, IRS staff told us that these settlement payments generally contain a punitive portion. This compliance project is scheduled to be completed in 2006. In 2004, IRS initiated an environmental settlements compliance project, which focuses on four components of environmental settlements that may result in an income tax issue—civil penalties; SEP costs; complying actions; and other payments and requirements, which may include punitive sanctions. For the project, IRS says it needs access to negotiating files, court documents, settlement documents, databases, personnel, and attorneys at the relevant settling agencies. EPA has agreed to provide IRS with certain case-specific information. To obtain an initial sample of approximately 30 recently negotiated significant environmental settlements, IRS staff searched agency press releases and Web sites and contacted EPA and DOJ staff for settlement information on a case-by-case basis. The initial review of this sample suggests that companies may be noncompliant when deducting, capitalizing, amortizing, or depreciating SEP costs. The compliance initiative description also said that some IRS staff have questioned the appropriateness of deducting SEP costs if SEP costs are payments in lieu of a penalty because it appears that such costs are not deductible under IRC § 162(f). IRS officials said that IRS’s National Office plans to issue a technical advice memorandum (TAM) that will address SEP deductibility and capitalization issues. The compliance project staff told us that this compliance project is scheduled to be completed in late 2005, although it may be extended. According to IRS’s fraud settlements compliance project description, the compliance projects also provide IRS with the necessary information to evaluate the potential for negotiating pre-filing agreements with settling companies. Under pre-filing agreements, IRS and companies resolve whether all or a portion of a settlement payment can be deducted before the companies file their tax returns. The project description says that for those cases for which a pre-filing agreement is not executed, IRS examiners can more timely develop the facts and reach a position on deductibility, which can reduce examination time on this issue while enhancing IRS compliance results. IRS officials told us they are in discussions with one company that reached a civil settlement regarding a pre-filing agreement and are offering pre-filing agreements to other settling companies. IRS has a new source of information that could help it identify companies with settlements. 
In 2004, IRS introduced Schedule M-3, which is designed to reconcile differences in financial accounting and taxable income (or loss). The schedule is being used by corporations with assets of $10 million or more and is to be phased in for use by other corporations in 2005 and 2006. Because Schedule M-3 collects information on fines, penalties, and punitive damages, it may help IRS identify settlements that should be considered if a company is audited. Schedule M-3 as currently designed may not capture settlement payments that were not labeled as fines, penalties, or punitive damages in the written settlement agreement. Based on our discussions, IRS officials responsible for Schedule M-3 said that they were considering options to address this situation. When settlement agreements specify civil penalties, the law is generally clear that they are nondeductible. However, when the settlements do not contain penalties, deductibility may be less clear because the IRC and the statutes imposing the payments may be silent regarding whether the payments are punitive or compensatory in nature. Moreover, many settlement agreements do not contain language addressing the tax deductibility of settlement payments. To determine the deductibility of settlement payments during audits or in reaching pre-filing agreements, IRS examines settlement information that would provide the relevant facts and circumstances in a particular case. Given this situation, one way to help IRS better ensure that companies are properly treating settlement payments for tax purposes is to have agencies systematically notify IRS when they have reached a settlement that requires significant dollar payments and provide information that IRS may find useful. With such information, IRS can better determine which companies to examine and whether settlement payments should be part of the examination. In addition, with a regular flow of information on settlements as they are reached, IRS would be able to contact companies when appropriate to obtain pre-filing agreements on how the settlement payments should be treated on their tax returns. This may be especially useful in cases such as the DOJ FCA settlement agreements, which may not contain useful information for the settling company and IRS to determine the tax treatment of the settlement amounts. We recommend that the Commissioner of Internal Revenue direct the appropriate officials to work with federal agencies that reach large civil settlements to develop a cost effective permanent mechanism to notify IRS when such settlements have been completed and to provide IRS with other settlement information that it deems useful in ensuring the proper tax treatment of settlement payments. We sent a draft of this report to IRS, EPA, SEC, HHS, and DOJ for comment. We received written comments from IRS, EPA, SEC, and HHS. DOJ provided written technical comments. In his August 26, 2005, letter, the Commissioner of Internal Revenue (see app. III) said that he agreed with our recommendation and said that it would be beneficial for IRS to work with federal agencies to develop a systematic method for obtaining information on civil settlements contemporaneous with those settlements. He said that IRS will form an executive led team to work with each agency with significant civil settlements to reach agreement on what information will be provided, the format of the information, and the frequency of delivery. IRS also provided technical comments which we incorporated in our report. 
EPA’s Assistant Administrator, Office of Enforcement and Compliance Assurance stated in an August 26, 2005, letter (see app. IV) that EPA generally supports our recommendation and believes that EPA already has mechanisms to provide IRS with settlement information useful in determining the proper tax treatment of settlement amounts. The Assistant Administrator said that EPA’s publicly available Web site contains 3 years of information on concluded enforcement settlements and other EPA online enforcement databases with settlement information could be made available to IRS. EPA believes that these mechanisms are more cost effective than developing a specific notification process for IRS. While we agree that EPA has mechanisms in place to provide IRS a means to access its settlement information, we believe that it would be useful if EPA notified IRS directly of its significant settlements contemporaneously so IRS could ensure that it is aware of all significant settlements and be better positioned to contact companies sooner to initiate pre-filing agreements with them. Regarding our reference to IRS officials needing access to information such as negotiating files and documents to help determine the proper tax treatment of settlement payments, the Assistant Administrator expressed concern that making such information available to IRS could result in a waiver of any protective privilege associated with such information and might jeopardize pending settlements and ongoing enforcement actions. This issue was not within the scope of our study and in our view is among the type of issues that can be addressed as IRS and agency officials work together to establish information sharing arrangements regarding significant settlement agreements. The Assistant Administrator also commented on how we characterized the value of EPA settlements and, in particular, stated that our comparison of EPA settlement values to those of the other agencies we surveyed is dissimilar. The Assistant Administrator said that we should only include monetary payments for EPA civil penalties in valuing EPA settlements to make them comparable to the value of settlements in the other agencies. In our view, and as consistently reflected in our report, the value of an agency’s settlements includes all components that are reflected in settlement agreements. This was also consistent with how the agencies we surveyed valued their settlements. We believe it would be misleading to show the value of settlements based on civil penalties alone when the negotiated settlement agreement clearly included other components. Further, some settlements we reviewed, such as DOJ FCA settlements, did not contain penalties. EPA also made some technical comments which we have incorporated into the report to clarify and more fully present certain information. In a letter dated September 1, 2005, an SEC Enforcement Division director did not specifically comment on our recommendation but said that the Commission takes seriously the importance of meaningful sanctions in its enforcement program (see app. V). HHS provided a letter stating they had no comment on the draft but sent technical comments which we incorporated into our report (see app. VI). DOJ provided some technical comments which we included in our report to more accurately reflect information about their settlements. As agreed with your offices, unless you publicly release its contents earlier we plan no further distribution of this report until 30 days from its date. 
At that time, we will send copies to interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. The objectives of this report were to (1) identify federal agencies that negotiated some of the largest dollar civil settlements in recent years, (2) determine whether the selected federal agencies having some of the largest civil settlements take the tax consequences of the companies into account when negotiating civil settlements and officials’ views on whether they should address the deductibility of payments in the agreements, (3) determine whether the companies that paid some of the largest civil settlement payments deducted any of the payments on their federal income tax returns, and (4) determine what information the Internal Revenue Service (IRS) collects on companies with civil settlements reached by federal agencies. In addition, we sought to identify whether companies’ deductions for settlement payments were being examined in audits and the outcome of the audits. To identify federal agencies that negotiated civil settlements involving companies with some of the largest civil settlement payments, we analyzed information on settlements reached by various federal agencies because we were unable to identify any single, reliably searchable, comprehensive source or database that was known to contain such information governmentwide. We limited our scope to settlements that were negotiated in fiscal years 2001 and 2002 involving companies that file IRS Form 1120, U.S. Federal Corporate Tax Return. We selected this time frame since it would allow the settling companies time to pay the settlements; determine applicable tax treatments, if any; and file federal income tax returns. As a starting point to identify agencies with large settlements in those years, we used information in the 1998 Federal Financial Management Status Report and Five-Year Plan that summarized assessments and collections of civil monetary penalties by federal agencies for fiscal year 1997. The information in the report was based on data compiled from 76 federal agencies and showed which of those agencies were responsible for the majority of the civil monetary penalty assessments and collections in fiscal year 1997. Consolidated information on federal agency assessments of civil penalties was not available for subsequent years because the Federal Reports Elimination Act of 1998 eliminated the annual requirements for federal agencies to report this information. Generally, we then sought to determine if the same agencies that were responsible for the majority of the civil monetary assessments and collections in fiscal year 1997 were likely to have some of the largest settlement amounts in fiscal years 2001 and 2002. We did this by reviewing such material as agency press releases on settlement agreements, annual reports, enforcement reports, and other data on agency Web sites. 
In addition, we performed more general searches of commercially available databases that contain archived content from newspapers, magazines, legal documents, and other printed sources and other federal Web sites that provided information about corporate civil settlements to help us gauge whether the settlements we were identifying at these agencies were among the largest being reported from various publication sources. As part of our analysis of this information, we comparatively assessed, to the extent possible, whether agencies tended to have relatively fewer individual settlements with typically large-dollar assessments (millions of dollars per individual settlement) or more numerous individual settlements of relatively low-dollar amounts. We chose those agencies that appeared to have a larger settlement amount per case. We did not include IRS in the agencies we analyzed since tax settlements are not tax deductible. We also excluded the Federal Reserve System from consideration because its reported total settlement amounts could incorporate settlements by multiple agencies. By comparing and analyzing such information across the leading agencies for overall civil assessments in 1997, we selected the Environmental Protection Agency (EPA), the Securities and Exchange Commission (SEC), and the Department of Justice (DOJ) for further review after concluding that they were among those agencies responsible for negotiating the largest individual civil settlements in fiscal years 2001 and 2002 that we could identify. Also, during these 2 fiscal years, we determined that the Department of Health and Human Services (HHS) was involved in negotiating some of the largest dollar False Claims Act health-care-related civil settlements that DOJ has primary responsibility for negotiating. We contacted each of the four agencies and requested information on its largest civil settlements, that is, cases in which the largest dollar amounts were to be paid to the federal government or others. In discussing our request for lists of settlements, agency officials advised that lists of cases based on largest settlements would likely include cases of entities not required to file IRS Form 1120. (See table 1 in the body of this report for information received from the four agencies on their 20 largest civil settlements for fiscal years 2001 and 2002, which includes settlements with some entities not required to file IRS Form 1120.) We took several steps to assess the reliability of the agencies' automated systems that provided the lists of settlement agreements. We interviewed agency officials who were knowledgeable about compiling, entering, and checking the data in the databases used to provide the lists; reviewed related documentation about the quality and accuracy of the data and the systems that produced them; and to the extent possible, cross-checked the lists with other sources. For example, we compared selected information, such as the settlement amount, from copies of the actual settlement agreements with the amount shown on the list obtained from the agencies. We also asked the companies to confirm this information. Likewise, the companies confirmed whether they had paid the settlement. We determined that the lists of largest settlements and associated settlement amount information were sufficiently reliable for the purposes of this report.
To determine whether the federal agencies take the tax consequences of the companies into account when negotiating civil settlements and their views on whether they should address the deductibility of payments in settlement agreements, we interviewed officials in each of the four agencies about their settlement policies and negotiation processes. We obtained and reviewed the underlying agreements and documentation on the agencies’ policies, procedures, and processes for negotiating and structuring civil settlements with monetary payments. We also interviewed officials in the four agencies to determine if their settlement policies and procedures were different now than they were during fiscal years 2001 and 2002. We obtained documentation supporting any major policy or procedural changes that addressed how settlement payments are treated for tax purposes. To determine whether the companies that paid some of the largest civil settlement payments deducted any of their payments on their federal income tax returns, we developed a data collection instrument (DCI) to collect the information. We collected information from the four agencies on their largest dollar civil settlements, that is, cases that included payments to the federal government or others. Agency officials advised us that the lists of the largest settlements would likely include some settlements with entities that were not required to file IRS Form 1120. When such a settlement was among the 20 largest, we selected additional settlements that otherwise met our criteria. In contrast to SEC, HHS, and DOJ, from which we obtained information on the largest civil settlements payable to the federal government and other parties such as relators, EPA settlement amounts included costs incurred for companies to comply with environmental laws and regulations. We selected the largest EPA settlements that had a civil penalty because our focus was on how payments were treated for tax purposes. We requested copies of settlement agreements for the cases appearing on the lists from the agencies. We sent the DCI we developed to 44 companies for which we were able to obtain copies of the settlement agreements and find cognizant representatives who were familiar with the settlements and the tax treatment of the settlement payments and who agreed to participate in our survey. These 44 companies were required to file IRS Form 1120. In the end, we received DCI responses from 31 companies concerning 34 of the settlements. We told companies that we would only report information we collected in summary form so company names are not specifically identified. We examined the settlement agreements for the 34 settlements reached by companies that responded to our DCI to determine if they contained specific language that addressed how civil settlement payments are to be treated for federal income tax purposes. In those instances where we found specific language that addressed how civil settlement payments are to be treated for federal income tax purposes, we followed up with agency officials to corroborate how this treatment related to the specific agencies’ policies and procedures. The settlement agreements we examined are not a representative sample of settlements for these agencies in these fiscal years, and the results of our examination cannot be generalized to other settlement agreements. 
Likewise, the information we obtained through our DCI represents the responses of each company that voluntarily completed the instrument with regard to a specific settlement. Their responses cannot be generalized to any other population of settlements. Other than verifying the settlement amount and that the amount was paid by the companies when possible, we did not verify the other company responses to our survey questions. To determine what information IRS collects on companies with civil settlements reached by federal agencies, we interviewed knowledgeable officials from IRS and the four agencies and reviewed supporting documentation about what information, if any, IRS obtains from the four selected agencies regarding their civil settlement agreements. To determine the results of IRS’s audits of companies concerning the tax treatment of settlement payments, we obtained information from knowledgeable IRS auditing staff. An IRS technical advisor (TA) manager provided us readily available information on IRS’s industry groups in its Large and Mid-Size Business Division on the results of corporate audits where the deductibility of civil settlement payments was an issue. We conducted our work at EPA, SEC, HHS, DOJ, and IRS regional and headquarters offices, from February 2004 through June 2005 in accordance with generally accepted government auditing standards. According to selected information IRS provided on 46 companies that claimed settlement payment deductions on their income tax returns, IRS adjusted or proposed adjustments for approximately half of these companies. The 46 companies settled with varying agencies, including EPA, HHS, and DOJ. In the 24 cases for which IRS adjusted or proposed adjustments to the amount deducted as settlement payments on the tax return, the adjustments ranged from “not substantial” to 100 percent, according to the IRS examiners’ notes for the cases. According to IRS staff, only a portion of the amount listed as a settlement payment would be nondeductible. Because these portions would be deemed to be penalties, the balance would be a deductible compensatory expense. IRS collected this information under compliance research projects and from additional information from staff familiar with audits of companies in which the deductibility of settlement payments was an issue. This information, which covers multiple years, is limited to these particular companies. As IRS staff selected the 46 companies for audit or research because of potential noncompliance, these audit results cannot be projected to other companies with civil settlements. 1. We reviewed the text in our draft report and believe that it adequately distinguishes between monetary payments made directly to a governmental entity and costs to be incurred by a defendant as a consequence of performing actions required under a civil settlement agreement. To illustrate, a note to table 1 in our draft report stated that “For settlements identified by EPA, the total value of settlements included payments payable to the U.S. government; the estimated cost of any Supplemental Environmental Projects; and the estimated costs of pollution controls, monitoring equipment, or other complying actions that companies are required to take to come into compliance with environmental laws.” 2. 
As the Assistant Administrator suggested, we have revised our report to show that a proposed legislative provision mentioned in a footnote to disallow tax deductions for amounts paid to or at the direction of a government in relation to a violation was not included in the bill signed into law. However, our report shows that a new provision has since been introduced. In addition to the contact named above, Thomas Beall, Danielle Bosquet, Charlie Daniel, Keira Dembowski, Jeanine Lavender, Cheryl Peterson, Michael Rose, Amy Rosewarne, and Jennifer Wong made key contributions to this report.

Civil settlement payments are intended, in part, to deter wrongdoing, but their deterrence factor could be lessened if companies can deduct certain settlement payments from their income taxes. GAO was asked to (1) identify federal agencies that negotiated some of the largest dollar civil settlements, (2) determine whether selected federal agencies take tax consequences into account when negotiating settlements and officials' views on whether they should address payment deductibility in settlement agreements, (3) determine whether companies with some of the largest civil settlement payments deducted any of the payments on their federal income taxes, and (4) determine what information the Internal Revenue Service (IRS) collects on civil settlements reached by federal agencies. The Environmental Protection Agency (EPA), Securities and Exchange Commission (SEC), and Department of Justice (DOJ) negotiated civil settlements that were among the largest in the federal government in fiscal years 2001 and 2002. Also, the Department of Health and Human Services (HHS) was involved in negotiating some of the largest dollar False Claims Act (FCA) health-care civil settlements for which DOJ has primary responsibility. The largest civil settlements at these agencies ranged from about $870 thousand to over $1 billion. Officials in the four agencies we surveyed said that they do not negotiate with settling companies about whether settlement amounts are tax deductible. They said it was IRS's role to determine deductibility. In preparing to negotiate environmental settlements, EPA and DOJ may consider certain tax issues in calculating the amounts they propose to seek. This calculation estimates a company's economic benefit, that is, the financial gain from not complying with the law. Some DOJ environmental settlements with civil penalties have language stating that penalties are not deductible. DOJ officials said that, since the law is generally clear that civil penalties paid to a government are not deductible, stating so in the agreement merely restates the law and is not necessary. The majority of companies responding to GAO's survey on how they treated civil settlement payments for federal income tax purposes deducted civil settlement payments when their settlement agreements did not label the payments as penalties. GAO received responses on 34 settlements totaling over $1 billion. For 20 settlements, companies reported deducting some portion or all of their settlement payments. IRS does not systematically receive civil settlement information from all four agencies. IRS officials said that a permanent system for agencies to provide information would be useful. IRS obtains information on a case-by-case basis from public sources and agencies. IRS also has two temporary compliance projects focusing on tax issues that affect settlement payment deductibility.
In 2004, IRS introduced a tax schedule to provide information on a company's fines, penalties, and punitive damages.
As the steward of taxpayer dollars, the federal government is accountable for how its agencies and grantees spend hundreds of billions of dollars and is responsible for safeguarding those funds against improper payments. Our work over the past several years has demonstrated that improper payments are a significant and widespread problem in federal agencies. In addition, reports such as the Senate Committee on Governmental Affairs’ Government at the Brink and The President’s Management Agenda, Fiscal Year 2002, highlight the impact of improper payments on federal programs and the need for actions to strengthen the system of internal control over areas where improper payments occur. Our past reports have shown that relatively few agencies report improper payments in their financial statements, even though our audits and those of agency Offices of Inspector General (OIG) continue to identify serious improper payment problems and related internal control issues. Federal agency financial statements for fiscal years 1999 and 2000 show improper payments of about $20.7 billion and $19.6 billion, respectively. Along with this decrease in the total amount of improper payments reported, changes have occurred in the agencies reporting improper payments and in the programs identified with improper payments. During this same period, agency-specific audits and studies continued to indicate that the extent of the improper payment problem was much more widespread than had been disclosed in agency financial statements. For example, in March 2001 we reported that, during fiscal year 2000, the Internal Revenue Service (IRS), relying on past experience, screened tax returns claiming Earned Income Tax Credits (EITC) to identify (for detailed examination) those considered most likely to be invalid. IRS examiners performed detailed reviews of about 257,000 tax returns claiming approximately $587 million in EITC and found that about 173,000 of those tax returns claiming $395 million in credits (67 percent) were invalid. At the Department of Defense (DOD) the OIG noted that, during fiscal years 1999 and 2000, the Defense Finance and Accounting Service (DFAS) overpaid contractors about $183 million and $148 million, respectively, as a result of inadvertent errors, such as paying the same invoice twice and data input errors. None of these amounts show up in our improper payment totals because neither the IRS nor DOD financial statements reported improper payments for those programs for those years. The basic or root causes of improper payments can typically be traced to a lack of or breakdown in internal controls. Internal controls are an integral component of an organization’s management that are intended to provide reasonable assurance that the organization achieves its objectives of (1) effective and efficient operations, (2) reliable financial reporting, and (3) compliance with laws and regulations. The President’s Management Agenda, Fiscal Year 2002, includes five governmentwide initiatives—one of which is improved financial management. This initiative calls for the administration to establish a baseline on the extent of erroneous payments. Under it, agencies were to include, in their 2003 budget submissions to OMB, information on improper payment rates, including actual and target rates, where available, for benefit and assistance programs over $2 billion. The agenda also notes that, using this information, OMB will work with agencies to establish goals to reduce improper payments identified in the programs. 
In addition, the agenda included specific program initiatives for HUD and the Department of Education that addressed improper payments. In July 2001, OMB issued revisions to OMB Circular A-11, Preparation and Submission of Budget Estimates, requiring 16 federal agencies to include certain improper payment information for about 50 programs in their initial budget submissions to OMB. (Appendix I lists these programs.) We reviewed fiscal year 2001 financial statement reports prepared under the CFO Act, as expanded by the Government Management Reform Act, and OMB guidance to identify improper payments reported. (Appendix II lists the agencies covered by the CFO Act and the OMB guidance.) We also identified and reviewed recent reports by us and by agency OIGs to identify additional agencies and/or programs that experienced improper payments. We reviewed the performance plans of the 15 CFO Act agencies required by OMB Circular A-11 to submit improper payment data, assessments, and action plans with their initial budget submissions to OMB. We reviewed these plans to identify improper payment information addressing the four reporting content elements required by GPRA (goals, measures, strategies, and procedures to validate performance data). Further, we reviewed GAO reports that focused on the status of federal agency actions in achieving key outcomes and addressing major management challenges at each of the 15 CFO Act agencies covered by OMB Circular A-11. (See app. III for a list of these reports.) Among other things, some of these reports often included sections on agency efforts to reduce fraud, waste, and errors in programs that reported improper payments. They also compared fiscal years 2001 and 2002 performance plans for consistency and assessed the progress reported in achieving these outcomes as well as the strategies agencies have in place to achieve them. Recent revisions to OMB Circular A-11 require selected agencies to report improper payment information in their initial budget submissions to OMB. In addition, one of the initiatives in the President’s Management Agenda, Fiscal Year 2002, called for agencies to establish a baseline on the extent of erroneous payments. We reviewed the Budget of the United States Government, Fiscal Year 2003, to assess the extent to which it contained the improper payment information agencies were to submit with their initial budget submissions to OMB and/or the baseline information requested in the agenda. Since little information was publicly available on agency actions to reduce improper payments, we reviewed agency responses you provided us to the June 2001 letters that you and the Chairman of the Senate Committee on Governmental Affairs sent to the heads and OIGs of the 24 CFO Act agencies. These letters asked the agency heads and OIGs to assess their improper payment efforts in the five areas outlined in our October 2001 report, Strategies to Manage Improper Payments: Learning From Public and Private Sector Organizations. These areas are (1) the control environment, (2) risk assessments, (3) control activities, (4) information and communications, and (5) monitoring. We also selected four CFO Act agencies (USDA, HHS, HUD, and SSA) for more detailed review of their efforts to reduce improper payments. These agencies accounted for over 97 percent of the improper payments reported in fiscal years 1999 and 2000 financial statements. 
At these agencies, we spoke to officials in the inspector general, chief financial officer, and program offices and obtained reports and other documentation evidencing actions that they have taken or are planning to take to reduce improper payments. We focused on obtaining information on the agency actions to reduce the improper payments reported in their financial statements and/or performance plans. We also obtained information on barriers that they encountered when attempting to develop and/or implement methodologies to reduce improper payments. Finally, we met with OMB officials and reviewed documents regarding OMB’s progress in implementing recommendations made in our prior report. This included a review of revisions to OMB Circular A-11 and correspondence and other guidance to agencies on improper payment-related issues. We performed our work from May 2001 through April 2002. Our work was conducted in accordance with generally accepted government auditing standards. We provided a draft of this report for comment to the Secretaries of HHS, HUD, and USDA, the Commissioner of SSA, and the Director of OMB. We received written comments from HHS, HUD, and SSA and have reprinted those comments in appendixes IV, V, and VI, respectively. USDA responded by e-mail and OMB provided oral comments. Improper payments are acknowledged to be a widespread and significant problem in the federal government, with billions of dollars in such payments reported annually in agency financial statements and billions more identified in audit and other reports. For example, federal agency financial statements for fiscal years 1999 through 2001 show improper payments of about $20.7 billion, $19.6 billion, and $19.1 billion, respectively. Although significant, these amounts are not indicative of the magnitude of improper payments governmentwide. Currently, relatively few agencies report improper payments in their financial statements, even though our audits and those of agency OIGs continue to identify serious improper payment problems and related internal control issues. The following table summarizes improper payments reported in agencies’ fiscal years 1999, 2000, and 2001 financial statements. The dollar amount of improper payments reported annually for fiscal years 1999 through 2001 decreased by about $1.7 billion, and the number of agencies reporting a specific amount of improper payments in their financial statements declined from 8 to 6. A review of the table above shows that, for fiscal years 1999 and 2000, 8 agencies collectively reported $20.7 billion and $19.6 billion, respectively, whereas for fiscal year 2001, 6 agencies collectively reported improper payments of about $19.1 billion. About $18.8 billion (99 percent) of the improper payments reported in the fiscal year 2001 financial statements occurred in the programs administered by HHS, HUD, and SSA. In total, 13 agencies acknowledged making improper payments or reported a specific amount in their financial statements within the 3-year time frame. Ten of the 13 agencies reported or acknowledged making improper payments for fiscal year 2001. A comparison of the fiscal years 2001 and 2000 improper payment information reported in agency financial statements revealed several significant differences in the programs reporting improper payments and the amounts reported. In fiscal year 2000, USDA’s Food and Nutrition Service’s (FNS) financial statements identified improper food stamp payments of $1.1 billion.
For fiscal year 2001, FNS did not publicly issue separate financial statements. While USDA’s financial statements contained FNS’s financial information and recognized that improper payments occurred in the food stamp program, the statements did not identify a specific improper payment amount. HUD reported improper payments of $1.25 billion in fiscal year 2000 and $2 billion in fiscal year 2001. Specifically, in fiscal year 2000 it estimated $1.94 billion in annual housing subsidy overpayments and $.69 billion in underpayments. In fiscal year 2001, it reported overpayments of $2.65 billion and underpayments of about $.65 billion. (The $1.25 billion and $2 billion totals reflect these estimated overpayments net of underpayments.) In commenting on this report, HUD noted that, in fiscal year 2000, it also identified $617 million in improper payments due to underreporting of tenant income. We do not include this amount in the HUD total in table 1 because HUD’s fiscal year 2000 financial statement notes that the $1.25 billion and $617 million “should not be considered totally additive.” An unknown amount of overlap exists in these amounts. The increase in improper payments reported since fiscal year 1999 reflects HUD’s revisions to its methodology for measuring the types of errors that make up its improper payments estimate. In fiscal year 2000, it expanded the scope of its error estimation to include subsidy determination errors by its administrative intermediaries in addition to the impacts of tenant underreporting of income. In fiscal year 2001, it refined its methodology to obtain a combined estimate of both types of errors. More specifically, HUD’s error measurement methodology covers errors made by public housing authorities, owners, and agents (POAs) in determining tenant income and rent as well as errors made by the tenants in reporting their income. Past estimates only considered the impact of tenants underreporting income for amounts over $3,000 and used a sample of tenants from HUD’s data systems. However, the fiscal year 2001 estimate was based on more stringent criteria. It considered tenant underreported income for amounts over $1,000 and was based on a random selection of all tenants, including those who were not covered in the past. At the Office of Personnel Management (OPM), the fiscal years 1999 and 2000 financial statements identified improper payment amounts for the retirement, federal employees’ health benefits, and federal employees’ group life programs. The fiscal year 2001 statements did not identify improper payment amounts, but recognized that an unidentified amount of improper payments occurred in the retirement and federal employees’ health benefits programs. At the Department of Labor, the fiscal year 2001 financial statements identified the total amount of improper payments for three of its programs but did not separately identify the improper payments relating to each program—as it had done in the past. Recent audits as well as information provided by agency OIGs continue to demonstrate that improper payments are much greater than has been disclosed thus far in financial statements. For example, the IRS’s EITC program has historically been vulnerable to high rates of invalid claims. IRS follows up on only a portion of the suspicious EITC claims it identifies. The amount of improper payments included in the almost $26 billion IRS disbursed for EITC in fiscal year 2001 is unknown.
However, based on an IRS report of the estimated $31.3 billion in EITC claims made by taxpayers for tax year 1999, an estimated $8.5 billion to $9.9 billion (27 percent to about 32 percent) should not have been paid. Weaknesses in IRS’s controls over refund disbursements, particularly those related to EITC, continue to expose the federal government to material losses due to disbursing improper refunds. Similarly, while DOD reported improper payments related to the Military Retirement Fund for fiscal years 1999, 2000, and 2001, departmentwide estimates of improper payments remain unreported in the financial statements. For example, over the last several years DOD has overpaid its contractors by hundreds of millions of dollars. Specifically, according to DFAS Columbus (the largest centralized DFAS disbursing activity) records, in fiscal year 2001 DOD contractors refunded about $128 million primarily attributed to DFAS payment errors and duplicate invoices. This amount might not reflect total improper payments DOD made to contractors because contract reconciliation is likely to identify additional overpayments. Further, although small in relation to the approximately $78 billion that DFAS Columbus disbursed in fiscal year 2001 to DOD contractors, this amount represents a sizable amount of cash in the hands of contractors beyond what is intended to finance and pay for the goods and services DOD purchases, and is indicative of the need for stronger internal controls within the payment system. Periodically and consistently estimating the rate and/or amount of improper payments and publicly reporting progress enables agencies and others with oversight and monitoring responsibilities to measure progress over time and determine whether further action is needed to minimize future improper payments. It enhances accountability by identifying performance measures and progress against those measures and by helping to establish performance and results expectations. Improper payment information is currently reported in a variety of places, including annual financial statements, performance plans, and the budget. However, neither the financial statements, as previously discussed; the performance plans; nor the budget provide a comprehensive view of either the scope of the improper payment problem or of individual agency or governmentwide efforts to reduce it. As such, they provide limited information for use in establishing (1) appropriate response levels to correct the problems or (2) responsibility—holding organizations and/or individuals accountable for performance and results. GPRA requires agencies to prepare annual performance plans that inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance against these goals, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. Agencies develop plans for use by agency officials, the administration, the Congress, and the public. They provide information on the purpose and effectiveness of federal programs and on the resources spent in conducting them. 
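Whether reported in financial statements, performance plans, or the budget, improper payment estimates ultimately rest on simple rate arithmetic applied to reviewed or sampled payments. As a purely illustrative sketch (written in Python; the variable names are ours, and the figures are the EITC amounts cited above), the following reproduces the 27 to 32 percent range implied by the tax year 1999 EITC estimate. The same arithmetic underlies the earlier screening result, in which $395 million of the $587 million in examined EITC claims (about 67 percent) was found to be invalid.

    # Illustrative only: improper payment rates implied by the EITC figures cited in this report.
    total_claims = 31.3      # estimated tax year 1999 EITC claims, in billions of dollars
    improper_low = 8.5       # low end of the estimated improper claims, in billions
    improper_high = 9.9      # high end of the estimated improper claims, in billions

    rate_low = improper_low / total_claims    # about 0.27
    rate_high = improper_high / total_claims  # about 0.32

    print(f"Estimated improper payment rate: {rate_low:.0%} to {rate_high:.0%}")
    # Prints roughly "27% to 32%", matching the range cited above.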
On February 14, 2001, the Director of OMB issued a memorandum to agency heads requiring agencies to update their fiscal year 2002 performance plans to include performance goals for each of the President’s governmentwide reforms that would significantly enhance the administration and operation of the agency. One of these reforms is reducing improper payments to beneficiaries and other recipients of government funds. We did not determine the level of significance of improper payments at any agency. However, as a result of the memorandum, we expected improper payment-related actions to have been discussed in agencies’ fiscal year 2002 performance plans, at least for those agencies required to report improper payment information in their initial budget submissions to OMB. We reviewed the plans for improper payment-related issues. In general, our review revealed that none of the 15 performance plans examined contained detailed information for all of the areas that GPRA requires agencies to address—goals, measures, strategies, or procedures to validate performance data—for each reform discussed in the performance plan. Only 4 of the plans comprehensively addressed any of the four areas in their improper payments discussion. Table 2 summarizes our evaluation of the extent to which the annual performance plans contained improper payment-related discussions for the four areas GPRA requires to be addressed—goals, measures, strategies, and procedures to validate performance data—for the 15 CFO Act agencies required by OMB Circular A-11 to report improper payment information in their initial budget submissions. This evaluation was based on the performance plan assessments contained in the separate agency reports that we issued last year. (Appendix III lists these reports.) We considered an agency to have comprehensively addressed goals, measures, strategies, and procedures to validate performance data if our report did not reveal any weaknesses for how the performance plan addressed each of those elements for improper payment-related issues. We found that, although 10 of the 15 agencies discussed improper payments in their fiscal year 2002 performance plans, none comprehensively addressed improper payments for all four of the plan elements required by GPRA. Furthermore, only 4 of the 15 agencies comprehensively addressed improper payments for any of the GPRA-required elements. In addition, six performance plans discussed at least one of the elements but not comprehensively. That is, the plans acknowledged improper payments and cited some information regarding one or more of the elements, but that information was not adequate to use as a basis for evaluating agency actions or progress in addressing improper payment problems. Further, four plans made no reference to improper payments. Our key outcomes reports noted the following examples of weaknesses in agency performance plans. Within HHS, the Centers for Medicare and Medicaid Services (CMS) had adequate procedures to validate performance data, but the strategies needed to achieve its improper payment goals were not adequately addressed and these goals were not consistently measurable. In some instances, the plan stated generally that the accomplishment of a goal was the target and did not explain, in sufficient detail, CMS’s strategies to ensure that the goal would be accomplished. In others, progress was difficult to measure because of continual goal changes that were sometimes hard to track or that were made without sufficient explanation.
Specifically, in both the fiscal years 2001 and 2002 performance plans, goals for the Medicare program integrity outcome were dropped, revised or subsumed into other goals, or added. While refinements may be desirable as efforts become more mature, the inability to track individual initiatives makes it difficult to measure progress in achieving outcomes. Furthermore, because many of the baselines and measures for the new and revised goals were under development, CMS’s intended performance regarding them was unclear. IRS’s EITC program under Treasury has historically been vulnerable to high rates of improper refunds—paying billions of dollars for improper EITC claims. Treasury’s performance plan did not report on performance measures for any aspect of IRS’s administration of the EITC, and IRS lacked performance measures for the program. Therefore, we are unable to assess progress toward achieving less waste, fraud, and error in the program. The performance plan noted that, in 1998, IRS began implementing a 5-year EITC compliance initiative that involved several components directed at the major sources of EITC noncompliance. While IRS is collecting data on the initiative’s results, the data are not yet sufficient to determine whether the initiative has reduced the overall noncompliance rate. SSA’s annual performance plan includes several goals and performance measures targeted specifically at increasing program integrity and reducing fraud and abuse. Yet the performance plan was not clear about SSA’s progress in meeting these goals because of continued revisions to prior indicators and goals as well as SSA’s inability to provide timely performance data. Since the conclusion of our fieldwork, some agencies have issued their annual performance plans for fiscal year 2003. Some of those plans may have addressed improper payments more thoroughly. In future work, we plan to review these plans for information on improper payments, compare fiscal years 2002 and 2003 performance plans for consistency, and assess the progress reported in achieving improper payment-related outcomes and strategies. Although no specific requirement exists for public reporting on improper payment-related activities at the agency level, the administration has recognized the importance of reducing governmentwide improper payments. The President’s Management Agenda, Fiscal Year 2002, discusses the reduction of improper payments as a key element under its initiative to improve financial performance within the government. As a result of this initiative, OMB revised its Circular A-11 to incorporate needed efforts to address improper payments within 16 selected federal agencies and about 50 programs within those agencies. Section 57.3 requires the selected agencies to include specific improper payment-related information in their fiscal year 2003 initial budget submissions to OMB. More specifically, the circular states that agencies that currently estimate improper payment rates for the programs identified are required to submit the following data: estimated improper payment rates projected for fiscal year 2001; actual improper payment rates for fiscal years 1999 and 2000, if available; target rates (goals) for improper payments for fiscal years 2002 and 2003; causes of improper payments; variances from targets or goals that were established; and descriptions and assessments of the current methods for measuring the rate of improper payments, and of the quality of data resulting from these methods.
The circular also requires each of these agencies to submit an assessment of the effectiveness of current agency efforts to minimize improper payments as well as an action plan that includes additional actions the agency could take to prevent and correct improper payments, an evaluation of the costs and benefits of implementing these corrective actions, a description of programmatic and legal considerations, and an assessment of the extent to which undertaking these actions would hinder the achievement of major program objectives. For programs administered by states or other organizations for which agencies are not currently estimating improper payment rates, the circular requires each agency to submit an analysis of whether and how improper payments could be estimated and of the costs and benefits of collecting new or additional data. In preparing their responses, agencies were told to consider programmatic and legal obstacles to collecting additional data or establishing estimation procedures. Both the circular and the President’s Management Agenda, Fiscal Year 2002, note that OMB plans to review the information provided and coordinate with each agency to develop detailed action plans on a program-by-program basis. Furthermore, in August 2001, OMB distributed a memorandum to agency CFOs and budget officers containing supplemental guidance on submitting the improper payment information required by OMB Circular A-11. Among other things, it identified eight basic principles all federal agencies should recognize to minimize improper payments. The principles are as follows: (1) prevention is more effective than after-the-fact efforts, (2) program payment integrity is a joint management responsibility, (3) improper payments should be kept to the lowest practical level, (4) payments should be balanced with program goals and other competing priorities, (5) controls should take into account both the benefits and costs, (6) performance measurement and reporting provide better accountability, (7) data verification strengthens program payment integrity, and (8) impediments to effective controls may exist and should be considered. The discussion of one of these principles—performance measurement and reporting provides better accountability—further notes that “Public reporting of progress enhances accountability. Agency performance can be reported in a variety of places, including reporting under the Government Performance and Results Act, annual financial reports, or regularly-issued stand-alone program reports.” The memorandum also requires documentation to support any conclusion that estimating improper payments is unnecessary or would not be cost-beneficial because the program is not susceptible to significant improper payments, has strong internal controls to prevent improper payments, or has not experienced an improper payment problem, or that the burden outweighs the benefit to be gained by developing estimates. OMB is currently analyzing the submissions and revising the requirements based on feedback from agencies.
In addition, other OMB initiatives include (1) working with the Congress on legislation to improve agency access to data for data sharing and drafting related agency guidance, (2) refining the OMB Circular A-11 guidance on reporting improper payment activity, (3) funding improper payment activities in the budget, (4) establishing electronic government Web sites including GovBenefits—which should improve the up-front accuracy of benefit determinations, and (5) assessing quarterly executive branch management scorecards to track how well agencies are executing the President’s management initiatives. These actions are appropriate for tracking and managing progress in this area. Unfortunately, the vehicle being used to assemble these data inhibits public disclosure of the information. OMB Circular A-11 requires selected programs and agencies to submit improper payment data with their initial fiscal year 2003 budget submissions, but Section 36 of the circular prohibits the submissions from being publicly disclosed. Therefore, we reviewed the Budget of the United States Government, Fiscal Year 2003, to determine the improper payment information it contained. Since OMB incorporates the individual agencies’ budget requests into the budget and since the administration has made the reduction of improper payments a priority, we expected to find some improper payment information in the budget. Our review showed minimal discussion of improper payments as compared to the detailed information OMB Circular A-11 requires agencies to provide in their initial budget submissions. For example, even though OMB Circular A-11 requires 15 CFO Act agencies to provide improper payment information on about 50 programs, the budget shows actual improper payment rates for fiscal year 2000 for 2 programs—food stamps and Supplemental Security Income; target error rates for food stamps and Supplemental Security Income; types and causes of improper payments for the Department of Education’s Student Financial Aid Program and HHS’s Medicare and Medicaid programs; and a description of additional actions 6 agencies could take to prevent or correct improper payments for 8 programs. Furthermore, the Budget did not contain information for any agency for several areas cited in OMB Circular A-11, including an analysis and description of whether and how improper payments could be estimated, an analysis of the costs and benefits of collecting new or additional data, and obstacles to collecting additional data or establishing estimation procedures. Given that the agency financial statements, fiscal year 2002 performance plans, and the budget contained little substantive information on improper payments, we attempted to locate other data that might offer added insights into agency efforts. One source was agency responses to June 2001 congressional requests to agency heads and OIGs for information on agency efforts to control improper payments. The requests, from the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs, asked for specific information about the five components of internal control—control environment, risk assessment, control activities, information and communication, and monitoring—outlined in our report that addressed strategies to manage improper payments. The congressional requesters received responses from either the agency head, the IG, or both for 9 agencies.
Specifically, we found that, for the 15 CFO Act agencies required to report improper payment information under OMB Circular A-11, 9 agency heads and 8 OIGs responded to the congressional request. For 6 of the agencies, neither the agency head nor the OIG responded. Of those that did respond, 5 agency heads and 2 OIGs addressed all of the internal control components, as requested, as demonstrated in the following examples. The responses of the Secretary and OIG of HUD show, among other things, how HUD (1) promoted an environment of accountability to reduce improper payments by establishing the Rental Housing Integrity Improvement Project (RHIIP) to help ensure that the “right benefits go to the right person,” (2) estimated its improper payments in the past and is now implementing plans for a more comprehensive error measurement process, and (3) reduced the risk of improper payments by implementing new rent calculation systems and performing data matches with IRS and SSA data. The Secretary of HHS’s response showed how the agency (1) addressed the control environment through numerous oversight and program integrity activities, (2) computed an error rate for Medicare fee-for- service claims and determined the cause of these improper payments, (3) established a GPRA goal to reduce the percentage of improper payments made under the Medicare fee-for-service program, (4) worked on a methodology to measure improper payments for the Medicare Managed Care and Medicaid programs, (5) assisted medical providers in submitting claims correctly, and (6) developed statistical analyses to stem fraud, waste, and abuse. The Acting Commissioner of SSA explained how SSA (1) created a culture of accountability through the day-to-day operation of its compliance program, (2) estimated payment errors through stewardship reviews, (3) detected and deterred improper payments through a series of system enhancements, (4) collected improper payments by using various methods such as credit bureaus and the Treasury offset program, and (5) established payment accuracy goals and methods to track progress as part of its annual performance plan. We expected agencies to have accurate and timely information available to respond to the congressional request since, in February 2001, OMB had asked these agencies to address their improper payment issues in their fiscal year 2002 performance plans. However, for the most part, the responses provide only partial answers showing that, while the agencies agreed that managing improper payments is important, they lacked comprehensive strategies to do so. For example, one agency answered all of the questions, yet indicated that it did not know the aggregate amount of improper payments made on a departmentwide level and that the most recent estimate of improper payments for one of its high-risk programs was from 1997. Another agency stated that it has not performed a risk assessment and has no formal process to estimate or track improper payments because it has an inherent culture of high standards, operating efficiency, sophisticated systems, and personal attention to detail, which results in few improper payments. A third agency could not provide substantive answers to the request. Rather, it stated that while audits have recommended improving internal controls, it does not believe they disclosed an unacceptable level of risk. This agency did not estimate the amount of its improper payments or provide details of any related risk assessments. 
The Congress and the administration have clearly indicated that agencies should consider the reduction of improper payments a top priority. Despite this focus, little improper payment information is publicly available. Public disclosure provides information against which agency efforts to reduce improper payments can be measured and evaluated. It can also help form a basis for holding agency officials, the administration, and the Congress accountable for actions that reduce improper payments and improve program performance. The USDA, HHS, HUD, and SSA collectively reported about $19.1 billion of improper payments in their fiscal year 2000 financial statements. Individual amounts of improper payments reported ranged from $1.1 billion for the food stamp program at USDA to $12.5 billion for Medicare-related payments at HHS. Because of the magnitude of the amounts reported, we contacted representatives at each agency to determine their efforts to reduce and manage improper payments. Each agency has been actively working to address its improper payment problems. These efforts typically involved activities related to the five components of internal control—control environment, risk assessments, control activities, information and communications, and monitoring. The following sections highlight some of the efforts undertaken by these agencies. The control environment is perhaps the most critical element in reducing improper payments because it establishes a culture of accountability and assigns responsibility for actions. A sound control environment stresses the importance of prevention of improper payments and efficient and effective program operations while maintaining a balance with privacy and information security in a world where most payments are made electronically. In establishing a sound control environment, agency management recognizes that personnel throughout the organization make internal controls work and, therefore, human capital issues must be seriously considered in all changes to the system of internal control. As noted in our report on strategies to manage improper payments, changes in the control environment may require actions by both the Congress and agency officials. These actions can include enacting legislation, setting and maintaining the ethical tone, delegating roles and responsibilities, and implementing human capital initiatives. Legislative and management actions that affected the control environment over improper payments occurred at each of the agencies. Legislative actions involved passing laws that revised program operations and called for various prevention and detection methodologies and periodic reporting on the status of agency improvement efforts. For example, the Agriculture Risk Protection Act of 2000 authorizes additional resources to assist the Risk Management Agency’s (RMA) Federal Crop Insurance Corporation (FCIC) in identifying fraud, waste, abuse, and mismanagement in its programs; helps FCIC collect bad debts by imposing severe penalties and interest and offsetting future benefit payments for those who willfully and intentionally provide false or inaccurate information with respect to a policy or plan of insurance; and requires RMA’s Office of Risk Compliance to use data mining, data warehousing, and data reconciliation to identify potential improper payments and provides up to $23 million in funding for these efforts through fiscal year 2005. Two legislative reforms have helped the HHS’s CMS enhance Medicare’s anti-fraud and abuse activities. 
First, the Health Insurance Portability and Accountability Act of 1996 established the Medicare Integrity Program, which provides CMS with levels of funding for Medicare program safeguard activities such as audits of cost reports and medical prepayment claim reviews. In the cost report area alone, CMS reported $570 million and $493 million in improper payments for fiscal years 2000 and 2001, respectively. In addition, the Balanced Budget Act of 1997 provided CMS with increased authority to keep health care providers who have been convicted of health care related crimes out of the Medicare program, exclude providers who abuse the program, and impose monetary penalties on such providers. At SSA, the Foster Care Independence Act of 1999 helped strengthen program integrity by authorizing SSA to conduct matches with Medicare data and simplify procedures to gain access to recipient records from financial institutions to help verify Supplemental Security Income (SSI) recipients’ financial eligibility; authorizing SSA to prohibit individuals who provide false or misleading eligibility information from collecting Old Age and Survivors Insurance (OASI) and Disability Insurance (DI) and SSI cash benefits; making a representative payee—a person authorized to receive benefit payments for a qualified individual—liable for OASI and DI or SSI overpayments caused by payments made to deceased beneficiaries; and authorizing SSA to use all available debt collection authorities to recover SSI debt. Equally, if not more important, an effective control environment requires management’s commitment to reduce improper payments. Agency management can affect the control environment by, among other things, setting expectations and goals for reducing improper payments, implementing program-specific measures to reduce fraud and errors, calling for periodic performance reporting, and requiring follow-up actions based on performance results. For example, USDA’s FNS administers the food stamp program under which state welfare agencies certify eligibility and provide benefits to households. Although not reported in USDA’s financial statements, FNS identified about $976 million in food stamp program overpayments for fiscal year 2001. FNS strives to increase the accuracy of eligibility determinations and benefit computations and also oversees the level of benefits issued. Its food stamp Quality Control System (QC) measures the accuracy of eligibility determinations and benefit computations and then publicly reports each state’s overissuance rate. FNS reviews these data to identify areas needing corrective action and practices that are effective in improving payment accuracy. USDA management also encourages state agencies to minimize improper payments by offering financial incentives for those with high payment accuracy and imposing sanctions on those with payment overissuance rates above the national average. In fiscal year 2000, FNS imposed $46 million in financial sanctions on 18 states with overissuance rates above the national average. At the same time, it provided $55 million in supplementary funding to 11 states with payment overissuance rates equal to or below 5.9 percent—a rate well below the national average of about 8.9 percent. These actions have resulted in a decline in state payment error rates from 10.7 percent in fiscal year 1998 to 9.9 percent in fiscal year 1999, to 8.9 percent for fiscal year 2000. 
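The incentive and sanction mechanism just described lends itself to a simple illustration. The following sketch (written in Python for illustration only; the function and thresholds are our simplification of the pre-2002 rules described above, not FNS's actual formulas) classifies a state's quality control outcome using the fiscal year 2000 figures cited in the text: sanctions for overissuance rates above the national average of about 8.9 percent and supplementary funding for rates at or below 5.9 percent.

    # A simplified sketch of the food stamp QC incentive/sanction logic described above.
    # Thresholds reflect the fiscal year 2000 figures cited in the text; the actual FNS
    # rules and the formulas for sanction and incentive amounts are more involved.

    NATIONAL_AVERAGE_RATE = 0.089   # fiscal year 2000 national overissuance rate (about 8.9 percent)
    INCENTIVE_THRESHOLD = 0.059     # supplementary funding went to rates at or below 5.9 percent

    def qc_outcome(state_overissuance_rate):
        """Classify a state's payment-accuracy outcome under the simplified rules."""
        if state_overissuance_rate > NATIONAL_AVERAGE_RATE:
            return "sanction"
        if state_overissuance_rate <= INCENTIVE_THRESHOLD:
            return "supplementary funding"
        return "no action"

    print(qc_outcome(0.095))   # a state at 9.5 percent would have been subject to sanction
    print(qc_outcome(0.055))   # a state at 5.5 percent would have qualified for supplementary funding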
However, FNS and OMB officials believe that the Farm Security and Rural Investment Act of 2002 will likely reduce the number of states sanctioned in future years, as only those with persistently high rates of improper benefit and eligibility determinations (those exceeding 105 percent of the national performance measure for 2 or more consecutive fiscal years) would be penalized. SSA management demonstrated its commitment to reduce improper payments in its 1997 strategic plan, Keeping the Promise. One of the strategic goals cited in the plan is to make SSA program management the best in the business with zero tolerance for fraud and abuse. To achieve this goal, SSA initiated a program of anti-fraud efforts to (1) eliminate wasteful practices that erode public confidence in the Social Security programs, (2) vigorously prosecute individuals or groups who damage the integrity of the programs, and (3) change programs, systems, and operations to reduce instances of fraud. Senior SSA management oversees the implementation and coordination of these fraud elimination strategies. At the local level, each SSA region has a Regional Anti-Fraud Committee that acts as the focal point for the agency’s effort to combat fraud. At HUD, management took steps to reduce errors in the rental housing assistance programs by establishing the Rental Housing Integrity Improvement Project. A RHIIP advisory group develops and implements plans to reduce program errors and correct related material management control deficiencies in HUD’s high-risk subsidized rental housing programs. According to a HUD official, the advisory group has taken steps to increase HUD’s income data matching authority and utilization to enable upfront income data sharing to avoid subsidy errors attributed to unreported and underreported income sources. In addition, a RHIIP subgroup develops rent calculation software and proposals for program simplification. Risk assessment is a key step in gaining assurance that programs are operating as intended and that they are achieving their expected outcomes. It entails a comprehensive review and analysis of program operations to determine where risks exist, what those risks are, and the potential or actual impact of those risks on program operations. The information developed during a risk assessment forms the foundation or basis upon which management can determine the nature and type of corrective actions needed. It also gives management baseline information for measuring progress in reducing improper payments. In performing a risk assessment, management should consider all significant interactions between the entity and other parties as well as internal factors at both the entitywide and activity levels. The specific risk assessment methodology used can vary by organization because of differences in missions and the methods used in assigning risk levels. As we noted in the improper payment strategies report cited earlier, risk identification methods often include qualitative and quantitative ranking activities, management conferences, forecasting and strategic planning, and consideration of findings from audits and other assessments. The information obtained from the four agencies we visited revealed frequent use of similar risk assessment activities. USDA’s FNS conducts annual quality control reviews to identify the extent and causes of improper payments in several of its programs, including the food stamp program. The two most recent reviews estimated overpayments of $1.1 billion and $976 million in fiscal years 2000 and 2001, respectively.
These reviews provided more detailed information about the causes of the improper payments. For example, the report of fiscal year 2000 payments found that about 56 percent of the overpayments and underpayments in the food stamp program occurred when state food stamp workers made mistakes such as misapplying complex food stamp rules in calculating benefits. The remaining 44 percent of the errors occurred because participants, either inadvertently or deliberately, did not provide accurate information to state food stamp offices. HHS measures improper payments within the Medicare fee-for-service program and estimated improper payments in this program of $11.9 billion and $12.1 billion for fiscal years 2000 and 2001, respectively. Further, the agency reports that the Medicare fee-for-service claims error rate was reduced to 6.3 percent in fiscal year 2001 from 6.8 percent in fiscal year 2000 and 7.97 percent in fiscal year 1999. However, we cannot conclude that these error rate differences are statistically significant. As reported in the OIG fiscal year 2001 Medicare fee-for-service payments review, “The decrease this year may be due to sampling variability; that is, selecting different claims with different dollar values and errors will inevitably produce a different estimate of improper payments.” CMS has initiated projects to improve the precision of Medicare fee-for-service improper payment estimates and aid in the development of corrective actions to reduce improper payment losses. In fiscal year 2001, CMS implemented a provider compliance rate to measure the appropriateness of claims submitted prior to payments. In addition, CMS developed a comprehensive error testing program that will produce contractor-, provider-, and benefit-specific error rates. These error rates can be aggregated to add greater precision to the national level estimates similar to the Medicare fee-for-service error rate. GAO has designated HUD’s rental housing assistance programs as high risk since 1994. HUD has taken several actions to identify the risks associated with these programs and is working to further refine the procedures currently used to obtain more useful assessment information. In one example, HUD analyzed risk through a study designed to measure postpayment accuracy. An annual study of rent calculation errors estimated the extent, severity, costs, and sources of rent errors for the Public Housing and Section 8 programs. The study, which relied on the integrity of the data supplied by the tenants and third-party income verification sources, matched independent determinations of tenants’ incomes, rents, and subsidies to those made by local public housing agencies (PHAs) and Section 8 staff to identify incorrect rental calculations due to administrative and mathematical errors. The study results, issued in June 2001, reported tenant rental underpayments of approximately $1.7 billion annually (an average of $95 per household) in 34 percent of households and tenant rental overpayments of over $600 million annually (an average of $56 per household) in 22 percent of households. HUD used the study results to strengthen its procedures for ensuring administrative compliance with regulations. In another study, HUD developed an approach to identify differences between tenant federal income tax data and the income tenants reported to HUD by using a large-scale computer matching income verification process.
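The income verification matching HUD describes can be pictured with a small sketch. The following example (written in Python with the pandas library; the tenant records, field names, and dollar figures are hypothetical) flags cases in which independently reported income exceeds the income reported to the housing agency by more than a threshold; the $1,000 threshold mirrors the criterion HUD used in its fiscal year 2001 estimate, as noted earlier. In practice such matches also involve identity resolution, verification of flagged cases, and the privacy safeguards discussed later in this report.

    import pandas as pd

    # Hypothetical records: income tenants reported to the housing agency versus income
    # found on independent sources such as federal tax or wage records.
    reported = pd.DataFrame({
        "tenant_id": [101, 102, 103],
        "income_reported": [12000, 18000, 9000],
    })
    independent = pd.DataFrame({
        "tenant_id": [101, 102, 103],
        "income_independent": [12500, 24000, 9000],
    })

    THRESHOLD = 1000  # flag underreporting above this amount, per the criterion noted earlier

    matched = reported.merge(independent, on="tenant_id")
    matched["underreported"] = matched["income_independent"] - matched["income_reported"]
    flagged = matched[matched["underreported"] > THRESHOLD]

    print(flagged[["tenant_id", "underreported"]])
    # Flagged cases would be referred for verification and possible subsidy recalculation.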
While initial results were effective in identifying certain errors in tenant reporting, HUD is currently developing different methodologies to improve the accuracy of this type of risk assessment. HUD recently began to expand the scope of its error measurement methodology to cover the three primary types of rental assistance program errors—public housing authorities, owners and agents income and rent determinations; tenant reporting of income; and POA billings to HUD for subsidy payments. The current error measurement methodology addresses the first two of these three components and, starting in 2003, HUD intends to annually measure and report on all three error components. HUD’s goal is to reduce processing errors and resulting improper payments by 50 percent by 2005. Once an organization has committed to reducing the risk of improper payments, identified program areas that are at risk, quantified the possible extent of the risk, and has set a goal for reducing the risk, it must act to achieve that goal. Control activities are the policies, procedures, techniques, and other mechanisms designed to help ensure that management’s decisions and plans are carried out. Control activities used by organizations to address improper payments vary according to the specific threats faced and risks incurred. The types of payment activities identified as presenting the most significant risk of improper payments and the kinds of data and other resources available dictate the specific actions pursued by individual entities. Additionally, the actions must comply with all relevant laws and strike a balance between the sometimes competing goals of privacy and program integrity. Given the large volume and complexity of federal payments and historically low recovery rates for certain programs, it is generally most efficient to pay bills and provide benefits properly in the first place. Aside from minimizing overpayments, preventing improper payments increases public confidence in the administration of benefit programs and avoids the difficulties associated with the “pay and chase” aspects of recovering improper payments. However, since some overpayments are inevitable, agencies also need to adopt effective detection techniques to quickly identify and recover them. Detection activities play a significant role not only in identifying improper payments, but also in providing data on why these payments were made and, in turn, highlighting areas that need strengthened prevention controls. The agencies in our study used many different prevention and detection control activities to manage improper payments. The nature of these activities ranged from sophisticated computer analyses of beneficiary and program participant data using data sharing and computer-editing techniques to on-site verification of claim information. Data sharing allows entities to compare information from different sources to identify inconsistencies and thus help ensure that payments are appropriate. For example, data matches of social security numbers and other data can help determine whether beneficiaries are inappropriately receiving payments at more than one address. For government agencies, data sharing can be particularly useful in confirming initial or continuing eligibility of participants in benefit programs and in identifying improper payments that have already been made. Of the four agencies included in our review, SSA is the most active in the data sharing arena. 
It performs over 20 data matches with over 10 federal agencies and more than 3,500 state and local entities. For instance, SSA shares data with HUD so that HUD can perform a match to verify the identity of recipients of housing benefits and identify potentially fraudulent claims. In addition to sharing data with other entities, SSA also uses data from other sources to perform matches to help prevent and detect improper payments in its programs. For example, it obtains death records from states to determine if deceased individuals are still receiving benefit checks. SSA estimates that it saves $350 million annually for OASI and DI, and $325 million annually for SSI through its use of data matching. Further, the savings are not limited to those realized by SSA. According to SSA, its matches save other agencies approximately $1.5 billion each year. According to SSA’s Performance and Accountability Report, Fiscal Year 2001, it uses computer matching and other payment-safeguard activities to assist it in finding and correcting improper payments and in identifying and deterring fraud in its entitlement programs. In commenting on our report, SSA noted that the OASI accuracy rate for fiscal year 2000 was 99.9 percent and the SSI accuracy rate was 94.7 percent. It did not provide a fiscal year 2000 DI accuracy rate. In continuing efforts to improve payment accuracy, SSA invested more than $1 billion in processing over 9 million alerts in fiscal year 2001. Current estimates indicate that these payment-safeguard activities detected or prevented about $7 billion in overpayments. Data mining is a computer-based control activity that analyzes diverse data for relationships that have not previously been discovered. The central repository of data commonly used to perform data mining is called a data warehouse. Data warehouses store tables of historical and current information that are logically grouped. Applying data mining to a data warehouse allows an organization to efficiently query the system to identify potential improper payments, such as multiple payments for an individual invoice to an individual recipient on a certain date, or to the same address. The large number of Medicare transactions precludes a manual examination of each transaction to identify associations and patterns of unusual activities, making data mining an effective and efficient alternative. CMS is currently involved in two data mining efforts. Its claims administration contractors currently use data mining and statistical analysis as part of their postpayment review activities. At the request of the states, CMS has also undertaken a Medicare/Medicaid data exchange project. This project’s goal is to use data mining to query data from both programs in an effort to find fraudulent or abusive patterns that may not be evident when billings for either program are viewed in isolation, but would become evident when they are compared. Computerized edit checks are used to ensure that valid and authorized transactions are recorded and executed according to management and program requirements. USDA’s RMA provides the regulations, crop policies, underwriting standards, and loss-adjustment standards for crop insurance policies, although private insurance companies deliver the actual crop insurance program. RMA’s crop insurance program uses a variety of computer-generated edit checks to ensure valid program requirements are met. 
These edit checks include ensuring that the liability was not increased at the time of loss, the cause of loss was insurable according to policy language, and the insurance company applied appropriate calculations to determine the loss payment. In addition, RMA matches each producer’s social security and employer identification numbers with the agent and loss adjuster to ensure that the producers have not been debarred from participating in the crop insurance program. Once information is accepted through the above edit processes, RMA loads it into databases where it is subject to further audit and review. The computerized data sharing and data mining efforts discussed in this report help identify improper payments by providing more useful and timely access to information. These techniques can result in significant savings by identifying client reporting errors and misinformation during the eligibility determination process—before payments are made—or by detecting improper payments that have been made. However, the extensive use of personal information in an evolving technological environment raises new questions about how individual privacy should be protected. In the federal arena, such activities must be implemented consistent with all protections of the Privacy Act of 1974, as amended by the Computer Matching and Privacy Protection Act of 1988, and other privacy statutes. Not every control activity identified involved computer applications. The agencies that participated in this review also used on-site visits and manual claims reviews to help reduce improper payments. For example, USDA’s Farm Service Agency (FSA) administers programs for the Commodity Credit Corporation (CCC). Among other income and commodity support programs, CCC indemnifies food producers for the extraordinary losses of crops or livestock resulting from weather-related disasters and pest infestations. Over 2,200 FSA local county offices are responsible for ensuring that producers provide reliable claim information. FSA performs random reviews of about 5 to 20 percent of producer-provided information to verify that, among other things, acreage has not been overstated. (These spot checks search for anomalies such as numbers outside of reasonable ranges.) Similarly, at HHS, CMS manually reviews Medicare claims to determine whether benefits are provided to eligible beneficiaries, charges are covered, and services are medically necessary and reasonable. Once an organization has identified its improper payment-related risks and undertaken activities to reduce them, federal officials with program management, oversight, and monitoring roles need relevant, reliable, and timely information to help them make operating decisions and monitor performance on a day-to-day basis and over time. For example, a major objective of the Federal Financial Management Improvement Act is to have systems that provide good cost accounting information that program managers can use in managing day-to-day operations. Managerial cost accounting is aimed at providing reliable and timely information on the full cost of federal programs, their activities, and outputs. This cost information can be used by the Congress and federal executives in making decisions about allocating federal resources, authorizing and modifying programs, evaluating program performance, and developing the information to support GPRA requirements. The need for information and communication extends beyond organizational boundaries. 
Educational activities for both beneficiaries and other program participants help reduce improper payments and strengthen program operations. Complex program regulations can be confusing to both agency personnel and beneficiaries and thus can potentially contribute to improper payments. The better educated agency employees, contractors, and beneficiaries are about what is expected of them and the consequences of not meeting those expectations, the greater the chances for reducing fraud and errors in the payment process. All four agencies visited educated recipients and service providers on complex program regulations using various mechanisms, including the Internet and printed materials. For example, at USDA, FSA maintains a Web site with program descriptions and information for producers. This site has hyperlinks to additional information, guidance, and contacts for FSA and CCC. It includes links to farm loan information, youth loans, disaster assistance, price supports, and conservation programs. Furthermore, CCC regularly sends out news releases explaining policies and procedures. Agencies also use printed materials to educate recipients and service providers. HUD publishes reference guides, handbooks, forms, and other tools for homeowners and lenders. Its OIG has also published fraud prevention guidance, Guidelines for Public Housing Authorities to Prevent, Detect and Report Fraud. At USDA, FNS publishes guidance that focuses on improving both access to the food stamp program and the accuracy of eligibility requirements for benefit determinations. For example, in September 2001, FNS updated its food stamp program fact sheet, which is distributed to applicants in state food stamp agencies and is available on its Web site. The fact sheet describes the rules and types of documentation that applicants will need to provide at interviews to verify eligibility. SSA has also developed brochures and printed materials as part of its campaign to keep the public informed about Social Security programs. When discussing actions to reduce improper payments, officials at all four agencies cited barriers that restricted their ability to better manage their programs against improper payments. Generally, agency officials noted that they encounter barriers due to legislative provisions, program design factors, and resource limitations. It should be recognized that many of these barriers exist as a result of decisions to ensure beneficiary privacy and other data safeguards, the inherent nature of some federal programs, and budgetary realities. As a result, it may be difficult to eliminate or mitigate these barriers to the point where they no longer restrict the actions agencies can take in certain areas to better manage their improper payment problems. However, to the extent that this is the case, federal agencies, the administration, the Congress, and the public must recognize that some level of improper payments will occur because of these decisions. This section of the report discusses these types of barriers. Legislative actions can give agencies the authority to implement activities to identify improper payments and, subsequently, to hold the responsible parties accountable. They can compel agencies to work together using common data to detect and prevent improper payments, and can authorize agencies to develop incentive programs to increase accuracy in program administration. Yet they can also limit an agency’s ability to take actions to reduce improper payments.
Agencies trying to identify ineligible individuals receiving government benefits and hold them accountable have met with legislation-based barriers that limit their efforts to minimize improper payments. HUD officials told us that, to reduce improper payments in subsidized housing programs, they could benefit by having access, even if it is only limited access, to data from other federal agencies and by sharing relevant information with entities implementing HUD’s programs. However, they stated that the Internal Revenue Code and the Privacy Act of 1974 have prevented or made it difficult for HUD to obtain this information and have limited how HUD can use it. Specifically, HUD officials noted that the agency can only disclose federal tax data to the tenants and not to the POAs—the entities that determine monthly housing benefits based, in part, on income information. When HUD identifies discrepancies, it sends letters to the tenants notifying them of the discrepancies and directing them to submit revised income information to their respective POAs. At the same time, HUD notifies the POAs that discrepancies exist between the income in HUD’s tenant databases and federal tax data for specific tenants, but it is prohibited from identifying the specific amounts in question. HUD then requests that the POAs resolve the unspecified discrepancies and report the resolution to HUD. Data currency is also a factor. HUD receives taxpayer income data in September for the previous year and, by then, many of the beneficiaries were either no longer working, had changed jobs, or had moved. While more timely data are available, legislation prevents HUD from using it. For example, the HHS Office of Child Support Enforcement maintains the National Directory of New Hires containing employee wage data that is updated quarterly, versus the IRS data that is updated annually. However, Section 453 of the Social Security Act limits use of the data to those entities listed in the act, and HUD is not one of those entities. Some improper payments are inevitable because agencies are not permitted to stop or adjust payments until the due process hearing or appeals processes are completed. For example, SSA disburses SSI payments to recipients at the beginning of the month based on the income and asset levels recipients expect to maintain during the month. Some government programs pay benefits in advance under the assumption that the beneficiary’s circumstances, such as income and asset levels, will remain the same during the period for which payment was rendered. If SSA initially determines that an overpayment occurred, court decisions and language in the Social Security Act allow individuals to continue receiving the same amount of SSI and DI benefits pending the results of a hearing to determine eligibility. If the initial determination is affirmed, the payments made during the hearing and appeals processes are considered overpayments, which SSA may recover using a variety of means. USDA’s FNS faces a similar situation. FNS officials stated that the Privacy Act of 1974 has several disclosure prohibitions, access and amendment provisions, and record-keeping requirements that hinder its efforts to share information with other federal agencies and with state agencies. The Computer Matching and Privacy Protection Act of 1988 amended the Privacy Act of 1974 to add procedural requirements for agencies to follow when conducting computer matching. 
For example, agencies must provide matching subjects with opportunities to receive notice and to refute adverse information before having a benefit denied or terminated. Agencies must establish data protection boards to oversee the data matching activities. Exceptions to the disclosure requirements are possible but require a series of due process steps designed to validate the debt and offer the individual an opportunity to repay it. In commenting on this report, OMB officials told us that OMB prefers removing statutory barriers only when appropriate privacy safeguards are in place. Benefit or entitlement programs operated by the federal government in partnership with state or local governments or private intermediary organizations are particularly vulnerable to improper payments. Generally, the federal government provides broad statutory and regulatory guidelines as well as all or a part of the program funding, while the other entities manage the day-to-day program operations. As such, federal agencies must depend on state, county, and local officials and other entities to ensure that eligibility requirements are met and that benefit amounts are determined correctly. Further, these third-party organizations that manage federal programs often have little incentive to ensure that the right amounts go to the right individuals. Medicaid is the primary source of health care for 34 million enrollees, or about 12 percent of the U.S. population. In fiscal year 2000, federal and state Medicaid outlays totaled $207 billion—of which $119 billion represented federal expenses. Medicaid legislation provides states with a variety of options for program administration. They can elect to administer the program at the state or county level, and they can operate fee-for-service programs, managed care programs, or some combination of the two. States may also elect to operate their claims-processing systems directly or contract with private vendors. The variety and complexity of the state Medicaid programs provide challenges for federal oversight. CMS assists interested states in developing methodologies and conducting pilot studies to measure and ultimately reduce improper payments. However, according to CMS officials, only a limited number of states are interested in participating in these studies since they believe that measuring improper payments could lead to penalties against states based on their error rates. There are, however, some promising activities. Some states are devoting more resources to program integrity activities than they had previously and are obtaining more sophisticated computer analytic capacity to review payment trends and spot improper billing. Still others are implementing stricter health care fraud and abuse control laws and policies. HUD officials also face the problem of third-party management of a federal program and the lack of a financial benefit or other incentive to encourage the POAs to minimize improper payments. For example, HUD’s public housing programs are operated by over 3,000 PHAs, which operate under state and local laws but are funded by HUD. Initial rent determination is based on reported income levels. HUD officials stated that PHAs have little incentive to protect the interests of the government when determining the tenant benefit amount since it is easier to collect payments from HUD than from tenants. Thus, PHAs have the incentive to keep the HUD payment portion as high as possible.
Each of the agencies visited processes a large number of payments and claims and emphasizes providing benefits to needy individuals and families as fast as possible. At these agencies, officials noted that speed of service issues coupled with resource constraints can result in improper payments. For example, CMS contracts with health insurance companies to process 890 million Medicare fee-for-service claims each year and SSA processes monthly payments to approximately 51 million individuals. Officials at these agencies stated that resource limitations hinder their ability to perform oversight and monitoring functions, such as site visits and documentation reviews, to ensure that payments are valid. USDA’s RMA expressed similar concerns. Private insurance companies administer the crop insurance for RMA. These companies are responsible for educating the agents who sell crop insurance policies and the parties that purchase the policies. Improper payments can result when crop producers misunderstand the policies or when they detect program vulnerabilities and intentionally misuse the system. RMA has less than two investigators per state and over 1 million policies nationwide, making compliance with laws, policies, and procedures difficult to monitor. Also at USDA, CCC officials stated that there is no time for second-party review of the over 2,300 county offices administering the programs because staff size has decreased while the number of programs has increased over recent years. Legislative, program design, and resource barriers represent serious obstacles to an organization’s ability to effectively manage improper payments and affect the amounts of improper payments occurring in federal programs. They can be significant inhibitors that departments must face, but which they often do not have the ability to eliminate through independent actions. Addressing these barriers will require coordination and cooperation between federal agencies, state and local organizations, the administration, and the Congress. The magnitude of improper payments reported in agency financial statements, GAO and OIG audit reports, and other documents over the past 3 years clearly demonstrates the need for a governmentwide effort to remedy this situation. Many individual agencies have taken measures to address their improper payments during this period, yet the total amount reported has remained fairly constant at around $19 billion to $20 billion. As we noted in our report on strategies to manage improper payments, high levels of improper payments need not and should not be an accepted cost of running federal programs. Identifying and implementing steps to reduce improper payments will likely be difficult, time consuming, and costly. While individual agencies must be responsible for their own programs and related improper payments, the collective efforts of agency management, the administration, and the Congress are necessary to attack improper payments on an agency and governmentwide basis to achieve greater results. Each of these organizational bodies brings different perspectives and expertise to the solutions process, which, when consolidated, can help reduce the governmentwide improper payment problem. Further, once committed to a plan of action, all parties must remain steadfast supporters of the end goals and their support must be transparent to all. 
Within federal agencies, the program, Chief Operating Officer (COO), CFO, Chief Information Officer (CIO), and IG offices have different missions and areas of responsibility. They also have the common goal of ensuring that federal programs and activities operate as effectively and efficiently as possible. Therefore, agencies would benefit by consolidating the program knowledge, expertise, and experience found in these various offices when developing and implementing controls to minimize improper payments. COOs are appointed by agency heads. They are responsible for providing overall organization management to improve agency performance. The COO has agencywide authority and reports directly to the agency head. COOs provide leadership such as overseeing efforts to improve financial management, which includes reducing improper payments. The agency CFO oversees the financial management activities relating to agency programs and operations. CFOs are responsible for providing complete, reliable, and timely financial information and for developing and maintaining integrated financial management and accounting systems related to financial reporting and internal controls. The information prepared by the CFO includes internal management reports and agency financial reports. Agency officials responsible for managing and controlling program operations need reliable and timely financial information, including improper payment data, to make operating decisions, monitor performance, and allocate resources. CFOs may identify and incorporate estimated improper payment disclosures into their agencies’ annual financial reports, which could promote transparency and help establish accountability. In addition, CFOs may be required to provide significant input for agency efforts in developing the improper payment information required by the recent revisions to OMB Circular A-11. CIOs are responsible for managing their agencies’ information technology resources. In addition to developing new systems, CIOs evaluate and monitor existing systems to determine if they meet agency needs. Many of the techniques for detecting improper payments, such as data sharing and data mining, rely on computerized information systems. Agencies’ computer-related activities must also be consistent with all protections of the Privacy Act of 1974, as amended by the Computer Matching and Privacy Protection Act of 1988, and other privacy statutes. Furthermore, inadequate computer systems can have a serious impact on agency efforts to minimize improper payments since agencies use a wide range of computer-assisted activities to address improper payments. These activities range from simple comparative analysis (e.g., comparing beneficiaries with mortality rolls) to sophisticated computer models for interactive analysis of large amounts of information. In addition, organizations use computer-generated information to obtain, summarize, and communicate information needed to evaluate program performance. When performing audits and investigations, OIGs develop information on and an understanding of agency internal control systems and detect fraud and errors involving agency programs and activities. OIG audits have historically identified instances of improper payments within agency programs.
For example, the HHS OIG identified $11.9 billion in overpayments for services in the Medicare fee-for-service program in fiscal year 2000 by selecting a sample of payments to providers and then reviewing the medical records that supported these payments. In addition, at the Department of Labor, an OIG investigation found that a claimant created 13 fictitious companies and submitted Unemployment Insurance claims for 36 fictitious claimants. Program managers are the agency’s first line of defense against improper payments. They manage their respective programs on a day-to-day basis and are the principal federal points of contact for program participants, such as state and local governments, that administer billions of dollars in federal program and grant funds annually. In performing their responsibilities to ensure that their respective programs operate as intended, they should become aware of the extent and causes of improper payments in their programs. Although the various offices cited above have different missions and areas of responsibility, they must work together and contribute to the successful management of improper payments. Central leadership within the agency is necessary to coordinate and consolidate the knowledge, skills, and abilities of these diverse entities. The COOs appear to be the logical choice to lead this effort due to the central management role played by this position within each federal agency. Identifying, measuring, preventing, and collecting improper payments are continuing processes for which interagency cooperation can identify practices and procedures that may prove effective governmentwide. As the President’s agent for managing and implementing policy, OMB issues guidance and oversees the administrative organization and operations of federal agencies. OMB’s staff draws on experience in many areas of government to challenge the thinking of other agencies, which often cannot see beyond their own programs. To promote information sharing across agencies, OMB leads and participates in interagency groups, such as the President’s Management Council (PMC), the Chief Financial Officers Council (CFOC), the Chief Information Officers Council (CIOC), and the President’s Council on Integrity and Efficiency (PCIE). These councils, which are further described below, are good sources of best practice information for both agencies and OMB to draw on when developing guidance on improper payment issues. OMB’s role in managing, implementing, and overseeing governmentwide administrative policy, its interagency perspective, and its leadership role on the various interagency councils make it a key player in the government’s effort to reduce improper payments. The following table summarizes the agencies that are members of each council. Based on its charter, the PMC’s membership consists of the Deputy Director of OMB, the Director of OPM, the COOs from the agencies listed in table 3, and other officials. Some of PMC’s responsibilities include implementing the President’s Management Agenda, Fiscal Year 2002, coordinating management-related efforts to improve government throughout the executive branch, resolving interagency management issues, ensuring the adoption of new management practices in agencies, and identifying and sharing examples of best management practices. 
PMC also seeks advice and information, as appropriate, from federal agencies and considers the management reform experiences of corporations, nonprofit organizations, state and local governments, government employees, public sector unions, and customers of government services. The CFOC was established under the provisions of the CFO Act of 1990 to improve financial management in the federal government. Its membership consists of the CFOs and deputy CFOs of the largest agencies along with the senior officials of OMB and Treasury, and it is chaired by the Deputy Director for Management, OMB. The CFOC recently established an Erroneous Payments Committee. The committee convenes to discuss and develop methods to address improper payments made by federal agencies. The CIOC was established in July 1996 by Executive Order 13011 as a governmentwide body to address crosscutting information technology issues. CIOs and deputy CIOs of the 28 largest federal agencies, two CIOs representing the smaller federal agencies, and other OMB and advisory members make up the council’s membership, under the leadership of OMB’s Deputy Director for Management. The council was established to improve agency practices on information technology matters such as the design, modernization, use, sharing, and performance of agency information resources. It also facilitates intergovernmental approaches for using information resources to support common operational areas such as reducing improper payments. For example, it could assist interagency efforts to compare payment information to ensure that initial eligibility of individuals for benefits is determined correctly or to determine whether improper payments have already been made. The PCIE primarily consists of the presidentially appointed IGs and is chaired by the Deputy Director for Management of OMB. Its mission includes addressing integrity, economy, and effectiveness issues that transcend individual government agencies. The council conducts interagency audits, inspections, and investigations to promote economy and efficiency in federal programs and operations, and addresses governmentwide issues of fraud, waste, and abuse, including improper payments. PCIE and CFOC have recently established a joint working group to address improper payments. The working group is carrying out several tasks, including preparing a report that defines its position on mitigating and managing improper payments; preparing a critique of the effectiveness of the differing processes used to determine improper payment rates; preparing a set of indicators that can be used to effectively represent the nature and extent of the problem of improper payments; preparing guidance to ensure sufficient oversight and monitoring, and adequate eligibility controls and automated systems for agencies experiencing improper payment problems; and developing a proposal on funding the administrative costs associated with activities related to improper payments. Within these groups, OMB draws together operational, financial, information technology, procurement, and other experts from across the government to establish governmentwide goals in their areas of expertise and to marshal the resources within individual agencies to improve government performance. By drawing together representatives from these various councils, OMB can provide leadership and build on council members’ combined knowledge, skills, and abilities and work with them to develop systems and perform other actions to reduce improper payments.
Collectively, these organizations can achieve more than they can by working alone. The Congress can further agency efforts to reduce improper payments by using its appropriation, authorization, and oversight responsibility to continue to demonstrate a leadership role and by helping to ensure that agencies are held accountable for meeting performance goals. The Congress reviews and determines federal financial priorities. Through the appropriations process, it has the opportunity to review recent expenditures in detail. Specifically, the Congress can use its appropriations authority to assist agencies in setting financial priorities that support identifying, reducing, and collecting improper payments. For example, SSA’s fiscal year 2003 budget proposes $1.05 billion for ensuring that only those who remain disabled continue receiving benefits and for assessing whether SSI recipients continue to meet the financial eligibility requirements. In considering this budget request, the Congress can help set priorities and expectations for specific program outcomes. The Congress also reviews the actions taken and regulations formulated by departments and agencies to make certain that program officials execute laws according to congressional intent. Therefore, it can determine whether the public’s needs are adequately served by federal programs, and thus lead corrective action through legislation or administrative changes. For example, in the Budget of the United States Government, Fiscal Year 2003, the President proposes a legislative change to allow IRS to match the income reported on student aid applications with tax return data. According to the budget, this action could help reduce improper payments in the Department of Education’s student aid programs, resulting in an estimated $138 million savings in 2003. Congressional oversight committees investigate alleged instances of poor administration and fraud, waste, and abuse that could result in improper payments in federal programs. On July 9, 2002, the House of Representatives passed the “Improper Payments Information Act of 2002” (H.R. 4878). This legislation is currently before the Senate for consideration. This bill imposes more stringent requirements for improper payment review and reporting than are currently required by the President’s Management Agenda, Fiscal Year 2002, and OMB Circular A-11. Specifically, it requires that agency heads review all programs and activities that they administer, identify those that may be susceptible to improper payments, estimate the annual amount of improper payments, and, where estimated improper payments exceed the lesser of 1 percent of the total program budget or $1,000,000 annually, report on actions the agency is taking to reduce improper payments. On the other hand, the President’s Management Agenda, Fiscal Year 2002, and OMB Circular A-11 apply only to large-dollar programs. Further, most federal agencies and programs are subject to regular and frequent reauthorization. As a consequence of these oversight efforts, the Congress can abolish or curtail obsolete or ineffective programs by cutting off or reducing funds. Conversely, the Congress may enhance effective programs by increasing funds or reducing legislative barriers to agency actions to better control improper payments.
The extent of governmentwide improper payments is not known but is likely to be billions of dollars more than the approximately $19 billion to $20 billion reported annually in agency financial statements over the past 3 years. Current requirements and guidance do not require or offer a comprehensive approach to measuring improper payments, developing and implementing corrective actions, or reporting on the results of the actions taken. Measuring improper payments and designing and implementing actions to reduce or eliminate these payments are not simple tasks. However, as evidenced by the actions taken by USDA, HUD, HHS, and SSA, federal agencies can perform them, and these actions can result in reductions in improper payment rates. Determining payment error rates is important to ensure program integrity. In addition, the administration and the Congress have taken important steps to address improper payments. For example, the President’s Management Agenda, Fiscal Year 2002, and OMB’s revisions to Circular A-11 demonstrate the administration’s interest in and plans to address improper payments across the government. Both documents call for OMB to work with agencies to establish goals and action plans to reduce improper payments. The agenda and the revisions to the circular are important first steps. The administration must now take all necessary actions to ensure that federal agencies meet the requirements set forth in those documents. In addition, through legislation, the Congress has provided resources for anti-fraud and abuse activities and has given agencies the authority to impose penalties and take actions to keep dishonest recipients from further program participation. Legislative initiatives such as these are critical to governmentwide actions to reduce improper payments and demonstrate that the Congress is willing to take actions to address improper payments. As stated in the Budget of the United States Government, Fiscal Year 2003, “The Administration cannot improve the federal government’s performance and accountability on its own. It is a shared responsibility that must involve the Congress.” Few agencies publicly report improper payment information such as improper payment rates, causes, and strategies for better managing their programs to reduce or eliminate these payments. This is evidenced by the fact that publicly available documents such as annual agency financial statements and the performance plans required by GPRA contain minimal information on the extent of improper payments, the actions taken by agencies to address them, and the impact or results of those actions on improper payment levels. OMB Circular A-11 requires that 16 agencies report improper payment information, including error rates and target rates for improvement, but that information is not publicly reported and, therefore, the Congress, the public, and others with oversight and monitoring interests cannot use this information to hold agencies accountable for achieving target rates or otherwise implementing specifically planned actions. On a case-by-case basis, agencies’ abilities to control improper payments can be hindered by legislative, program design, and resource barriers. These barriers can hamper the design and implementation of actions to prevent, detect, and mitigate improper payments.
Reducing or eliminating some of these barriers may not be feasible without legislative or program design changes that could significantly alter federal program missions or the methods used to achieve the program goals and objectives established by the Congress and the administration. Yet it must be recognized that, barring actions in these areas, these barriers will continue to restrict an agency’s ability to address all of its improper payment problems. As we noted in our report on strategies for managing improper payments, significant progress in minimizing improper payments can only occur as a collaborative governmentwide effort. The government’s reduction of improper payments will only be achieved as a result of the design, development, and implementation of better internal controls. These efforts will require strong support and active involvement from agency management, the administration, and the Congress. Once committed to a plan of action, all parties must remain involved and committed to the end goals and their support must be transparent to all. Agency management, the administration, and the Congress must work together to identify and implement effective controls to reduce improper payments. The mechanisms already exist for this to happen. Agency experts in financial matters, information systems, and general management issues; governmentwide councils under OMB’s direction; and the Congress each provide valuable resources that could be useful in addressing the government’s improper payment problems. Individually, each can have an impact; collectively, they can achieve more by sharing experiences and practices and working together to address improper payment problems. The head of each CFO Act agency should assign responsibility to a senior official, such as the COO or the CFO, for establishing policies and procedures for assessing agency and program risks of improper payments, taking actions to reduce those payments, and reporting the results of the actions to agency management for oversight and other actions as deemed appropriate. These responsibilities should include, but not be limited to, developing detailed action plans to determine the nature and extent of possible improper payments for all agency programs and/or activities spending federal funds; identifying cost-effective control activities to address the identified risks; assigning responsibility for specific areas of improper payment-related activities to appropriate program or activity officials; establishing improper payment goals or targets and measuring performance against those goals to determine progress made and areas needing additional actions; developing procedures for working with OMB and the Congress to address barriers encountered that inhibit actions to reduce improper payments; and periodically reporting, through publicly available documents, to the agency head, OMB, and the Congress on the progress made in achieving improper payment reduction targets and future action plans for controlling improper payments. We recommend that the Director of OMB take the following actions. Develop, as a result of interactions with agency officials and through participation on interagency groups, information on lessons learned and best practices that federal agencies have used to address their improper payment problems.
Once developed, OMB should issue specific guidance, as we have previously recommended, to agencies that provides a comprehensive approach to reducing improper payments, including providing the transparency in reporting that is crucial to addressing this problem. Work with agency officials to provide all reasonable assistance in implementing the corrective action plans developed to reduce improper payments. Work with agency officials to identify and help eliminate or reduce, to the extent practicable, the barriers that restrict agency actions to reduce improper payments. OMB should work with the agencies in clearly defining and evaluating these barriers and in assisting agencies in eliminating them. Work with the Congress to identify and develop actions to reduce or eliminate, to the extent practical, barriers that hinder agency actions to reduce improper payments. Require federal agencies to report the information called for by OMB Circular A-11 on improper payments in a specific, publicly available document such as annual performance reports, annual agency financial statements, or other annual report. All agencies should report this information in the same document to facilitate oversight and monitoring by interested parties including the Congress and the public. The Congress should consider using available improper payment information to engage agencies in discussions about progress that is being made, additional steps planned, and actions the Congress can take to help reduce improper payments. When, based on these discussions, the congressional actions necessary to eliminate barriers to agency corrective action are identified, the Congress should consider taking the legislative and oversight actions necessary to provide the agencies and the administration with tools needed to reduce improper payments, both at the agency and governmentwide levels. In commenting on this report, HHS, HUD, SSA, and OMB noted that they had actions in progress or that were completed that addressed our recommendations or that agency units supported the essence of the topics covered by the report. Each of these organizations and USDA also provided technical comments and other editorial suggestions for our consideration. We considered all comments and made changes to the report, as appropriate. HHS, HUD, and SSA provided written comments to our draft report. USDA provided comments via e-mail and OMB provided its comments orally. (The written comments from HHS, HUD, and SSA are reprinted in appendixes IV through VI, respectively.) In oral comments, OMB generally agreed with the report’s findings. OMB also stated that it believes its current focus on improper payments will address the majority of the concerns the report raises. OMB considers the recommendations in the report to already be in place, since the President has made addressing and reducing improper payments a priority in his management agenda and the Chief Financial Officers Council has established an Erroneous Payments Committee to address the problem. The President’s focus on improper payments, OMB’s leadership in this area, and the administration’s efforts to date are positive steps to ultimately addressing the serious problems in this area. At the same time, agencies still face significant challenges in identifying and measuring their improper payments, setting performance goals, implementing corrective actions, and reporting the results against the goals. 
Fully implementing our recommendations will be important to addressing the underlying internal control problems agencies face in reducing improper payments. In written comments (reprinted in app. IV), HHS stated that CMS is already implementing the recommendations of the report and is in the process of designating a senior official to oversee the identification, correction, and reporting of improper payments, as we recommended. Furthermore, CMS has undertaken a number of efforts to better manage all of its financial management systems. The comments also suggested technical revisions and clarifications, which we considered and included in the report, where appropriate. HUD generally agreed with the report’s conclusions and recommendations. Its comments (reprinted in app. V) stated that strengthening management controls and reducing improper payments are priorities for HUD’s administration. HUD further indicated that, as acknowledged in the draft report, it has already initiated corrective actions to strengthen management controls and reduce improper payments in the rental housing assistance program area. Its comments also identified several revisions and technical or editorial issues. We considered these issues and included them in the report, as appropriate. SSA’s comments (reprinted in app. VI) stated that each of its components, directly or indirectly, supports the essence of the topic of our report—reducing improper payments. Its efforts involve collaboration between SSA components, data match partners, OMB, and the Congress. The comments also noted that the Deputy Commissioner of SSA (Chief Operating Officer) has overall responsibility for addressing the responsibilities outlined in our recommendations to the federal agencies. They also provided information on the improper payment efforts of SSA units other than those included in our review and provided suggested revisions and clarifications to the report. We considered these suggestions and included them in the report, as appropriate. USDA responded via e-mail. The comments provided several editorial and/or clarification points which we considered and included in the report, as appropriate. We are sending copies of this report to the Chairman, Senate Committee on Governmental Affairs, and the Chairmen and Ranking Minority Members of the House Committee on Government Reform, Senate Committee on the Budget, and House Committee on the Budget. We will also send copies to the Director of the Office of Management and Budget and the heads of the CFO agencies and components required to prepare financial statements and their respective agency CFOs and OIGs. Copies will also be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Sally E. Thompson, Director, Financial Management and Assurance, who may be reached at (202) 512-9450 or by e-mail at thompsons@gao.gov if you or your staff have any questions. Staff contacts and other key contributors to this report are listed in appendix VII. Appendix listing for the Department of Housing and Urban Development: Low Income Public Housing; Section 8 Tenant Based; Section 8 Project Based; and Community Development Block Grants (Entitlement Grants, States/Small Cities). The following lists the GAO products that addressed the status of CFO Act agency actions to achieve key outcomes and address major management challenges. U.S. General Accounting Office. U.S.
Agency for International Development: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-721. Washington, D.C.: August 17, 2001. U.S. General Accounting Office. Department of Agriculture: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-761. Washington, D.C.: August 23, 2001. U.S. General Accounting Office. Department of Commerce: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-793. Washington, D.C.: June 15, 2001. U.S. General Accounting Office. Department of Defense: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-783. Washington, D.C.: June 25, 2001. U.S. General Accounting Office. Department of Education: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-827. Washington, D.C.: June 29, 2001. U.S. General Accounting Office. Department of Energy: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-823. Washington, D.C.: June 29, 2001. U.S. General Accounting Office. Environmental Protection Agency: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-774. Washington, D.C.: June 15, 2001. U.S. General Accounting Office. Federal Emergency Management Agency: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-832. Washington, D.C.: July 9, 2001. U.S. General Accounting Office. General Services Administration: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-931. Washington, D.C.: August 3, 2001. U.S. General Accounting Office. Health and Human Services: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-748. Washington, D.C.: June 15, 2001. U.S. General Accounting Office. Department of Housing and Urban Development: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-833. Washington, D.C.: July 6, 2001. U.S. General Accounting Office. Department of the Interior: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-759. Washington, D.C.: June 15, 2001. U.S. General Accounting Office. Department of Justice: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-729. Washington, D.C.: June 26, 2001. U.S. General Accounting Office. Department of Labor: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01- 779. Washington, D.C.: June 15, 2001. U.S. General Accounting Office. NASA: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-868. Washington, D.C.: July 31, 2001. U.S. General Accounting Office. National Science Foundation: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-758. Washington, D.C.: June 15, 2001. U.S. General Accounting Office. Nuclear Regulatory Commission: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-760. Washington, D.C.: June 29, 2001. U.S. General Accounting Office. Office of Personnel Management: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-884. Washington, D.C.: July 9, 2001. U.S. General Accounting Office. Small Business Administration: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-792. Washington, D.C.: June 22, 2001. U.S. General Accounting Office. 
Social Security Administration: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-778. Washington, D.C.: June 15, 2001. U.S. General Accounting Office. Department of State: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-02-42. Washington, D.C.: December 7, 2001. U.S. General Accounting Office. Department of Transportation: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-834. Washington, D.C.: June 22, 2001. U.S. General Accounting Office. Department of the Treasury: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-712. Washington, D.C.: June 15, 2001. U.S. General Accounting Office. Veterans Affairs: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-752. Washington, D.C.: June 15, 2001. In addition to those named above, the following individuals made important contributions to this report: David Elder, Bonnie McEwan, and Tarunkant Mithani. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.
The judiciary and GSA are responsible for managing the multibillion-dollar federal courthouse construction program, which is designed to address the judiciary’s long-term facility needs. AOUSC, the judiciary’s administrative agency, works with the nation’s 94 judicial districts to identify and prioritize needs for new and expanded courthouses. Since fiscal year 1996, AOUSC has used a 5-year plan to prioritize new courthouse construction projects, taking into account a court’s need for space, security concerns, growth in judicial appointments, and operational inefficiencies that may exist. The Design Guide specifies the judiciary’s criteria for designing new court facilities and sets the space and design standards for courthouse construction. First published in 1991, the Design Guide has been revised several times to address budgetary considerations, technological advancements, and other issues. GSA has been using AOUSC’s 5-year plan since fiscal year 1996 to develop requests for both new courthouses and expanded court facilities. GSA also prepares feasibility studies to assess various courthouse construction alternatives and serves as the central point of contact with the judiciary and other stakeholders throughout the construction process. For courthouses that are selected for construction, GSA prepares detailed project descriptions called prospectuses. The prospectus includes the justification, location, size, and estimated cost of the new or annexed facility. GSA typically submits two prospectuses to Congress to request authorization and funding. The first prospectus, often called the site and design prospectus, outlines the scope and estimated costs of the project at the outset and typically requests authorization and funding to purchase the site for and design of the building. The second prospectus, often called the construction prospectus, outlines the scope and estimated costs of the project as it enters the construction phase and typically requests authorization and funding for construction, as well as additional funding if needed for site and design. At the request of Congress or when additional authority and funding are required, GSA may also provide additional prospectuses or fact sheets that contain the project’s estimated total cost. GSA requests funding for courthouses as part of the President’s annual budget request to Congress. Once Congress authorizes and appropriates funds for the project, GSA refines the project budget and selects private-sector firms for the design and construction work through a competitive procurement process. GSA also manages the construction contract and oversees the work of the construction contractor. If disputes arise between GSA and the contractor that cannot be resolved, the contractor has the option of filing a claim against the federal government. Figure 1 illustrates the process for planning, approving, and constructing a courthouse project. GSA and the judiciary have implemented a number of initiatives since fiscal year 1995 to improve the management of the courthouse construction program. These initiatives are consistent with leading practices that we have recognized in prior reports, including the use of project management tools and communication with stakeholders. To improve comprehensive planning, the judiciary implemented an annually updated 5-year plan to prioritize its courthouse projects and revised its Design Guide to include new criteria intended to encourage cost consciousness. 
In 1995, GSA established the Courthouse Management Group, which was reorganized in 2003 as the Center for Courthouse Programs (CCP), to serve as a central point of contact for the judiciary, GSA’s field offices, the Office of Management and Budget (OMB), and Congress. CCP’s responsibilities include reviewing and finalizing prospectuses before they are submitted to OMB, developing cost benchmarks and comparing new projects’ cost estimates with these benchmarks, and determining whether proposed courthouse designs conform to the Design Guide’s standards. GSA also established three programs—the Project Management Center of Expertise and the Design and Construction Excellence programs—to share project management innovations and provide opportunities for peer review during the design and construction phases. To provide for accountability and oversight throughout a project, GSA uses a benchmarking system at the start of the design process to develop the first estimate of the project’s construction cost. This system computes the estimated cost of the building by comparing it to similarly sized courthouses and adjusts for differences in local market conditions and the number of years expected to complete the project. The benchmark is used to estimate the construction costs that will be submitted in the prospectus. To help ensure that courthouse projects can be built within authorized budgets, GSA develops independent cost estimates for each new courthouse at three milestone dates—during the preliminary planning, design development, and construction document phases. GSA also facilitates stakeholders’ involvement, another recognized leading practice, by encouraging regular partnership meetings between the judiciary and GSA and by using courtroom mock-ups to encourage greater judicial feedback on the design of the courthouse facilities. Since the courthouse construction program began in the early 1990s, budgetary constraints faced by GSA and the courts have affected the program’s progress, putting some planned courthouse construction projects on hold for extended periods. In response to recommendations by the 1993 National Performance Review, GSA initiated a “time-out and review” of all prospectus-level new construction projects, including courthouse projects, in 1993 and 1994. During this time-out and review, GSA reevaluated the costs of new construction projects to ensure that proposed projects were justified and cost effective and that alternatives had been adequately considered. Funding requests for courthouse projects were not included in the President’s budget in 4 of the last 10 fiscal years (1998, 1999, 2000, and 2004). Congress did not provide funding for courthouse projects in fiscal years 1998 and 2000. Most recently, in September 2004, the Judicial Conference adopted a 2-year courthouse construction moratorium on planning, authorizing, and budgeting courthouse construction projects. This moratorium affects 42 out of the 57 projects listed on the judiciary’s 5-year plan. According to judiciary officials, the moratorium was necessary to seek remedies for its own budgetary shortfalls, resulting in part from the increase in the total rent it pays to GSA for the space it occupies. According to the judiciary, rent currently accounts for just over 20 percent of its operating budget and is expected to increase to over 25 percent of its operating budget in fiscal year 2009 when the costs of new court buildings already under way are included. 
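GSA's benchmarking approach, described above, can be illustrated with a rough, purely hypothetical sketch: average the cost per square foot of similarly sized courthouses, adjust for local market conditions and for cost escalation over the years until construction, and apply the result to the planned building size. The comparable projects, location factor, and escalation rate below are invented and do not reflect GSA's actual benchmarking model.

```python
# Rough sketch of a comparables-based construction cost benchmark. The
# comparable projects, location factor, and escalation rate are invented
# for illustration and do not reflect GSA's actual benchmarking model.
comparable_courthouses = [
    {"gross_sq_ft": 250_000, "construction_cost": 75_000_000},
    {"gross_sq_ft": 300_000, "construction_cost": 96_000_000},
    {"gross_sq_ft": 275_000, "construction_cost": 88_000_000},
]

def benchmark_estimate(planned_sq_ft, location_factor, annual_escalation, years_to_midpoint):
    """Estimate construction cost from comparable projects' cost per square foot,
    adjusted for local market conditions and escalation to the construction midpoint."""
    cost_per_sf = [c["construction_cost"] / c["gross_sq_ft"] for c in comparable_courthouses]
    base_cost_per_sf = sum(cost_per_sf) / len(cost_per_sf)
    adjusted = base_cost_per_sf * location_factor * (1 + annual_escalation) ** years_to_midpoint
    return planned_sq_ft * adjusted

estimate = benchmark_estimate(planned_sq_ft=280_000, location_factor=1.10,
                              annual_escalation=0.03, years_to_midpoint=3)
print(f"Benchmark construction estimate: ${estimate:,.0f}")
```

Benchmark estimates of this kind feed the construction costs submitted in GSA's prospectuses. The judiciary's 2004 moratorium, described above, has put many of the projects that would otherwise move through this estimating and budgeting process on hold.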
During this moratorium, AOUSC officials said that they plan to reevaluate the courthouse construction program, including reassessing the size and scope of projects in the current 5-year plan, reviewing the Design Guide’s standards, and reviewing the criteria and methodology used to prioritize projects. Judiciary officials also said that they plan to reevaluate their space standards in light of technological advancements and opportunities to share space and administrative services. The actual costs for courthouse construction projects completed since fiscal year 1998 varied from the estimates provided to Congress at the design and construction phases. As expected, the variation was greater, on average, for the design phase estimates than for the later, more refined construction phase estimates. For many projects, the estimated cost and proposed building size changed between the design and construction phases, but GSA often did not indicate that these changes had occurred or explain the reasons for them in the prospectuses and fact sheets it supplied to Congress. As shown in figure 2, GSA provided Congress with at least two separate total cost estimates for 24 of the 38 projects. Of the remaining 14 projects, GSA provided Congress with a single, total-cost estimate for 11 of the projects and provided no formal estimate prior to the appropriation of funds for 3 of the projects. The single estimates were for design-build projects, for which all funding was requested at one time, or for projects for which GSA did not provide a total project cost estimate when requesting funds for design. In all, for the 38 projects we reviewed, GSA provided Congress with a total project cost estimate at the design phase for 27 projects and at the construction phase for 32 projects. The project cost estimates that GSA provides to Congress typically include all costs associated with acquiring the site and designing and constructing the courthouse. These estimates do not include estimates for items that the tenant agencies fund for the new courthouse, such as space alterations above the standard normally provided by GSA. For the 27 projects that had a total cost estimate provided to Congress at the design phase, the actual cost, including claims, exceeded the estimate by an average of 17 percent and ranged from 23 percent below to 115 percent above the estimate. The actual cost compared more favorably with the estimate at the construction phase, which is provided to Congress an average of 2 years after the initial design phase estimate. This improved accuracy is expected because more information is available to estimate the cost of the project as its design moves forward and becomes more fully defined. For the 32 projects that had a total cost estimate provided to Congress at the construction phase, the actual cost exceeded the estimate by an average of 5 percent and ranged from 25 percent below to 52 percent above the estimated cost. The actual cost exceeded this estimate by more than 10 percent for 9 of these 32 projects. The construction industry commonly uses 10 percent as a benchmark for the expected variance between the actual cost and the construction estimate. Figure 3 illustrates the numbers of projects whose actual costs fell short of or exceeded estimates at both the design and construction phases. See appendix II for additional details on the estimated project costs provided to Congress and the actual costs for all projects we examined. 
The Public Buildings Act of 1959, as amended, requires GSA to seek approval from Congress if the estimated maximum expenditures for a project exceed the amount appropriated for the project by more than 10 percent. We found that GSA obtained approval from Congress for those projects where the estimated cost exceeded appropriated amounts by more than 10 percent. Project cost estimates and proposed building sizes often changed between GSA’s submissions to Congress of the design and construction phase documents. According to prospectuses and fact sheets provided to Congress, 29 of the 38 projects experienced changes in cost, building size, or both after the initial estimated project cost at design was submitted. As shown in figure 4, for 16 projects, both the proposed building size and the estimated project cost changed between the design phase and the construction phase. For another 7 projects, only the cost estimate changed, and for the other 6 projects, the building size changed; but only one estimate was provided to Congress, so we could not determine if the estimated cost changed. The building size changes ranged from small additions or subtractions of tenant space to substantial changes in overall square footage. For example, the proposed size of the Hammond, Indiana, courthouse increased by nearly two-thirds because the need for additional space was identified during a long-range planning process initiated after the initial funding request was submitted to Congress. By contrast, the proposed size of the Omaha, Nebraska, courthouse was reduced by approximately 11 percent after funds were requested for design because of a re-evaluation of construction projects completed as part of GSA’s time-out and review process. As we have explained in a previous report on funding capital projects, an important factor reinforcing the decision-making process is the availability of good information. Although changes in the proposed building size and estimated cost of a project during its design phase are not unexpected, GSA did not consistently identify or explain project changes in the prospectuses and fact sheets it submitted to Congress. For 17 of the 29 projects that changed after the design phase, no description or explanation of the change was provided in later construction phase documents submitted to Congress. In some cases, significant changes in building size, estimated cost, or both were not explained. For example, a comparison of documents submitted at the design and construction phases for the Jacksonville, Florida, courthouse shows a total estimated cost increase of over $11 million (13 percent) and an increase in total building size of approximately 9,000 square feet (2 percent), yet these changes are not described or explained. Similarly, an increase in the estimated unit construction cost of $55 per square foot—which increased the estimated total costs by over $3 million (11 percent) for the Greeneville, Tennessee courthouse—was not explained in the fact sheet provided to Congress. By contrast, GSA fully explained the reasons for a nearly 40-percent decrease in the proposed size of the Youngstown, Ohio, courthouse, along with a 27-percent decrease in the project’s estimated cost. For the Tucson, Arizona, courthouse, GSA submitted a fact sheet describing numerous changes made in building size and estimated costs since the initial funding request, but it did so in response to a congressional staff request. 
For the seven projects we reviewed in detail, changes in scope and the postponement of planned construction start dates resulted in differences between estimated and actual project costs. Several factors contributed to changes in scope, including issues associated with site selection, historic preservation requirements, changes in tenants’ requirements, and the need for additional security after the Oklahoma City bombing. Depending on circumstances unique to each project, some changes increased, while other changes decreased, the project’s total costs. Postponing the start of construction for five of the seven projects increased their cost because of inflation, since GSA’s project cost estimates are based on an expected construction start date. The actual costs for the seven projects we reviewed in detail varied from 5 to 56 percent above the cost estimates provided to Congress at the design phase and from 2 percent below to 25 percent above the cost estimates provided at the construction phase. Table 1 compares the estimated costs with the actual costs for these seven projects. For each of the seven projects we reviewed, the scope changed and contributed to differences between the estimated cost provided to Congress and the actual cost. The term “scope” refers both to building size and to the amount of work or number of tasks required to complete the project. Factors that caused changes in scope included site selection issues, the need to address historic preservation requirements, changes in tenants’ requirements, and the need for additional security provisions. Although some scope changes changed both the building size and the amount of work to be done, other scope changes, such as those necessary to comply with historic preservation requirements and certain improvements requested by tenants, increased only the amount of work to be done. Table 2 identifies the factors that contributed to changes in scope for each of the projects we reviewed. Difficulties with finding and acquiring a site for a new courthouse increased the scope of two projects, adding to their costs. The scope of the Gulfport project increased when GSA faced community resistance to the preferred site and had to purchase a larger site and close a street to accommodate the new courthouse. According to GSA officials and the project files we reviewed, procuring the only site that was suitable and acceptable to the community required GSA to purchase more land than it had planned in order to accommodate the preservation of a historic high school building that was on part of the property. The site cost $3.63 million, or 94 percent, more than GSA had planned when it submitted its design funding request. The courthouse project included the preservation of an adjacent 1920s historic structure to form a courthouse campus. The historic structure houses the U.S. Attorneys and U.S. Probation Offices. In Seattle, GSA had to redesign the courthouse to include three more courtrooms when it could not locate the new courthouse adjacent to the existing courthouse as planned. Under the new plan, the circuit courtrooms remained in the existing courthouse building and the bankruptcy courts were included in the new building. This change was required after GSA was unable to reach an agreement with the city of Seattle on relocating the city library, which was located on the preferred site. 
The scope of two courthouse projects increased to provide for historic preservation work that GSA had not anticipated when it requested design funding for the projects. The original design concept for the Erie courthouse project called for the preservation and incorporation of a historic public library building into the courthouse design. According to GSA officials and the project files we reviewed, additional preservation work was required when an old clothing store on the site became eligible for historic status. Rather than demolish the store as originally planned, GSA incorporated it into the project design. This decision increased the project’s total cost by about $1.3 million. Procuring the Gulfport project site, as discussed above, was contingent on preserving a historic high school. This requirement increased the scope of work for both the design and construction phases because, as shown in figure 5, three of the old school’s four exterior walls had to be preserved. According to our analysis of GSA data, preserving the exterior walls and retrofitting a new structure within the walls of the old school increased the project’s design costs by 14 percent. Changes in tenants’ space requirements increased the scope of work for three of the projects we reviewed. The U.S. Marshals Service (USMS) provides security for the federal judiciary, including physical protection of courthouses and prisoner transport, and was a tenant in each of the courthouses we reviewed. The U.S. Attorneys Offices are also often located in courthouse facilities. In Cleveland and Seattle, the U.S. Attorneys Offices initially resisted relocation to the new courthouses because they preferred their current leased spaces. In addition, for the Seattle U. S. Attorneys, there were questions of whether they would have to move again at a later date as the courts’ space needs grew. In Denver, the USMS revised its plans for the amount of office space it would occupy in the new courthouse. For these three projects, GSA had to redesign space to meet the tenant agencies’ needs. According to GSA’s project manager, to preserve the Cleveland project’s schedule, the project moved forward after an agreement could not be reached with the U.S. Attorneys Office on the design of its space. The U.S. Attorneys determined that the original space was not large enough and received authorization from the Department of Justice for additional space in the courthouse. This change required five floors of the courthouse to be redesigned to meet the U.S. Attorneys Office’s requirements. In Seattle, according to the GSA project manager, the redesign effort was minimized because GSA anticipated the inclusion of the U.S. Attorneys in the courthouse and included an option in the construction contract to build out the required space. In Denver, the USMS revised its occupancy plan for the Denver courthouse during the design phase, prompting a redesign effort. The USMS decided to occupy office space in the new courthouse rather than remain in the existing, adjacent courthouse as planned. According to the GSA project manager, this change in the tenant’s requirements led to redesigning and allocating most of the third floor to the USMS. The courthouse is located on the edge of Cleveland's downtown commercial district and overlooks the Cuyahoga River. The courthouse is easily accessible to the public by an indoor pedestrian walkway that connects the courthouse to a local transit station and shopping mall. 
GSA now has a policy to obtain signed agreements from the tenant agencies specifying how much space they will occupy in a new building before construction begins. These agreements, called occupancy agreements, also specify the rent that the agencies will pay for their space. According to the project managers we spoke with, the occupancy agreements have helped tenant agencies understand the rent commitments they are entering into and have helped GSA resolve occupancy issues before starting construction. Enhancements made to building security required scope changes for four of the seven projects we reviewed. According to the GSA project managers for the Denver and Albany projects, these enhancements were made in response to the 1995 Oklahoma City bombing and reflected updates to the U.S. Marshals’ Design Guide. Thus, additional security features were added to those projects that were in design or under construction between 1995 and 1999. Changes to the design criteria for federal buildings increased the scope of work for the Las Vegas courthouse project and resulted in a claim against the project. This was the first federal courthouse designed after the Oklahoma City bombing. For security purposes, the building was reinforced with additional steel, increasing the project’s costs. In addition, because this was a new security design, the contractor had not correctly anticipated the amount of steel that would be needed and filed a claim to recoup the cost of the additional steel. According to the GSA project manager, the project’s budget was increased by $4.7 million to meet the enhanced security requirements; and after the construction was complete, the contractor was paid $3.2 million to settle its claim for the additional work and materials associated with blast proofing the exterior walls. Changes made to the U.S. Marshals’ Design Guide increased the costs of projects in Cleveland, Albany, and Denver. Among other things, these changes modified the type of materials used in the prisoners’ holding cells. Postponing the start of construction and changes in local market conditions contributed to changes in costs for five of the seven projects we reviewed. GSA had to postpone its schedule for starting construction on five projects. Of these five projects, two were built in highly competitive local construction markets whose volatility also contributed to increases in the projects’ costs. Local market conditions are driven by the supply of skilled construction labor, materials, and the relative number of construction projects within a locality. Courthouse construction takes place in a dynamic and constantly changing economic environment. Postponing construction schedules exposes a project to cost changes caused by annual inflation or deflation rates and increases the risk that the assumptions used to establish the project’s budget may not keep pace with changing local market conditions. Yet, even if construction is not postponed, the 2 years that typically elapse between the development of a prospectus and the actual funding of a project provide ample time for local market conditions to drift from the conditions assumed in developing the estimates in the first place. Thus, postponing construction schedules for reasons as diverse as the timing of appropriations or the judiciary’s current moratorium increases the probability that estimated and actual costs will diverge. 
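To illustrate how an escalation assumption interacts with a postponed construction start, the following simplified sketch applies a compound escalation rate to a benchmark-style estimate built from a comparable cost per square foot and a local market adjustment. This is not GSA’s benchmarking model; the building size, unit cost, market factor, escalation rates, and postponement period are hypothetical values chosen only to show the direction and rough magnitude of the effect.

```python
# Simplified illustration only -- not GSA's benchmarking system. The
# square footage, unit cost, market factor, escalation rates, and
# postponement period below are hypothetical assumptions.

def benchmark_estimate(square_feet, cost_per_sq_ft, market_factor,
                       annual_escalation, years_to_start):
    """Price a project from comparable unit costs, adjust for local market
    conditions, and escalate to the expected construction start date."""
    base_cost = square_feet * cost_per_sq_ft * market_factor
    return base_cost * (1 + annual_escalation) ** years_to_start

# Hypothetical 400,000-square-foot courthouse priced from comparables,
# assumed to start construction in 2 years with 3 percent escalation.
original = benchmark_estimate(400_000, 250, 1.05, 0.03, years_to_start=2)

# The same project if the start slips 2 more years and local escalation
# runs at 3.9 percent instead of the assumed 3 percent.
postponed = benchmark_estimate(400_000, 250, 1.05, 0.039, years_to_start=4)

increase = (postponed - original) / original * 100
print(f"Estimate at planned start:   ${original:,.0f}")
print(f"Estimate after postponement: ${postponed:,.0f} ({increase:.1f}% higher)")
```

Under these assumed values, a 2-year slip combined with escalation running about 1 percentage point above the benchmark assumption adds roughly 10 percent to the estimate before any change in scope.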
The Erie project illustrates the effect that not receiving funding when anticipated and postponing construction can have on a project’s costs. The design prospectus for the Erie project was submitted in March 1994. When a fact sheet was submitted in March 1999—5 years after the prospectus—the design concept had changed, as discussed earlier, increasing the scope of historic preservation work and adding to the design costs. Furthermore, appropriations for construction funding were not provided until fiscal year 2002. Primarily because of inflation and the scope increase, the project’s estimated total cost increased 59 percent in nominal dollars over the estimate provided in the 1994 prospectus. Construction on two projects, Gulfport and Seattle, was postponed as a result of site acquisition issues, as discussed earlier. In addition, according to the GSA project manager, the booming local construction market in Seattle contributed to increased project costs. The Seattle project also illustrates the uncertainty involved in anticipating local market conditions. GSA’s benchmark used an escalation factor of 3 percent to estimate construction costs, but the project manager said that the escalation in Seattle was closer to 3.9 percent. According to GSA’s project manager, the Denver courthouse also was constructed in a highly competitive economic environment that increased the project’s cost. The project manager said that, during the project’s development, Denver experienced a construction boom that caused construction prices to rise sharply and contributed to construction bids for the project that came in approximately $10 million over budget. Although one floor was removed from the design and other cost-saving measures were implemented, the persistent competition in the local construction market contributed to actual costs that were 6 percent higher than the estimated costs submitted with the construction funding request. Other factors that were unique to specific projects we reviewed also caused costs to change. For example, costs increased for the Denver project when GSA headquarters decided that the Denver courthouse project would serve as a demonstration project to showcase a number of sustainable design features, such as solar panels, light shelves, and automated heating and air-conditioning controls. These project changes increased the estimated cost of construction by $5 million. According to the Cleveland project manager, problems with contractors and a design error increased the actual costs of the project. Although the project was originally intended to use design-bid-build procurement, because of design delays, the construction schedule was divided into three phases, and construction started before the design was completed. When the contractor fell behind in the second phase, GSA followed the advice of its construction manager and became the general contractor for the final construction phase in an effort to avert potential claims arising from the second phase delays. GSA managed over 10 contracts in the final construction phase. According to the GSA claims attorney involved in the project, GSA’s taking on the role of general contractor accounted for the large number of claims paid on the project. GSA settled the claims for approximately $20.8 million, or 12 percent of the estimated total cost that was submitted with the construction prospectus. 
In addition, construction costs increased when a design error that underestimated the size of certain steel beams was corrected, and special beams had to be manufactured and imported from an overseas supplier. Finally, a general contractor’s inability to maintain the construction schedule and meet its obligations to building material suppliers caused the construction phase of the Albany project to be extended 3 years beyond its anticipated completion. Eventually, the general contractor’s surety company, which guaranteed the contractor’s ability to perform the work, took over the management of the project and brought it to completion. GSA still had to settle claims brought against the project by the contractor’s surety. Although GSA was able to limit the actual cost increase to 7.9 percent over the estimate submitted to Congress, the relatively small building took 5 years to construct. The courthouse was built on a site donated by the city of Albany as part of a downtown redevelopment project. The courthouse was designed and constructed by a partnership between GSA and Section 8(a) firms, which are socially or economically disadvantaged small businesses. Several project managers also noted the effect that GSA’s time-out and review initiative had on the early planning for the projects. The principal motivation of GSA’s time-out and review initiative was to cut costs, reevaluate priorities, and improve the management of the federal buildings program. For the courthouse construction program, GSA reevaluated priorities and trimmed the costs of existing projects, identifying savings of $324 million from 43 courthouse projects. For example, as a result of time-out and review, the estimated cost of the Cleveland project was reduced by $63 million or about 26 percent. However, in this project, much of the savings were not realized and had to be added back into the project during construction. In 1991, the judiciary issued the U.S. Courts Design Guide, which specified the judiciary’s criteria for designing new court facilities. The Design Guide provides specific guidelines for the size, design requirements, security, and other features of courtrooms, judges’ chambers, and other court-related space. Significant departures from the Design Guide criteria must be justified by the district courts and approved by the Circuit Judicial Council for the judicial circuit where the project is located. The Design Guide has been revised several times in response to economic constraints and is being reevaluated during the judiciary’s current moratorium to determine if additional revisions are appropriate. Departures from the Design Guide are often thought to increase courthouse project costs. However, we found few departures from the Design Guide in the projects we reviewed, and most of them were made to increase the building’s functionality. The project managers said none of the departures resulted in an increase in the building size. We were not able to quantify the costs associated with the departures, but according to the project managers, their impact on cost was minimal. In the Albany courthouse, ceilings were lowered by 1 to 2 feet, which reduced costs and allowed the magistrate judge courtrooms on the floor above to be built to the size of district courtrooms to meet future expansion needs. In Gulfport, the judiciary obtained approval to include a special proceedings courtroom in the new courthouse. 
These courtrooms are 600 square feet larger than a traditional district courtroom and are used for multidefendant trials or special events, such as naturalization ceremonies. In Cleveland, increases in the size of the grand jury suite and magistrate judges’ courtrooms were accommodated within the planned size of the building by reducing the size of other court spaces. For the seven projects we reviewed in detail, GSA project managers used several strategies to reduce costs and keep them within budget. These strategies included value engineering, modified contracting methods, and a variety of approaches for involving and communicating with tenant agencies. On the basis of estimates provided by GSA, Congress authorizes and appropriates funds for individual courthouse construction projects. GSA sets each project budget according to the appropriated funds and seeks to manage each project to the specified budget. For the seven projects we reviewed, GSA project managers used value engineering during the design phase to identify cost-saving changes and to reduce costs. Project managers also used value engineering as the primary method to reduce costs to meet the budget when the initial construction bids exceeded the project’s budget. Value engineering is an organized effort to analyze the functions of systems, equipment, and facilities for the purpose of achieving the essential functions at the lowest cost possible while maintaining performance, reliability, quality, and safety. Changes resulting from value engineering ranged from using less expensive materials than originally planned to making changes in scope that affected the features built into the courthouse. Some changes made as a result of value engineering permanently reduced building costs while other changes deferred costs to later years. In a commitment to continue cost reduction after the time-out and review process of the mid-1990s, GSA emphasized the use of value engineering as a method to reduce costs below the approved budgets. The Office of Management and Budget requires executive branch agencies to use value engineering as appropriate to reduce program and acquisition costs while maintaining necessary quality levels. For the projects we reviewed, GSA project managers generally hired outside consultants to perform value engineering studies during the design phase to identify potential areas for cost savings. Project managers used value engineering again for four of the seven projects, when the construction bids exceeded the project’s budgets. The estimated cost at construction or the construction bids exceeded the budget by $2 million to $16 million, or 6 to 18 percent, for these four projects. The project managers tasked the contractors that were bidding on the construction phase of the project to submit ideas for cutting costs. This approach allowed GSA to reduce the bids to within the budget without redesigning the building. Having to redesign the building, then going through another bidding process is time consuming; and as discussed earlier, starting construction later than planned can lead to cost increases. Many relatively small changes were often made as a result of value engineering to reduce projects’ costs. The most common change for all seven projects was substituting less expensive materials for more expensive materials that were originally called for in the design. For example, using commercially available products rather than custom-made materials lowered costs. 
These material substitutions often had no or minimal impact on the appearance and functionality of the building. For example, in two courthouse projects, wainscoting was used in place of full-height wood paneling. For the Seattle project, GSA removed the copper cladding from the roof after determining that its removal would not negatively affect the appearance or durability of the building. The court officials involved in the seven projects told us that they participated in the value engineering sessions and agreed with the changes to reduce the construction costs. These officials understood that there was a limited budget and made trade-offs to get the features they wanted the most. For example, in Las Vegas the judges agreed to reduce the amount of limestone used on the outside of the building so that they could keep wood paneling in the courtrooms. Other value engineering changes resulted in the elimination or reduction of spending on some features, such as building systems, to reduce projects’ costs. While these changes lowered the construction costs, some could increase future operating and maintenance costs. In Las Vegas, a window-washing platform was eliminated to save $250,000. According to the GSA building manager, it now costs about $30,000 to wash the courthouse’s windows, because special equipment is needed. As a result, the windows are seldom washed. For two projects, GSA eliminated the funds for heating and air-conditioning systems from the construction contracts and entered into energy savings and performance contracts (ESPC) to procure these systems. Under an ESPC, the contractor purchases and installs the heating and air-conditioning systems and GSA pays for the systems over the life of the contract, for as long as 25 years, from its operating budget. It is expected that the contractor will install a more energy-efficient system than would have been installed without the ESPC and that the cost of the system will be paid for from the savings attributed to a more efficient system. In new construction, energy savings are estimated using many assumptions about energy usage and costs, since there are no actual systems and costs on which to base estimates of expected savings. In December 2004, we reported that using ESPCs to install heating and air-conditioning systems is more expensive than funding the installation of such systems up front as part of the construction costs. In that review, we estimated that the use of an ESPC for the Gulfport Federal Courthouse might cost about $2.5 million, compared with about $1.6 million if the system had been installed as part of the construction. This is an increase of about 56 percent in the cost of the heating and air conditioning system. We found that GSA focused on reducing the construction costs, so that it could award the construction contract, rather than on the long-term cost implications of using an ESPC. On three projects, project managers identified the contracting method as a strategy they used to help control costs and keep the project on schedule. One project involved the construction contractor in the design phase of the project while another included incentive award clauses in the construction contract. The third project used versions of both of these approaches. GSA traditionally approaches a new construction project by designing the building and then soliciting bids to construct the building based on the design. This is referred to as the design-bid-build method of contracting. 
In this traditional method, the construction contractor is not involved in the design process and often has questions about the design, which can lead to changes during construction. To reduce the risk of changes during construction and accelerate the project’s schedule, the Las Vegas project manager used a design-build bridging contract method. Under this contracting method, the project began with a traditional design phase to develop the concept for the building. The concept design identified the basic structure of the building, including the layout of courtrooms and chambers on each floor. GSA then advertised for a contractor to complete the detailed building design and construct the building. The winning contractor was a joint venture between an architectural firm and a general contractor. This approach allowed construction to begin as soon as the design was completed, thus saving time and reducing the chances of the tenants’ requirements changing between the time of design completion and the start of construction. In addition, the architect and builder were with the same firm, so when issues came up during construction, each had an interest in arriving at solutions rather than finger-pointing and blaming each other. According to the project manager, as a result of this contracting method, relatively few changes were made on the project during construction. For the Gulfport courthouse project, GSA hired the general contractor during the design stage when the building’s design was only 35 percent complete. The project manager believed that involving the general contractor in the development of the design and construction documents would minimize the number of questions the contractor would have about the design and thus minimize the number of change orders. Change orders on a project may increase the time needed to construct the building and increase the cost of construction. The project manager believed that this was a successful approach because there were few questions about the design during construction and relatively few change orders due to design issues. GSA used construction contracts with incentive award clauses for the Gulfport and Seattle courthouses. The incentive awards required periodic reviews of the contractors’ performance throughout the projects, which ensured a certain level of communication. The project manager for the Seattle courthouse said that this method forced the stakeholders to communicate and address issues that, without the incentive award, might not have been addressed until the end of the project. The use of incentive awards is intended to increase communication and help control the projects’ overall costs. The contractor on the Gulfport project earned 85 percent of the incentive award, and the contractor on the Seattle project earned about 92 percent of the incentive award. GSA project managers and judiciary officials said the involvement of tenant agencies and open and continual communication with them on the projects we reviewed were important to the successful completion of the projects and to controlling their costs. Judges at each of the courthouses said the new buildings met their requirements, and they were all very happy with the new courthouses. GSA project managers used a variety of strategies, such as regular meetings and courtroom mock-ups, to identify changes prior to construction, to involve tenant agencies in planning the courthouse projects, and to keep them informed about the progress of the projects. 
The judiciary also generally hired its own project manager to oversee each of the projects and to facilitate communication between GSA and the judiciary. In addition, many of the project managers used a Web-based project management tool to facilitate communication among the construction contractors. Involving tenant agencies and incorporating their interests into a project, particularly during the planning stages, is one of the five components of a leading practices framework. Project managers agreed that working with tenant agencies to define their requirements and keeping tenants informed about the project were important to getting the agencies’ “buy-in” on the project and to minimizing changes during construction. Making changes during a project’s design to ensure that tenants’ requirements are met is generally less costly than making changes during construction. Leading practices in capital project management suggest frequent communication and involvement through such means as meetings and correspondence. GSA project managers and judiciary officials who represented the various courts’ interests said that judiciary officials were actively involved from the conception of the projects. Project documents show that other tenant agencies were also involved throughout the projects. While all of the project meetings included tenant agency representatives, GSA and judiciary officials said that using courtroom mock-ups and having a judicial project manager were important strategies used to facilitate communication. All project managers used courtroom mock-ups in which a full-size model of a courtroom was constructed and the judges and other courtroom participants evaluated the model for such things as sight lines and the placement of furniture. The courtroom mock-ups resulted in changes to courtroom designs, and, according to GSA, no major changes were required during or after the construction of the courtrooms to correct deficiencies. Thus, the courtrooms met the judges’ requirements, and costs were avoided by making necessary changes prior to construction. According to judiciary officials, there was open communication between the judiciary and GSA on six of the seven projects. These officials said that the collegial relationships they developed with GSA facilitated communication and allowed them to work together to control and, when necessary, to reduce costs in a constructive way. For these six projects, the judiciary had its own project manager, who interacted with GSA on a regular basis. The judges said that it was critical for them to have this project manager, who was knowledgeable about construction, and could advise the judges on suggested changes and facilitate communication with GSA. The USMS and the U.S. Attorneys Office were also major tenants in most of the courthouses we reviewed. According to GSA project managers, these agencies were involved in the projects to a lesser extent than the judiciary. USMS officials said that their level of involvement varied, depending on the project and the project manager involved. As discussed, some of the cost increases during construction resulted from the USMS’s and the U.S. Attorneys Office’s requirement changes. USMS changes primarily resulted from increases in security standards, which could not have been anticipated prior to construction. Changes involving the U.S. Attorneys Office more often resulted because of its decisions about moving to the new building. 
As noted, GSA’s policies and procedures have changed over the last several years, and GSA now requires tenant agencies to sign occupancy agreements prior to construction. Such agreements define the amount of space the tenant will occupy and the rental cost for the space. This policy should eliminate last-minute questions about which tenants will occupy the building and the amount of space they will occupy. Finally, GSA used commercially available Web-based project management tools for several of the projects to facilitate communication among the contractors. These tools facilitate communication by reducing paperwork; electronically assigning responsibility for tasks; tracking changes, questions, and answers; and providing all contractors with access to the same information as appropriate. For example, if a contractor has a question about the design of a particular building element, it can submit a question to the architect; and GSA can track the question and response to ensure that the question is resolved as quickly as possible. The Seattle project manager highlighted the importance of having clearly defined design and construction requirements. The manager said that in Seattle, he was able to reduce the construction bid by meeting with the subcontractors to answer their questions about the building requirements. If subcontractors do not fully understand an aspect of the design, they will build in additional costs to cover their risk. By clarifying the building requirements, GSA was able to reduce the subcontractors’ risk and thus reduce their bids on the project. During the last decade, GSA has implemented a number of initiatives to enhance and improve the performance of the courthouse construction program. Among these initiatives are enhancements to the benchmarking system, the use of courtroom mock-ups, and the ongoing development of project management practice through the Project Management Center of Expertise. GSA’s Center for Courthouse Programs is also conducting independent cost estimates and quality control reviews at three points during the design phase of projects to help ensure that courthouse projects can be built within budget and that the quality of the buildings is not sacrificed to stay within budget. While the results of some of these initiatives were apparent in the seven projects we reviewed, such as with the courtroom mock-ups, the effects of the more recent efforts to enhance the program are not captured in our data collection. This situation occurred because many of the projects in our universe were already fairly advanced by the time the more recent initiatives were introduced. Courthouse construction is a process that evolves over many years and includes multiple stakeholders. Many factors can affect the cost of a courthouse project as it moves from planning and design to construction. Our work showed that the most significant cost changes occurred between the time of GSA’s request for design and its request for construction funding. Some reasons for cost increases, such as the need for additional security or changing market conditions, affected several projects and could not have been easily anticipated. Other reasons for cost changes were unique to individual projects. It is important to provide decision makers with information about the costs, risks, and scope of projects before resources are committed. Such a practice would be consistent with our past work on leading practices in capital decision making. 
In the case of courthouse projects, GSA does not consistently explain project changes in documents provided to congressional decision makers. These changes may only be apparent if congressional decision makers compare the information submitted with the construction funding request to the information submitted, sometimes years earlier, with the design funding request. To improve the usefulness of the information on courthouse construction projects that GSA provides to Congress, we recommend that the Administrator of GSA, when requesting funding for those projects, identify and explain changes in estimated costs and building size from the information provided to Congress in prior project prospectuses or fact sheets. We provided a draft copy of this report to the Administrator of the General Services Administration and the Director of the Administrative Office of the U. S. Courts for their review and comment. On June 24, 2005, GSA provided us with written comments and concurred with our recommendation (see app. III). GSA noted that in 2004 it began notifying Congress when significant changes in scope and budget occurred in courthouse projects. While GSA started notifying the authorizing committees of significant changes to projects in 2004, it has not been notifying the appropriation committees of these changes. We believe that all the stakeholders should have the same information, and changes to the project should be included in the prospectuses as part of the funding process. GSA also noted changes it has made over the years to how it plans, budgets, and manages courthouse projects and provided technical clarifications, which we have incorporated in this report as appropriate. AOUSC provided technical clarifications, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committee, the Administrator of GSA, and the Director of the Administrative Office of the U.S. Courts. Copies will also be made available to other interested parties on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6670 or GoldsteinM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives of our report were to (1) compare estimated and actual costs for recently completed courthouse projects, (2) identify factors that contributed to differences between the estimated and actual costs of selected projects, and (3) identify strategies that were used to help control the costs of selected projects. To address these objectives, we reviewed project prospectuses and courthouse expenditure data; interviewed General Services Administration (GSA) and judiciary officials; and conducted a detailed review of seven completed courthouses around the country. We identified a total of 38 new courthouse construction projects completed since 1998 from information supplied by GSA’s Center for Courthouse Programs (CCP). 
We chose 1998 as a starting date to exclude the projects we had considered in our previous report on the courthouse construction program and to include only those projects that were designed and built during the period when a number of changes were made to the program, such as the implementation of 5-year plans by the Administrative Office of the U.S. Courts (AOUSC) and the establishment of the CCP. To determine estimated costs, we examined prospectuses and fact sheets submitted to Congress during the appropriations process. For 35 of the 38 projects completed since 1998, at least one estimate of total project cost was provided to Congress. Three projects did not go through the typical approval and funding process. GSA typically submits two requests for funding, one in the prospectus for design funding and another in the prospectus for construction funding, but this is not always the case. For some projects, the initial estimate was submitted in the form of a “Report of Building Project Survey,” sometimes called an 11(b) report after the section of the Public Buildings Act of 1959, which provides for such a report. In other cases, the estimate was submitted in the form of a one-page fact sheet either as a supplement to or in lieu of a prospectus. Prospectuses and fact sheets typically contain an estimated total project cost as the sum of separate estimates for site acquisition, design, management and inspection, and construction. For some projects, we added the construction cost estimate to the amounts previously appropriated for design and site acquisition to arrive at a total project cost estimate. To determine changes in proposed building size and parking, we compared documents submitted for the construction phase funding with those submitted for the design phase funding. To determine actual costs, we used data provided by GSA’s Public Building Service (PBS) Budget Office for all courthouse projects completed since 1998. We defined actual costs as all obligations recorded against each project through the end of fiscal year 2004 plus any claims paid from the U.S. Treasury Department’s Judgment Fund. According to information supplied by the PBS Budget Office, 13 of the 38 projects that we examined had at least one claim paid from the Judgment Fund, ranging from $65,000 on the St. Louis courthouse to over $20 million on the Cleveland courthouse. Claims are paid from the Judgment Fund when there are no funds left available in the project budget to pay the claims. The reported actual costs of the courthouse projects include only funds budgeted by GSA and specifically authorized by Congress for new construction and exclude items funded by the tenant agencies. We also reviewed appropriation acts for fiscal years 1993 through 2005 to identify funding appropriated for new courthouse construction projects and other relevant legislation relating to GSA’s construction authority. To compare actual with estimated costs, we calculated the percentage by which actual costs differed from estimates of the costs. When more than one estimate was provided for a project, we compared the actual costs with the initial and latest estimates. For example, when an 11(b) report was prepared for a project, we used this document as a source for the original estimate. Similarly, when a fact sheet was submitted after a construction prospectus for a project, we used the estimate provided in the fact sheet. 
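The percentage comparison itself is straightforward; the following minimal sketch shows one way it could be computed and screened against the 10 percent variance benchmark commonly cited in the construction industry. The project names and dollar figures are hypothetical and are not data from the 38 projects reviewed; as described above, the actual-cost figure for a project would include recorded obligations plus any claims paid from the Judgment Fund.

```python
# Minimal illustrative sketch of the estimate-versus-actual comparison;
# the project names and dollar amounts below are hypothetical, not data
# drawn from this report.

projects = {
    # name: (estimated total cost, actual obligations plus claims)
    "Courthouse A": (90_000_000, 94_500_000),
    "Courthouse B": (60_000_000, 58_200_000),
    "Courthouse C": (120_000_000, 138_000_000),
}

BENCHMARK = 10.0  # common construction-industry variance benchmark, in percent

for name, (estimate, actual) in projects.items():
    variance = (actual - estimate) / estimate * 100
    status = "exceeds benchmark" if abs(variance) > BENCHMARK else "within benchmark"
    print(f"{name}: {variance:+.1f}% ({status})")
```

When more than one estimate was available for a project, the same calculation would simply be repeated against the initial and the latest estimates.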
To identify factors that contributed to differences between estimated and actual costs and to identify the types of strategies used to control costs, we selected seven courthouses whose construction was completed between 2000 and 2004: Albany, Georgia; Cleveland, Ohio; Denver, Colorado; Erie, Pennsylvania; Gulfport, Mississippi; Las Vegas, Nevada; and Seattle, Washington. To select these courthouses, we considered a number of factors, including the range and scope of the cost changes, the size of the project, and the geographic location. For each of these seven courthouses, we obtained estimated and actual cost information by reviewing prospectuses, 11(b) reports, and fact sheets submitted to Congress and budgetary expenditure data provided by GSA. During our visits to the seven courthouses, we reviewed the relevant project files from GSA. We looked specifically for documentation of factors that contributed to or helped control cost changes, such as scope modifications, contractor and bid documents, change orders, and claims. We also interviewed GSA and judiciary officials responsible for each courthouse project, including judges, project managers, contracting officers, and other individuals involved during the design and construction phases of the courthouse. We also interviewed judiciary officials associated with the projects, including Administrative Office of the U.S. Courts (AOUSC) officials and judges. From the interviews and project file reviews, we obtained information on the extent of and reasons for the cost changes. We also reviewed GSA and AOUSC documents related to management controls, policies, procedures, and guidance for courthouse construction projects. For the estimated costs of the 38 courthouse projects, we relied on the original source documents, including the prospectuses that GSA provided to Congress. We assessed the reliability of actual cost data provided by GSA’s PBS Budget Office by (1) reviewing documents describing policies and procedures for the administrative control of funds, (2) interviewing knowledgeable agency officials about the data, and (3) reviewing an independent auditor’s report. We determined that the data were sufficiently reliable for the purposes of this report. We also corroborated much of the testimonial information provided by GSA and judiciary officials during our seven courthouse reviews by obtaining documentation of project management and cost changes during our file reviews. Because we selected a nonprobability sample of courthouses to review in detail, our findings are not generalizable to the 38 projects. In addition to those named above, Lindsay Bach, Maria Edelstein, Bess Eisenstadt, Daniel Hoy, David E. Sausville, Dave Stikkers, and Dorothy Yee made key contributions to this report.
The General Services Administration (GSA) and the federal judiciary are in the midst of a multibillion-dollar courthouse construction initiative aimed at addressing the housing needs of federal district courts and related agencies. From fiscal year 1993 through fiscal year 2005, Congress appropriated approximately $4.5 billion for 78 courthouse construction projects. 
GAO (1) compared estimated and actual costs for recently completed courthouse projects and determined what information GSA provided to Congress on changes to proposed courthouse projects, (2) identified factors that contributed to differences between the estimated and actual costs of seven projects selected for detailed review, and (3) identified strategies that were used to help control the costs of the seven selected projects. The actual costs of courthouse construction projects exceeded the estimated costs submitted to Congress at the design and construction phases by an average of 17 percent and 5 percent, respectively, and the reasons for the cost changes were not consistently explained. The actual costs were closer to the estimates provided at the construction phase, but the actual cost still varied widely from the estimate for some projects. Both the estimated cost and the proposed building size often changed between the two funding requests. GSA did not always indicate that changes had occurred or explain the reasons for the changes. Including this information would be consistent with leading practices in capital decision making. For the seven projects GAO reviewed in detail, most cost changes resulted from changes to the project's scope or from postponing the start of construction. For example, scope changes called for by security requirements and revisions to the U.S. Marshals Service's Design Guide increased the costs of some projects. Postponing the start of construction also increased costs because of inflation and changes in local market conditions. Factors that led to postponing construction included difficulties with site acquisition and GSA receiving funding later than anticipated. GSA used several strategies to help reduce or control costs for the seven projects, including value engineering, modified contracting methods, and involving tenant agencies. Value engineering was used during design on all projects, and in some cases, resulted in the use of less expensive materials to finish the courthouse interiors, but in other cases resulted in changes that could increase the long-term cost of operating the buildings. Some project managers used modified contracting methods to control costs by reducing the time between the design and construction phases. Project managers also used a variety of approaches for involving tenant agencies in decisions about the building design and informing them about the progress of the project. |
In our four annual reports issued from 2011 through 2014, we identified over 180 areas with approximately 440 actions that the executive branch and Congress could take to address fragmentation, overlap, and duplication; achieve other cost savings; or enhance revenue. Figure 1 outlines the definitions we use for fragmentation, overlap, and duplication. Although it may be appropriate for multiple agencies or entities to be involved in the same programmatic or policy area due to the nature or magnitude of the federal effort, the instances of fragmentation, overlap, or duplication that we include in our annual reports are in areas where multiple programs and activities may be creating inefficiencies. We consider programs or activities to be fragmented when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need, which may result in inefficiencies in how the government delivers services. We have identified fragmentation in multiple programs we reviewed. For example, in our 2014 annual report, we reported that the Department of Defense (DOD) does not have a consolidated agency-wide strategy to contract for health care professionals, resulting in a contracting approach that is largely fragmented. Although some of the military departments attempted to consolidate their health care staffing requirements through joint-use contracts, such contracts only accounted for approximately 8 percent of the $1.14 billion in obligations for health care professionals in fiscal year 2011. Moreover, in May 2013 we identified several instances in which a single military department awarded numerous task orders for the same type of health care professional in the same area or facility. For example, we identified 24 separate task orders for contracted medical assistants at the same military treatment facility. By not consolidating its requirements, this facility missed the opportunity to achieve potential cost savings and other efficiencies. Fragmentation also can be a harbinger of overlap or duplication. Overlap occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. We found overlap among federal programs or initiatives in a variety of areas, including housing assistance. In particular, in our 2012 annual report, we reported that 20 different entities administered 160 programs, tax expenditures, and other tools that supported homeownership and rental housing in fiscal year 2010. In addition, we identified 39 programs, tax expenditures, and other tools that provided assistance for buying, selling, or financing a home and 8 programs and tax expenditures that provide assistance to rental property owners. We found overlap in products offered and markets served by the Department of Agriculture’s (USDA) Rural Housing Service and the Department of Housing and Urban Development’s Federal Housing Administration, among others. In August 2012, we questioned the need for maintaining separate programs for rural areas. In other areas, we found evidence of duplication, which occurs when two or more agencies or programs engage in the same activities or provide the same services to the same beneficiaries. 
For example, we reported in 2013 that a total of 31 federal departments and agencies invested billions of dollars to collect, maintain, and use geospatial information— information linked to specific geographic locations that supports many government functions, such as maintaining roads and responding to natural disasters. We found that federal agencies had not effectively implemented policies and procedures that would help them identify and coordinate geospatial data acquisitions across the government. As a result, we found that agencies made duplicative investments and risk missing opportunities to jointly acquire data and save millions of dollars. In addition, opportunities exist to reduce the cost of government operations or enhance revenue collections. For example, our body of work has raised questions about whether DOD’s efforts to reduce headquarters overhead will result in meaningful savings. In 2013, the Secretary of Defense directed a 20 percent cut in management headquarters spending throughout DOD, to include the combatant commands and service component commands. However, our work found that mission and headquarters-support costs for the five geographic combatant commands and their service component commands we reviewed more than doubled from fiscal years 2007 through 2012, to about $1.7 billion. We recommended that DOD more systematically evaluate the sizing and resourcing of its combatant commands. If the department applied the 20 percent reduction in management headquarters spending to the $1.7 billion DOD used to operate and support the five geographic combatant commands in fiscal year 2012, we reported that DOD could achieve up to an estimated $340 million in annual savings. In our 2013 report, we reported that refining return-on-investment measures could improve how the Internal Revenue Service (IRS) allocates enforcement resources (subject to other considerations, such as minimizing compliance costs and ensuring equitable treatment of taxpayers). Our work illustrated that a small shift in existing resources— from examinations of less productive groups of tax returns to more productive groups—could potentially increase enforcement revenue by more than $1 billion. In addition, in our 2011 annual report, we identified opportunities for improving the Department of the Interior’s management of federal oil and gas resources. In particular, increasing the diligent development of federal lands and waters leased for oil and gas exploration and production and considering adjustments to Interior’s royalty rates to a level that would ensure the government a fair return, among other actions, could result in approximately $2 billion in revenues over 10 years. We found that the executive branch agencies and Congress have made progress in addressing the actions identified in our 2011–2014 annual reports. As shown in table 1, of the approximately 440 actions needed in these areas, 135 (29 percent) were addressed, 202 (44 percent) were partially addressed, and 103 (22 percent) were not addressed as of November 2014. Examples of progress made include DOD and Congressional actions to reduce DOD’s fragmented approach for acquiring combat uniforms. In 2013, we found that DOD’s fragmented approach could lead to increased risk on the battlefield for military personnel and increased development and acquisition costs. 
In response, DOD developed and issued guidance on joint criteria that will help to ensure that future service-specific uniforms will provide equivalent levels of performance and protection. In addition, a provision in the National Defense Authorization Act for Fiscal Year 2014 established as policy that the Secretary of Defense shall eliminate the development and fielding of service-specific combat and camouflage utility uniforms in order to adopt and field common uniforms for specific environments to be used by all members of the armed forces. Subsequently, the Army chose not to introduce a new family of camouflage uniforms into its inventory, in part because of this legislation, resulting in a cost avoidance of about $4.2 billion over 5 years. In addition, progress has been made in addressing the proliferation of certain programs. For example, the National Science and Technology Council (NSTC) implemented our suggested actions to better manage overlap across science, technology, engineering, and mathematics (STEM) education programs. Specifically, NSTC released guidance to agencies on how to align their programs and budget submissions with the goals of NSTC's 5-year strategic plan for STEM education and on developing evaluations for the programs. In addition, several programs were eliminated or consolidated into new programs, with the total number of STEM education programs dropping from 209 funded in fiscal year 2010 to 158 funded in 2012. The President's fiscal year 2016 budget proposes to further consolidate and eliminate 20 STEM programs across eight agencies. These efforts help agencies better target resources toward programs with positive outcomes and achieve the greatest impact in developing a pipeline of future workers in STEM fields. As another example, the Workforce Innovation and Opportunity Act, enacted in July 2014, includes provisions to strengthen the workforce development system under which a variety of employment and training services are provided to program participants. In particular, the law requires that states develop a unified state plan that covers all designated core programs to receive certain funding. States' implementation of the requirement may enable them to increase administrative efficiencies in employment and training programs—a key objective of our prior recommendations on employment and training programs. We estimated that executive branch and congressional efforts to address suggested actions resulted in roughly $20 billion in financial benefits from fiscal years 2011 through 2014, with another approximately $80 billion in additional benefits projected to be accrued through 2023. For example, in our 2011 annual report, we stated that the ethanol tax credit would cost about $5 billion in forgone revenues in 2011 and that Congress could reduce annual revenue losses by addressing duplicative federal efforts directed at increasing domestic ethanol production. To reduce these revenue losses, we suggested that Congress consider whether revisions to the ethanol tax credit were needed and suggested options to consider, including allowing the credit for the volumetric ethanol excise tax (for fuel blenders that purchase and blend ethanol with gasoline) to expire at the end of 2011. Congress allowed the tax credit to expire at the end of 2011. In our 2012 annual report, we presented options for adjusting the Transportation Security Administration's (TSA) passenger security fee—a uniform fee on passengers of U.S.
and foreign air carriers originating at airports in the United States—to offset billions of dollars in civil aviation security costs. The Bipartisan Budget Act of 2013 modifies the passenger security fee from its current per enplanement structure ($2.50 per enplanement with a maximum one-way-trip fee of $5.00) to a structure that increases the passenger security fee to a flat $5.60 per one-way-trip, effective July 1, 2014. Specifically, this legislation identifies $12.6 billion in fee collections that, over a 10-year period beginning in fiscal year 2014 and continuing through fiscal year 2023, will contribute to deficit reduction. Fees collected beyond those identified for deficit reduction are available, consistent with existing law, to offset TSA's aviation security costs. This fee is expected to cover 43 percent of aviation security costs beginning in fiscal year 2014, compared with the approximately 30 percent offset under the previous fee structure. Table 2 outlines a number of addressed actions that resulted in or are expected to result in cost savings or enhanced revenue. We plan to release an update on the status of all actions presented in our 2011–2014 reports in conjunction with our next annual report in April 2015. The executive branch agencies and Congress have made progress in addressing some suggested actions, but many other actions require leadership attention to ensure that they will be fully addressed. More specifically, 68 percent of actions directed to Congress and 66 percent of actions directed to executive branch agencies identified in our 2011–2014 annual reports remain partially addressed or not addressed. As illustrated below, our work identified areas of fragmentation, overlap, or duplication that spanned the range of government activities, along with opportunities to address these issues. Without increased or renewed leadership focus, opportunities will be missed to improve the efficiency and effectiveness of programs and save taxpayer dollars. Our work on defense has highlighted opportunities to address overlapping and potentially duplicative services that result from multiple entities providing the same service, including the following examples: Defense Satellite Control Operations: In our 2014 annual report, we reported that DOD has increasingly deployed dedicated satellite control operations networks, as opposed to shared networks that support multiple kinds of satellites. For example, at one Air Force base in 2013, eight separate control centers operated satellites for 10 satellite programs. Furthermore, the Air Force alone spent about $2.1 billion on satellite operations in fiscal year 2011. While dedicated networks can offer some benefits to programs, they also can be more costly to maintain and have led to fragmented and potentially duplicative networks that require more infrastructure and personnel to manage as compared with shared networks. We suggested that DOD take actions to improve its ability to identify and then assess the appropriateness of a shared versus dedicated satellite control system, which DOD has begun to address. Electronic Warfare: We reported in 2011 that all four military services in DOD had been separately developing and acquiring new airborne electronic attack systems and that spending on new and updated systems was projected to total more than $17.6 billion during fiscal years 2007–2016.
While the department has taken steps to better inform its investments in airborne electronic attack capabilities, it has yet to assess its plans for developing and acquiring two new expendable jamming decoys to determine if these initiatives should be merged. For example, in fiscal year 2015 one DOD decoy system is already in production, while DOD defines performance requirements for another decoy system. Without an assessment for potential duplication, DOD may preclude the timely identification and prevention of unnecessary overlap between its systems. Unmanned Aircraft Systems: We reported in 2012 that DOD’s cost estimates for acquisition programs for unmanned aircraft systems (UAS) and related systems exceeded $37.5 billion for fiscal years 2012–2016. We found that military service-driven requirements, rather than an effective department-wide strategy, led to overlap in DOD’s UAS capabilities, resulting in programs and systems being pursued that have similar flight characteristics and mission requirements. To reduce the likelihood of overlap and potential duplication in DOD’s UAS portfolio, we suggested several actions to DOD that have not been fully implemented. The overlap in current UAS programs, as well as continued potential for overlap in future programs, shows that DOD must do more to implement these actions. Our analysis suggests that the potential for savings would be significant, and with DOD’s continued commitment to UAS for meeting strategic requirements, action is all the more imperative. More broadly, we identified multiple weaknesses in the way DOD acquires weapon systems and the actions that are needed to address these issues, which we recently highlighted in our high-risk series update. For example, further progress must be made in tackling the incentives that drive the acquisition process and its behaviors, applying best practices, attracting and empowering acquisition personnel, reinforcing desirable principles at the beginning of programs, and improving the budget process to allow better alignment of programs and their risks and needs. Addressing these issues could help DOD improve the returns on its $1.5 trillion investment in major weapon systems and find ways to deliver capabilities for less than it has in the past. The federal government plans to spend $79 billion on information technology (IT) in fiscal year 2015. The magnitude of these expenditures highlights the importance of avoiding duplicative investments to better ensure the most efficient use of resources. Opportunities remain to reduce duplication and the cost of government operations in critical IT areas, many of which require agencies to work together to improve systems, including the following examples: Information Technology Investment Portfolio Management: To better manage existing IT systems, the Office of Management and Budget (OMB) launched the PortfolioStat initiative. PortfolioStat requires agencies to conduct an annual, agency-wide review of their IT portfolios to reduce commodity IT spending and demonstrate how their IT investments align with their missions and business functions, among other things. In 2014, we reported that while the 26 federal agencies required to participate in PortfolioStat had made progress in implementing OMB’s initiative, weaknesses existed in agencies’ implementation of the initiative, such as limitations in the Chief Information Officers’ authority. 
As noted in our recent high-risk update, we made more than 60 recommendations to improve OMB and agencies’ implementation of PortfolioStat and provide greater assurance that agencies will realize the nearly $6 billion in savings they estimated they would achieve through fiscal year 2015. Federal Data Centers: In 2014, we found that consolidating federal data centers would provide an opportunity to improve government efficiency and achieve cost savings and avoidances of about $5.3 billion by fiscal year 2017. Although OMB has taken steps to identify data center consolidation opportunities across agencies, weaknesses exist in the execution and oversight of the consolidation efforts. For example, we previously reported that all 24 departments and agencies in the Federal Data Center Consolidation Initiative had not yet completed a data center inventory or the consolidation plans to implement their consolidation initiative. It will continue to be important for agencies to complete their inventories and implement their plans for consolidation to better ensure continued progress toward OMB’s planned consolidation, optimization, and cost-savings goals. DOD and Department of Veterans Affairs (VA) Electronic Health Records System: DOD and VA abandoned their plans to develop a single electronic system for health records that both departments would share. Although the departments’ 2008 study showed that over 97 percent of inpatient functional requirements were common to both DOD and VA, they decided to pursue separate electronic health record system modernization efforts. In February 2014, we reported that the departments had based this decision on the assertion that pursuing separate systems would be less expensive and faster than the single, shared-system approach. However, they had not supported this assertion with cost and schedule estimates that compared the separate efforts with estimates for the single-system approach. Through continued duplication of these efforts, the departments may be incurring unnecessary system development and operation costs and missing opportunities to support higher-quality health care for servicemembers and veterans. The departments plan to make the separate systems interoperable as required by law. Given the federal government’s continued experience with failed and troubled IT projects, coupled with the fact that OMB initiatives to help address such problems have not been fully implemented, we added improving the management of IT acquisitions and operations to our 2015 high-risk list. The federal information technology acquisition reforms enacted in December 2014 reinforce a number of the actions that we have recommended to address IT management issues. For example, the law containing these reforms codifies federal data center consolidation, emphasizing annual reporting on cost savings and detailed metric reporting, and OMB’s PortfolioStat process, focusing on reducing duplication, consolidation, and cost savings. If effectively implemented, this legislation should improve the transparency and management of IT acquisitions and operations across the government. Twenty-seven federal agencies plan to spend about $58 billion—almost three-quarters of the overall $79 billion budgeted for federal IT in fiscal year 2015—on the operations and maintenance of legacy (i.e., steady-state) investments. The significance of these numbers highlights the importance of ensuring that OMB’s PortfolioStat and Data Center Consolidation initiatives meet their cost-savings goals. 
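The portfolio reviews described above are, at bottom, an exercise in grouping an agency's IT spending so that overlapping commodity investments become visible. The sketch below is a minimal, hypothetical illustration of that idea; the bureaus, category labels, and dollar figures are invented for the example and are not drawn from OMB's PortfolioStat submissions or any agency's actual data.

```python
from collections import defaultdict

# Hypothetical inventory rows: (bureau, investment name, category, annual cost in $M).
# All values are illustrative only, not actual agency data.
inventory = [
    ("Bureau A", "Email system 1",      "commodity: email/collaboration", 12.0),
    ("Bureau B", "Email system 2",      "commodity: email/collaboration",  9.5),
    ("Bureau C", "Data center Alpha",   "commodity: data center hosting", 30.0),
    ("Bureau D", "Data center Beta",    "commodity: data center hosting", 22.0),
    ("Bureau A", "Case management app", "mission system",                 18.0),
]

def consolidation_candidates(rows, min_investments=2):
    """Group spending by category and flag categories funded by more than one bureau,
    which a portfolio review would examine as potential overlap."""
    by_category = defaultdict(list)
    for bureau, name, category, cost in rows:
        by_category[category].append((bureau, name, cost))
    candidates = {}
    for category, items in by_category.items():
        bureaus = {bureau for bureau, _, _ in items}
        if len(items) >= min_investments and len(bureaus) > 1:
            candidates[category] = {
                "investments": items,
                "total_cost_millions": sum(cost for _, _, cost in items),
            }
    return candidates

for category, info in consolidation_candidates(inventory).items():
    print(f"{category}: {len(info['investments'])} investments, "
          f"${info['total_cost_millions']:.1f}M combined")
```

In practice, agencies report far richer investment data, and a flagged category is only a candidate for consolidation, not proof of duplication.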
We identified several opportunities to help address the proliferation of certain education and training programs and improve the delivery of benefits, which the executive branch agencies and Congress have been working to address. However, additional opportunities remain to more effectively invest in education and training programs, including the following examples: Teacher Quality: Federal efforts to improve teacher quality led to the creation and expansion of a variety of programs across the federal government; however, there is no government-wide strategy to minimize fragmentation, overlap, or duplication among these programs. Specifically, in our 2011 annual report we identified 82 distinct programs designed to help improve teacher quality, either as a primary purpose or as an allowable activity. Many of these programs (administered across 10 federal agencies) shared similar goals. We suggested that Congress could enact legislation to eliminate teacher quality programs that are too small to evaluate cost-effectively or to combine programs serving smaller target groups into a larger program. In February 2015, the House Committee on Education and Workforce reported the Student Success Act, H.R. 5. According to House Report 114-24, H.R. 5 would consolidate most teacher quality programs into a new flexible grant program. In addition, we suggested that Congress could include legislative provisions to help the Department of Education reduce fragmentation, such as by giving broader discretion to the agency to move resources from certain programs. In February 2015, the Senate Committee on Health, Education, Labor, and Pensions reported the Strengthening Education through Research Act, S. 227, which would authorize the Department of Education to reserve and consolidate funds from Elementary and Secondary Education Act programs to carry out high-quality evaluations and increase the usefulness of those evaluations. These bills, if enacted, could help eliminate some of the barriers to educational program alignment and help invest scarce resources more effectively. Employment for Persons with Disabilities: In June 2012, we reported on 45 programs administered by nine federal agencies that supported employment for people with disabilities and found these programs were fragmented and often provided similar services to similar populations. OMB has worked with executive agencies to propose consolidating or eliminating some of these programs. In particular, three programs were eliminated in the Workforce Innovation and Opportunity Act: the Veterans’ Workforce Investment Program, administered by the Department of Labor, and the Migrant and Seasonal Farmworker Program and Projects with Industry, administered by the Department of Education. However, OMB has not yet systematically looked across all agencies and programs—beyond those already identified in the Departments of Education and Labor— for opportunities to streamline and improve service delivery, which could help achieve greater efficiency and effectiveness. Table 3 outlines these and other examples of opportunities for consolidating or streamlining programs to better provide services. Opportunities also exist to achieve cost savings or enhance revenue collection. As with the opportunities to address fragmentation, overlap and duplication, fully achieving these opportunities will require sustained leadership by executive branch agencies and Congress. 
Examples of these actions include rescinding unobligated funds, improving fiscal oversight of Medicare and Medicaid, reducing contract spending through strategic sourcing, and increasing tax revenue collections. We reported in March 2013 that the Department of Energy (DOE) was not actively considering any applications under the Advanced Technology Vehicle Manufacturing loan program that was established to provide loans for projects to produce more fuel-efficient passenger vehicles and their components. In our 2014 annual report, we suggested that unless DOE could demonstrate a demand for new loans and viable applications, Congress might wish to consider rescinding all or part of the remaining $4.2 billion in credit subsidy appropriations made available under this program. Since our April 2014 annual report, DOE has not yet demonstrated a demand for these loans that would substantially use the remaining credit subsidy appropriations. The department received four complete applications seeking a total of $945 million in loans, which represents 5.7 percent of the program's remaining $16.6 billion in loan authority. DOE officials stated that the program anticipated issuing conditional commitments for loans in fiscal year 2015. In January 2015, the Savings, Accountability, Value, and Efficiency Act of 2015 was introduced in the House of Representatives; it includes a provision to rescind unobligated balances of funding for the program, including the remaining credit subsidy appropriations.
Improving Fiscal Oversight of Medicare and Medicaid
Over the years, we identified a number of actions that have the potential for sizable cost savings through improved fiscal oversight in the Medicare and Medicaid programs. For example, the Centers for Medicare & Medicaid Services (CMS), the agency in the Department of Health and Human Services (HHS) that is responsible for overseeing both programs, could save billions of dollars by improving the accuracy of its payments to Medicare Advantage programs, such as through methodology adjustments to account for diagnostic coding differences between Medicare Advantage and traditional Medicare. In addition, we found that federal spending on Medicaid demonstrations could be reduced by billions of dollars if HHS were required to improve the process for reviewing, approving, and making transparent the basis for spending limits approved for Medicaid demonstrations. In particular, our work between 2002 and 2013 has shown that HHS approved several demonstrations without ensuring that they would be budget neutral to the federal government. To address this issue, we suggested that Congress could require the Secretary of Health and Human Services to improve the Medicaid demonstration review process, through steps such as improving the review criteria, better ensuring that valid methods are used to demonstrate budget neutrality, and documenting and making clear the basis for the approved limits. In September 2014, the Chairman, House Committee on Energy and Commerce, and Ranking Member, Senate Committee on Finance, sent a letter to CMS asking for additional information on steps the agency was taking to improve the budget neutrality of demonstrations. Enhancing the process HHS uses to demonstrate budget neutrality of its demonstrations could save billions in federal expenditures.
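Budget neutrality, as used above, can be stated in simplified form: over the approval period of a demonstration, projected federal spending under the demonstration's spending limit should not exceed what the federal government would have been expected to spend absent the demonstration. A rough formalization (setting aside details such as the per-member-per-month caps and trend-rate assumptions that actual agreements typically involve) is:

\[
\sum_{t=1}^{T} F^{\text{demo}}_{t} \;\le\; \sum_{t=1}^{T} \widehat{F}^{\text{no demo}}_{t},
\]

where \(F^{\text{demo}}_{t}\) is federal Medicaid spending under the demonstration in year \(t\), \(\widehat{F}^{\text{no demo}}_{t}\) is projected federal spending without the demonstration, and \(T\) is the length of the approval period. The weaknesses GAO identified center on the right-hand side: if the without-demonstration baseline is set too high or with methods that are not valid, the comparison can be satisfied even when the demonstration increases federal costs.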
Reducing Contract Spending through Strategic Sourcing
In 2013, we reported that federal agencies could achieve significant cost savings annually by expanding and improving their use of strategic sourcing—a contracting process that moves away from numerous individual procurement actions to a broader aggregated approach. In particular, we reported that a reduction of 1 percent in spending from large procurement agencies, such as DOD, would equate to over $4 billion in savings. However, a lack of clear guidance on metrics for measuring success has hindered the management of ongoing strategic sourcing efforts across the federal government. Since our 2013 report, OMB has made progress by issuing guidance on calculating savings for government-wide strategic sourcing contracts and in December 2014 issued a memorandum on category management, which in part identifies federal spending categories suitable for strategic sourcing. These categories cover some of the government's largest spending categories, including IT and professional services. As part of this effort, OMB directed the General Services Administration to develop additional guidance and performance metrics. However, until OMB sets government-wide goals and establishes metrics, the government may miss opportunities for cost savings through strategic sourcing. In addition, strategic sourcing could play a role in helping DOD acquire services more efficiently. In our recent high-risk work, we noted that DOD made some progress in acquiring services through strategic sourcing, but had more to do. For example, as of March 2014, DOD had identified some of its high-spend categories as candidates for strategic sourcing, such as IT. Further, DOD appointed individuals within specified portfolios of major areas of DOD services spending to help coordinate strategic sourcing efforts. But according to DOD officials, DOD still is developing the roles, responsibilities, and authorities for some of these offices. Additionally, the department has not yet issued guidance establishing goals and metrics to track progress. IRS estimated that the gross tax gap—the difference between taxes owed and taxes paid on time—was $450 billion for tax year 2006 (the most recent year for which data were available). IRS estimated that it eventually would recover about $65 billion of this amount through late payments and enforcement actions, leaving a net tax gap of $385 billion. Because of its magnitude, even a 1 percent improvement in the net tax gap would generate almost $4 billion in revenue collections annually (the arithmetic is illustrated below). Over the last 4 years, our work identified multiple opportunities for the government to increase revenue collections. For example, in 2014, we identified three actions that Congress could authorize and that could increase tax revenue collections from delinquent taxpayers by hundreds of millions of dollars over a 5-year period: limiting issuance of passports to applicants, levying payments to Medicaid providers, and identifying security clearance applicants. For example, Congress could consider requiring the Secretary of State to prevent individuals who owe federal taxes from receiving passports. We found that in fiscal year 2008, passports were issued to about 16 million individuals; about 1 percent of these collectively owed more than $5.8 billion in unpaid federal taxes as of September 30, 2008.
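Before turning to the potential savings from the passport option, note that the tax gap figures cited above fit together as simple arithmetic:

\[
\underbrace{\$450\ \text{billion}}_{\text{gross tax gap}} - \underbrace{\$65\ \text{billion}}_{\text{late payments and enforcement}} = \underbrace{\$385\ \text{billion}}_{\text{net tax gap}}, \qquad 0.01 \times \$385\ \text{billion} \approx \$3.9\ \text{billion},
\]

which is the source of the almost $4 billion in additional annual collections that a 1 percent improvement in the net tax gap would represent.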
According to a 2012 Congressional Budget Office estimate, the federal government could save about $500 million over a 5-year period by revoking or denying passports in cases of certain federal tax delinquencies. Table 4 highlights these and other opportunities that could result in tens of billions of dollars in cost savings or enhanced revenue. Addressing fragmentation, overlap, and duplication within the federal government is challenging. Even with sustained leadership, these are difficult issues to address because they may require agencies and Congress to re-examine (within and across various mission areas) the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities with entrenched constituencies. As we have previously reported, these challenges are compounded by a lack of reliable budget and performance information. If fully and effectively implemented, the GPRA Modernization Act of 2010 (GPRAMA) and the Digital Accountability and Transparency Act of 2014 (DATA Act) hold promise for helping to improve performance and budget information and helping to address challenges in identifying and addressing areas of fragmentation, overlap, and duplication. In particular: GPRAMA establishes a framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. Effective implementation of GPRAMA could help clarify desired outcomes, address program performance spanning multiple organizations, and facilitate future actions to reduce, eliminate, or better manage fragmentation, overlap, and duplication. The DATA Act requires actions that would help make spending data comparable across programs, allowing executive branch agencies and Congress to accurately measure the costs and magnitude of federal investments. As we have previously reported, better data and a greater focus on expenditures and outcomes are essential to improving the efficiency and effectiveness of federal efforts. We are committed to monitoring the implementation of these acts to improve budget and performance information and help executive branch agencies and Congress address fragmentation, overlap, and duplication. Reducing improper payments could result in significant cost savings. The Improper Payments Information Act of 2002 (IPIA)—as amended by the Improper Payments Elimination and Recovery Act of 2010 (IPERA) and the Improper Payments Elimination and Recovery Improvement Act of 2012 (IPERIA)—requires executive branch agencies to (1) review all programs and activities, (2) identify those that may be susceptible to significant improper payments, (3) estimate the annual amount of improper payments for those programs and activities, (4) implement actions to reduce improper payments and set reduction targets, and (5) report on the results of addressing the foregoing requirements. For the first time in recent years, the government-wide improper payment estimate increased in fiscal year 2014, primarily due to significant increases in the improper payment estimates for Medicare, Medicaid, and the Earned Income Tax Credit (EITC). These programs combined account for over 76 percent of the government-wide estimate. We have made numerous recommendations that, if effectively implemented, could help improve program management, reduce improper payments in these programs, and achieve cost savings.
While recent laws and guidance have focused attention on the issue, agencies continue to face challenges in reducing improper payments, such as statutory limitations and compliance issues. Agency improper payment estimates totaled $124.7 billion in fiscal year 2014, a significant increase (almost $19 billion) from the prior year's estimate of $105.8 billion. The estimated improper payments for fiscal year 2014 were attributable to 124 programs spread among 22 agencies. Table 5 shows the 12 programs with reported improper payment estimates exceeding $1 billion for fiscal year 2014, which accounted for approximately 93 percent of the government-wide estimate. When excluding DOD's Defense Finance and Accounting Service Commercial Pay program, the reported government-wide error rate was 4.5 percent of program outlays in fiscal year 2014, compared with 4.0 percent reported in fiscal year 2013. The increase in the 2014 estimate is attributed primarily to increased error rates in three major programs: HHS's Medicare Fee-for-Service, HHS's Medicaid, and Treasury's Earned Income Tax Credit. As shown in figure 2, improper payment estimates for Medicare, Medicaid, and the Earned Income Tax Credit accounted for approximately 76 percent of the government-wide estimate for fiscal year 2014. Improper payment estimates for Medicare, Medicaid, and the EITC are among the highest estimates government-wide, and federal spending in Medicare and Medicaid is expected to significantly increase. Consequently, it is critical that actions are taken to reduce improper payments in these programs. Over the past several years, we made numerous recommendations that, if effectively implemented, could improve program management, help reduce improper payments in these programs, and achieve cost savings. In fiscal year 2014, Medicare financed health services for approximately 54 million elderly and disabled beneficiaries at a cost of $603 billion and reported an estimated $60 billion in improper payments. Medicare spending generally has grown faster than the economy, and in the coming years, continued growth in the number of Medicare beneficiaries and in program spending will create increased challenges for the federal government. CMS, which administers Medicare, has demonstrated strong commitment to reducing improper payments, particularly through its dedicated Center for Program Integrity. For example, CMS centralized the development and implementation of automated edits—prepayment controls used to deny Medicare claims that should not be paid—which will help ensure greater consistency in paying only those claims that align with national policies. Additionally, CMS awarded a contract to a Federal Bureau of Investigation-approved contractor that will enable the agency to conduct fingerprint-based criminal history checks of high-risk providers and suppliers. Nevertheless, in our February 2015 update to our high-risk series, we reported that while CMS has demonstrated efforts to reduce improper payments in the Medicare program, improper payment rates have remained unacceptably high. To achieve and demonstrate reductions in the amount of Medicare improper payments, CMS should fully exercise its authority related to strengthening its provider and supplier enrollment provisions and address our open recommendations related to prepayment and postpayment claims review activities.
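One set of open recommendations concerns prepayment edits, which are discussed in more detail below. To make the idea concrete, the sketch that follows is a deliberately simplified, hypothetical illustration, not CMS's actual claims-processing logic or any Medicare administrative contractor's implementation; the service codes and quantity limits are invented. It aggregates the quantities billed for the same beneficiary, provider, service, and date before applying a limit, so that splitting a service across multiple claim lines or claims does not evade the check.

```python
from collections import defaultdict

# Hypothetical daily quantity limits per service code (illustrative values only).
DAILY_QUANTITY_LIMITS = {"SVC123": 1, "SVC456": 4}

def flag_excess_quantities(claim_lines):
    """Sum billed quantities across all claim lines for the same
    (beneficiary, provider, service, date) and flag totals over the limit."""
    totals = defaultdict(int)
    for line in claim_lines:
        key = (line["beneficiary_id"], line["provider_id"],
               line["service_code"], line["service_date"])
        totals[key] += line["quantity"]

    flags = []
    for (beneficiary, provider, service, date), quantity in totals.items():
        limit = DAILY_QUANTITY_LIMITS.get(service)
        if limit is not None and quantity > limit:
            flags.append({"beneficiary_id": beneficiary, "provider_id": provider,
                          "service_code": service, "service_date": date,
                          "billed_quantity": quantity, "daily_limit": limit})
    return flags

# Two claim lines that individually stay under the limit but together exceed it.
lines = [
    {"beneficiary_id": "B1", "provider_id": "P9", "service_code": "SVC123",
     "service_date": "2014-06-01", "quantity": 1},
    {"beneficiary_id": "B1", "provider_id": "P9", "service_code": "SVC123",
     "service_date": "2014-06-01", "quantity": 1},
]
print(flag_excess_quantities(lines))  # flags a combined quantity of 2 against a limit of 1
```

An edit that evaluated each claim line in isolation would pass both lines in the example; aggregating first is what closes the loophole described in the recommendation below.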
Table 6 summarizes recommendations we made that are still open and procedures authorized by the Patient Protection and Affordable Care Act (PPACA) that CMS should implement to help reduce Medicare improper payments. Specifically, the following actions could help reduce Medicare improper payments. Improving use of automated edits. To help ensure that payments are made properly, CMS uses controls called edits that are programmed into claims processing systems to compare claims data with Medicare requirements in order to approve or deny claims or flag them for further review. In November 2012, we reported that use of prepayment edits saved Medicare at least $1.76 billion in fiscal year 2010, but savings could have been greater if prepayment edits had been more widely used. To promote greater use of effective prepayment edits and better ensure that payments are made properly, we recommended that CMS require Medicare administrative contractors to (1) share information about the underlying policies and savings related to their most effective edits; and (2) improve automated edits that assess all quantities provided to the same beneficiary by the same provider on the same day, so providers cannot avoid claim denials by billing for services on multiple claim lines or multiple claims. Monitoring postpayment claims reviews. CMS uses four types of contractors to conduct postpayment claims reviews to identify improper payments. In July 2013, we found that although postpayment claims reviews involved the same general process regardless of which type of contractor conducted them, CMS had different requirements for many aspects of the process across the four contractor types. Some of these differences might impede the efficiency and effectiveness of claims reviews by increasing administrative burden for providers. Furthermore, in July 2014, we reported that while CMS had taken steps to prevent its contractors from conducting certain duplicative postpayment claims reviews, CMS did not have reliable data or provide sufficient oversight and guidance to measure and fully prevent duplication. To improve the efficiency and effectiveness of Medicare program integrity efforts, we recommended that CMS reduce differences between contractor postpayment review requirements, when possible, and monitor the database used to track recovery audit activities to ensure that all data were submitted, accurate, and complete. Removing Social Security numbers from Medicare cards. The health insurance claims number on Medicare beneficiaries' cards includes as one component the Social Security number of the beneficiary (or of another eligible person, such as a spouse). This introduces risks that the beneficiaries' personal information could be obtained and used to commit identity theft. In September 2013, we reported that CMS had not taken needed steps that would result in selecting and implementing a technical solution for removing Social Security numbers from Medicare cards. To position the agency to efficiently and cost-effectively identify, design, develop, and implement a solution to address this issue, we recommended that CMS direct the initiation of an IT project for identifying, developing, and implementing changes that would have to be made to CMS's affected systems. Implementing actions authorized by PPACA. In addition to provisions to expand health insurance coverage, PPACA provides CMS with certain authorities to combat fraud, waste, and abuse in Medicare.
We reported in our February 2015 update to our high-risk series that CMS should fully exercise its PPACA authority related to strengthening its provider and supplier enrollment provisions. For example, CMS should require surety bonds—a three-party agreement in which a company, known as a surety, agrees to compensate the bondholder if the bond purchaser fails to keep a specified promise—for certain providers and suppliers. In fiscal year 2014, the federal share of estimated Medicaid outlays was $304 billion, and HHS reported approximately $17.5 billion in estimated Medicaid improper payments. The size and diversity of the Medicaid program make it particularly vulnerable to improper payments—including payments made for people not eligible for Medicaid or for services not actually provided. CMS has an important role in overseeing and supporting state efforts to reduce and recover improper payments and has demonstrated some leadership commitment in this area. For example, CMS issued guidance to improve corrective actions taken by states. CMS also established the Medicaid Integrity Institute, which provides training and technical assistance to states on approaches to prevent improper payments and guidance on program integrity issues. In our February 2015 high-risk update, we reported that while CMS had taken these positive steps in recent years, it had yet to fully address issues and implement recommendations in several areas. These issues include improving the completeness and reliability of key data needed for ensuring effective oversight, implementing effective program integrity processes for managed care, ensuring clear reporting of overpayment recoveries, and refocusing efforts on approaches that are cost-effective. Table 7 summarizes recommendations we made that remain open and that CMS should implement to help reduce Medicaid improper payments. Specifically, we recommended the following actions to help reduce Medicaid improper payments and improve program integrity. Improving third-party liability efforts. Congress generally established Medicaid as the health care payer of last resort, meaning that if enrollees have another source of health care coverage—such as private insurance—that source should pay, to the extent of its liability, before Medicaid does. This is referred to as third-party liability. However, there are known challenges to ensuring that Medicaid is the payer of last resort. For example, states have reported challenges working with private insurers, including insurers' unwillingness to release coverage information to states and their denial of claims for procedural reasons. While CMS has issued guidance to states, we recommended additional actions that could help to improve cost-saving efforts in this area, such as monitoring and sharing information on third-party liability efforts and challenges across all states and providing guidance to states on oversight of third-party liability efforts related to Medicaid managed care plans. Increasing oversight of managed care. Medicaid finances the delivery of health care services to beneficiaries through fee-for-service payments to participating providers and capitated payments to managed care organizations. Most Medicaid beneficiaries are in managed care, and managed care expenditures have been growing at a faster rate than fee-for-service expenditures.
In May 2014, we reported that most state and federal program integrity officials we interviewed told us that they did not closely examine managed care payments, focusing on fee-for-service claims instead. To help improve the efficiency and effectiveness of program integrity efforts, we recommended that CMS require states to conduct audits of payments to and by managed care organizations, update managed care guidance on program integrity practices, and provide states with additional support in overseeing managed care program integrity. Strengthening program integrity. CMS has taken positive steps to oversee program integrity efforts in Medicaid, including implementing certain recommendations we made. However, CMS needs to take action to address issues and recommendations that have not been fully implemented, such as improving reporting of key data, strengthening its efforts to calculate return on investment for its program integrity efforts, and using knowledge gained from its comprehensive reviews of states to better focus audit resources and improve recovery of improper payments. In fiscal year 2014, IRS reported program payments of $65.2 billion for the EITC. According to IRS, an estimated 27.2 percent, or $17.7 billion, of these program payments were improper. The estimated improper payment rate for EITC has remained relatively unchanged since fiscal year 2003 (the first year IRS had to report estimates of these payments to Congress), but the amount of improper EITC payments increased from an estimated $10.5 billion in fiscal year 2003 to nearly $18 billion in fiscal year 2014. Improper EITC payments generally stem from two types of errors. Authentication errors involve eligibility requirements, such as qualifying child requirements, taxpayers' filing status, and EITC claims associated with complex or nontraditional living situations. Verification errors relate to IRS's inability to identify individuals improperly reporting income to erroneously claim EITC amounts to which they are not entitled. Verification errors include underreporting and overreporting of income by wage earners as well as taxpayers who report that they are self-employed. Although the EITC program has been modified a number of times since its enactment in 1975 to reduce complexity and help improve the program's administration, complexity has remained a key factor contributing to improper payments in the program. IRS has undertaken a number of compliance and enforcement activities to reduce EITC improper payments, and in fiscal year 2014 it protected an estimated $3.5 billion in federal revenue. Among other things, IRS uses audits to help identify EITC improper payments, and in June 2014, we reported that about 45 percent of correspondence audits (audits done by mail) that closed in fiscal year 2013 focused on EITC issues. IRS has reported that tax returns with EITC claims were twice as likely to be audited as other tax returns. However, we found that the effectiveness of these audits may be limited because of regular backlogs in responding to taxpayers since 2011 and unclear correspondence that generated additional work for IRS, such as telephone calls to IRS examiners. These issues have imposed unnecessary burdens on taxpayers and costs for IRS. IRS acknowledged these concerns and the limitations faced in significantly reducing EITC improper payments using the traditional audit process. Consequently, IRS initiated several programs to address EITC improper payments, such as increasing outreach and education to taxpayers and return preparers.
Legislative action and significant changes in IRS compliance processes likely would be necessary to make any meaningful reduction in improper payments. Recently, we recommended matters for congressional consideration or executive actions that, if effectively implemented, could help to reduce EITC improper payments. Regulating paid tax preparers. In August 2014, IRS reported that 68 percent of all tax returns claiming the EITC in tax years 2006 and 2007 were prepared by paid tax preparers—most of whom were not subject to any IRS regulation—and that 43 to 50 percent of the returns overclaimed the credit. Similarly, in our undercover visits to randomly selected tax preparers, a sample that cannot be generalized, we found errors in EITC claims, resulting in significant overstatement of refunds. Establishing requirements for paid tax return preparers could improve the accuracy of the tax returns they prepare. Based in part on our recommendation, in 2010 IRS initiated steps to regulate certain preparers through testing and education requirements. However, the courts ruled that IRS lacked such regulatory authority. Although IRS began a voluntary program to recognize preparers who complete continuing education and testing requirements, mandating these requirements could have a greater impact on tax compliance. In 2014, we suggested that Congress consider granting IRS the authority to regulate paid tax preparers, if it agrees that significant paid preparer errors exist. Accelerating W-2 filing deadlines. IRS estimates that it paid $5.8 billion in fraudulent identity theft refunds during the 2013 filing season. While we do not know the extent to which invalid EITC payments are the result of identity theft, IRS has reported that improper payments are a mix of unintentional mistakes and fraud. A common EITC error is misreporting income. IRS issues most refunds months before receiving and matching information returns, such as the W-2 “Wage and Tax Statement,” to tax returns. Treasury recently proposed to Congress that the W-2 deadlines be moved to January 31 to facilitate the use of earnings information in the detection of noncompliance. In August 2014, we recommended that IRS estimate the cost and benefits of options to implement pre-refund matching using W-2 data. Because any change could impose burdens on employers and taxpayers as well as create additional costs to IRS for systems and process changes, Congress and other stakeholders would need information on this impact to fully assess any potential changes. Broadening math error authority. IRS has statutory authority—called math error authority—to correct certain errors, such as calculation mistakes or omitted or inconsistent entries, during tax return processing of EITC claims. According to the Treasury Inspector General for Tax Administration, IRS has math error authority to address some erroneous claims, but additional authority to systematically disallow certain erroneous EITC claims with unsupported wages could reduce improper payments. Treasury has proposed expanding IRS's authority to permit it to correct errors in cases where information provided by the taxpayer does not match information in government databases, among other things. Expanding such authority—which at various times we have suggested Congress consider—could help IRS correct additional errors and avoid burdensome audits and taxpayer penalties. IPERIA is the latest in a series of laws aimed at reducing improper payments.
IPERIA directs OMB to annually identify a list of high-priority programs for greater levels of oversight and review, including establishing annual targets and semi-annual or quarterly actions for reducing improper payments. IPERIA also enacted into law a Do Not Pay initiative, elements of which already were being developed under executive branch authority. The Do Not Pay initiative is a web-based, centralized data-matching service that allows agencies to review multiple databases to determine a recipient's award or payment eligibility prior to making payments. Similarly, the DATA Act calls on Treasury to establish a data analysis center, or to expand an existing service, to provide data, analytic tools, and data-management techniques for preventing or reducing improper payments. Effective implementation of the DATA Act and the use of data analytic tools could help agencies to detect, reduce, and prevent improper payments. In addition to these legislative initiatives, OMB has continued to play a key role in the oversight of government-wide improper payments. OMB established guidance for federal agencies on reporting, reducing, and recovering improper payments as required by IPIA, as amended, and on protecting privacy while reducing improper payments with the Do Not Pay initiative (see Office of Management and Budget, Appendix C to Circular No. A-123, Requirements for Effective Estimation and Remediation of Improper Payments, OMB Memorandum M-15-02 (Washington, D.C.: Oct. 20, 2014); Financial Reporting Requirements, OMB Circular No. A-136 (revised 2014); and Protecting Privacy while Reducing Improper Payments with the Do Not Pay Initiative, OMB Memorandum M-13-20 (Washington, D.C.: Aug. 16, 2013)). OMB's updated guidance on estimating improper payments directs agencies to report on the causes of improper payments using more detailed categories than previously required, such as program design issues or administrative errors at the federal, state, or local agency level. As we previously reported, detailed analysis of the root causes of improper payments can help agencies to identify and implement targeted corrective actions. However, not all agencies reported improper payment estimates for all of the programs and activities they identified as susceptible to significant improper payments. Specifically, two federal agencies did not report estimated improper payment amounts for four risk-susceptible programs. For example, HHS did not report an improper payment estimate in fiscal year 2014 for its Temporary Assistance for Needy Families (TANF) program, which had program outlays of about $16.3 billion. Furthermore, IPERA established a requirement for agency IGs to report annually on agencies' compliance with the criteria contained in IPERA. Under OMB implementing guidance, these reports should be completed within 180 days of the publication of the federal agencies' annual performance and accountability reports (PAR) or agency financial reports (AFR).
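Returning briefly to the Do Not Pay initiative described above: in essence it is a pre-payment data match in which a proposed payee is checked against databases of parties that should not be paid before a payment is certified. The sketch below is a purely illustrative simplification; the list names, identifiers, and matching rule are assumptions for the example, not the actual data sources or interface of the Treasury-operated service.

```python
# Hypothetical exclusion data keyed by taxpayer identification number (TIN);
# real matching also involves names, dates of death, debarment records, and more.
EXCLUSION_LISTS = {
    "deceased_individuals": {"111-22-3333"},
    "debarred_contractors": {"98-7654321"},
    "delinquent_federal_debtors": {"123-45-6789"},
}

def do_not_pay_check(payee_tin):
    """Return the names of any exclusion lists that match the payee's TIN.
    An empty result means no match was found and payment can proceed to certification."""
    return [name for name, tins in EXCLUSION_LISTS.items() if payee_tin in tins]

matches = do_not_pay_check("123-45-6789")
if matches:
    print("Hold payment for review; matched lists:", matches)
else:
    print("No match; proceed to payment certification.")
```

In practice, a match typically triggers review and adjudication rather than automatic denial, since matches can be false positives.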
According to IPERA, if a program is found to be noncompliant in a fiscal year, the agency must submit a plan to Congress describing the actions that the agency will take to bring the program into compliance; if the program is noncompliant for 2 consecutive fiscal years and OMB determines that additional funding would help the agency improve, the agency and OMB may take steps to transfer or request additional funding for intensified compliance efforts; and if the program is noncompliant for 3 consecutive years, the agency must submit to Congress a reauthorization proposal for each noncompliant program or activity or any proposed statutory changes the agency deems necessary to bring the program or activity into compliance. In December 2014, we reported on agency compliance with the criteria contained in IPERA for fiscal year 2013, as reported by IGs. We found that the most common instances of noncompliance as reported by the IGs related to two criteria: (1) publishing and meeting improper payment reduction targets and (2) reporting improper payment estimates below 10 percent. For fiscal years 2012 through 2014, we also analyzed IG reports and agency PARs or AFRs and identified five programs with improper payment estimates greater than $1 billion that have been noncompliant with at least one of these criteria for 3 consecutive years, as shown in table 8. These five programs accounted for approximately $75.9 billion, or 61 percent of the fiscal year 2014 government-wide improper payment estimate. In addition to the legislative criteria, various IGs reported deficiencies in their most recent annual compliance reports, including risk assessments that may not accurately assess the risk of improper payments and estimation methodologies that may not produce reliable estimates. Similarly, we recently reported on weaknesses in improper payment risk assessments at the Department of Energy and in the estimating methodology for DOD's TRICARE program. In addition to the challenges that we and the IGs reported, some agencies reported in their fiscal year 2014 AFRs that program design issues could hinder efforts to estimate or recapture improper payments. Coordination with states. HHS cited statutory limitations for its state-administered TANF program, which prohibited it from requiring states to participate in developing an improper payment estimate for the program. Despite these limitations, HHS reported that it had taken actions to assist states in reducing improper payments, such as working with states to analyze noncompliance findings from audits related to TANF and requiring more accurate information about the ways states used TANF block grants. Recovery auditing. USDA reported that section 281 of the Department of Agriculture Reorganization Act of 1994 precluded the use of recovery auditing techniques. Specifically, the agency reported that section 281 provides that 90 days after the decision of a state, a county, or an area committee is final, no action may be taken to recover the amounts found to have been erroneously disbursed as a result of the decision, unless the participant had reason to believe that the decision was erroneous. This statute is commonly referred to as the Finality Rule, and according to USDA, it affects the Farm Service Agency's ability to recover overpayments. With outlays for major programs, such as Medicare and Medicaid, expected to increase over the next few years, it is critical that actions are taken to reduce improper payments.
In addition to agencies’ efforts, legislation, OMB guidance, and auditor oversight of agency spending and related internal controls have been important factors in addressing improper payments. There is considerable opportunity here to achieve cost savings without reducing or detrimentally affecting the valuable programs that serve our citizens. For this reason, we will continue to focus attention on improper payments to assist Congress in ensuring that taxpayer dollars are adequately safeguarded and used for their intended purposes. Chairman Enzi, Ranking Member Sanders, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer questions. For further information on issues of fragmentation, overlap, duplication or cost savings, please contact Orice Williams Brown, Managing Director, Financial Markets and Community Investment, who may be reached at (202) 512-8678 or williamso@gao.gov; or A. Nicole Clowers, Director, Financial Markets and Community Investment, who may be reached at (202) 512-8678 or clowersa@gao.gov. For information on improper payment issues, please contact Beryl H. Davis, Director, Financial Management and Assurance at (202) 512-2623 or davisbh@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Improper Payments: TRICARE Measurement and Reduction Efforts Could Benefit from Adopting Medical Record Reviews. GAO-15-269. Washington, D.C.: February 18, 2015. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Medicaid: Additional Federal Action Needed to Further Improve Third- Party Liability Efforts. GAO-15-208. Washington, D.C.: January 28, 2015. Identity and Tax Fraud: Enhanced Authentication Could Combat Refund Fraud, but IRS Lacks an Estimate of Costs, Benefits and Risks. GAO-15-119. Washington, D.C.: January 20, 2015. Improper Payments: DOE’s Risk Assessments Should Be Strengthened. GAO-15-36. Washington, D.C.: December 23, 2014. Improper Payments: Inspector General Reporting of Agency Compliance under the Improper Payments Elimination and Recovery Act. GAO-15-87R. Washington, D.C.: December 9, 2014. Federal Data Transparency: Effective Implementation of the DATA Act Would Help Address Government-wide Management Challenges and Improve Oversight. GAO-15-241T. Washington, D.C.: December 3, 2014. Identity Theft: Additional Actions Could Help IRS Combat the Large, Evolving Threat of Refund Fraud. GAO-14-633. Washington, D.C.: August 20, 2014. Medicare Program Integrity: Increased Oversight and Guidance Could Improve Effectiveness and Efficiency of Postpayment Claims Reviews. GAO-14-474. Washington, D.C.: July 18, 2014. Improper Payments: Government-Wide Estimates and Reduction Strategies. GAO-14-737T. Washington, D.C.: July 9, 2014. IRS Correspondence Audits: Better Management Could Improve Tax Compliance and Reduce Taxpayer Burden. GAO-14-479. Washington, D.C.: June 5, 2014. Medicaid Program Integrity: Increased Oversight Needed to Ensure Integrity of Growing Managed Care Expenditures. GAO-14-341. Washington, D.C.: May 19, 2014. 2014 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-14-343SP. Washington, D.C.: April 8, 2014. Paid Tax Return Preparers: In a Limited Study, Preparers Made Significant Errors. GAO-14-467T. Washington, D.C.: April 8, 2014. 
Medicare Information Technology: Centers for Medicare and Medicaid Services Needs to Pursue a Solution for Removing Social Security Numbers from Cards. GAO-13-761. Washington, D.C.: September 10, 2013.
Medicare Program Integrity: Increasing Consistency of Contractor Requirements May Improve Administrative Efficiency. GAO-13-522. Washington, D.C.: July 23, 2013.
2013 Annual Report: Actions Needed to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-13-279SP. Washington, D.C.: April 9, 2013.
Medicaid Integrity Program: CMS Should Take Steps to Eliminate Duplication and Improve Efficiency. GAO-13-50. Washington, D.C.: November 13, 2012.
Medicare Program Integrity: Greater Prepayment Control Efforts Could Increase Savings and Better Ensure Proper Payment. GAO-13-102. Washington, D.C.: November 13, 2012.
2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Paid Tax Return Preparers: In a Limited Study, Chain Preparers Made Significant Errors. GAO-06-563T. Washington, D.C.: April 4, 2006.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | As the fiscal pressures facing the government continue, so too does the need for executive branch agencies and Congress to improve the efficiency and effectiveness of government programs and activities. Such opportunities exist throughout the government. GAO reports annually to Congress on federal programs, agencies, offices, and initiatives (both within departments and government-wide) that are fragmented, overlapping, or duplicative as well as opportunities for cost savings or enhanced revenues. One area that GAO has highlighted as offering the potential for significant cost savings is improper payments, which are payments that should not have been made or were made in the incorrect amount. This statement discusses the status of (1) actions taken and remaining opportunities to address fragmentation, overlap, and duplication issues, and achieve other financial benefits as identified in GAO's 2011-2014 annual reports; and (2) efforts to address government-wide improper payment issues. GAO reviewed and updated prior work and recommendations on issues of fragmentation, overlap, duplication, cost savings, and improper payments. GAO also reviewed reports of inspectors general and agency financial reports. The executive branch and Congress have made progress in addressing the approximately 440 actions across 180 areas that GAO identified in its past annual reports. These issues span the range of government services and programs, from the Medicare and Medicaid programs to transportation programs to weapon systems acquisitions. As of November 19, 2014, 29 percent of these actions were addressed, 44 percent were partially addressed, and 22 percent were not addressed.
Executive branch and congressional efforts to address these actions over the past 4 years resulted in over $20 billion in financial benefits, with about $80 billion more in financial benefits anticipated in future years. Although progress has been made, fully addressing all the remaining actions identified in GAO's annual reports could lead to tens of billions of dollars of additional savings, with significant opportunities for improved efficiencies, cost savings, or revenue enhancements in the areas of defense, information technology, education and training, health care, energy, and tax enforcement. Sustained leadership by Congress and the executive branch is necessary to achieve this goal. Efforts to reduce improper payments could result in significant cost savings. For the first time in recent years, the government-wide improper payment estimate significantly increased—to $124.7 billion in fiscal year 2014, up from $105.8 billion in fiscal year 2013. This increase of almost $19 billion was primarily due to estimates for Medicare, Medicaid, and the Earned Income Tax Credit, which account for over 76 percent of the government-wide estimate. GAO has made numerous recommendations that, if effectively implemented, could improve program management and help reduce improper payments in these programs. Examples include improving the use of prepayment edits in Medicare and requiring states to audit Medicaid payments to and by managed care organizations. Recent laws and guidance have focused attention on the issue of improper payments. For example, the Improper Payments Elimination and Recovery Improvement Act of 2012 enacted into law elements of the Do Not Pay initiative, which is a web-based, centralized data matching service that could help prevent improper payments. However, agencies continue to face challenges, such as statutory limitations and compliance issues, in reducing improper payments. |
VHA’s National Patient Safety Improvement Handbook identifies key staff involved in the RCA process, establishes minimum requirements for conducting RCAs, and outlines the RCA process. VISNs are regional systems of care that oversee the day-to-day functions of VAMCs that are within their network. Each VAMC is assigned to one of VA’s 21 VISNs. Within VHA, NCPS supports the RCA process VHA-wide as part of its broader efforts to reduce and prevent inadvertent harm to patients as a result of their care. NCPS staff categorize and analyze RCA data, and provide training and education for VAMCs on the RCA process. According to VHA policy, NCPS is also responsible for disseminating important information learned from RCAs to VAMCs. NCPS reports to the Assistant Deputy Under Secretary for Health for Quality, Safety, and Value, but also works with other VHA offices, including the Office of the Deputy Under Secretary for Health for Operations and Management, which directs operations at the VISN and VAMC levels. At the VISN level, patient safety officers may provide additional oversight of the RCA process and disseminate information from NCPS to the VAMCs within their networks. Each VAMC has a patient safety manager who facilitates the RCA process at the local level. An RCA may be required by VHA policy if a VAMC’s initial review of an adverse event finds that there is a risk to the safety of veterans, based on the severity of the event and its likelihood of recurrence. VHA requires that each VAMC complete a minimum of eight RCAs each fiscal year, four of which must be on individual adverse events. The other four RCAs can be a combination of individual RCAs and aggregated RCAs, the latter of which review a group of similar adverse events to identify common causes and actions to prevent future occurrences. VHA requires that VAMCs conduct aggregated RCAs on three types of adverse events— falls, adverse drug events, and missing patients—to the extent that they occur in a given year. All RCA-related information is required to be entered into VHA’s centralized RCA reporting system—WebSPOT, a software application within VHA’s Patient Safety Information System. WebSPOT is the means by which RCA information is provided to NCPS and to VISN patient safety officers. Information obtained through the RCA process is protected and confidential, according to federal law, and cannot be used to inform an adverse action or privileging action against a provider. Therefore, the RCA process is referred to as a protected process. VAMCs use the RCA process to examine whether a systems or process issue caused an adverse event. Figure 1 provides an overview of the RCA process at VAMCs. Adverse event occurs. The RCA process at a VAMC begins with the recognition of an adverse event. At the VAMC, the patient safety manager receives information from VAMC staff about an adverse event that occurs at the VAMC. To determine if an RCA is required, the patient safety manager evaluates the event using VHA’s safety assessment code matrix to score the severity of the event and its likelihood of recurrence on a scale of 1 (lowest risk) to 3 (highest risk). As directed by VHA policy, adverse events with a score of 3 always require an RCA. VAMCs have discretion in determining whether to conduct RCAs on adverse events with scores of 1 or 2. VAMC conducts RCA. After determining the need for an RCA, the VAMC director convenes a multidisciplinary team of VAMC staff to identify root causes and actions to be taken with associated outcome measures. 
VHA policy states that those staff directly involved in the adverse event cannot participate on the RCA team; however, the RCA team may interview these staff as part of its investigation to obtain their perspectives on the event that occurred and suggestions for preventing its recurrence. The RCA team is required to develop a report, which includes a description and flowchart of the adverse event, identifies one or more root causes, and includes actions to be taken with associated outcome measures. Actions describe VAMC-level changes to reduce or eliminate future occurrences of similar adverse events. Each action is also required to have at least one outcome measure—a specific, quantifiable, and time-bound means by which responsible staff can determine the extent to which the action has been taken to address the root cause. For example, in the case of an overdose of an anesthesia medication from a pump that held an unsafe amount of medication, the action might be to use a different type of pump that holds less medication and prevents an accidental overdose; an outcome measure might be to measure patient outcomes 1 year later to ensure that no such overdoses occurred. Leadership reviews/approves. Upon completion of the RCA report, the RCA team presents its findings to VAMC leadership. The completed RCA report is required to be signed by the VAMC director within 45 days of the determination of the need for an RCA. The date of the director's signature is the date the RCA is considered complete. The patient safety manager then submits the completed report to NCPS through WebSPOT. VAMC implements RCA actions. After an RCA report is submitted to NCPS, patient safety managers follow up with VAMC staff on the implementation of identified actions, and, after implementation, evaluate the effectiveness of those actions in addressing the identified root causes. Patient safety managers also update WebSPOT with the actual implementation date of each action. If a VAMC does not implement an action, the patient safety manager can indicate in WebSPOT that the action was not implemented and the reason why. VAMCs may not implement certain actions identified by the RCA team for several reasons, including funding constraints and other unforeseen complications, like building design limitations. After implementation, patient safety managers update WebSPOT to add any comments associated with implementation, as well as information about the effectiveness of each action in addressing identified root causes on a five-point scale from "much worse" to "much better." In fiscal year 2014, VAMCs most commonly rated RCA actions as having made the related system or process "better" or "much better." Upon receipt of a completed RCA report, NCPS staff categorize key aspects, such as the type of adverse event, location of the event, corrective actions, and outcome measures. NCPS staff also categorize RCA actions according to an action strength hierarchy of stronger, intermediate, or weaker. (See table 1 for descriptions of stronger, intermediate, and weaker actions.) NCPS recommends using stronger or intermediate actions to the extent possible to improve the likelihood that actions will remove human error from processes and be more successful in addressing the root causes of an adverse event. About two-thirds (68 percent) of all actions resulting from RCAs in fiscal year 2014 were categorized as stronger or intermediate.
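To illustrate how the safety assessment code described above drives the decision to conduct an RCA, the following is a minimal sketch. The way severity and likelihood combine into a single 1 to 3 score is assumed here for illustration (the higher of the two ratings); VHA's actual safety assessment code matrix is more detailed, so this should not be read as the agency's implementation.

# Illustrative sketch of the RCA decision rule described in this report.
# Assumption: the combined score is taken as the higher of the severity and
# likelihood ratings; VHA's actual matrix is more detailed.

def safety_assessment_code(severity: int, likelihood: int) -> int:
    """Combine severity and likelihood ratings (1 = lowest risk, 3 = highest risk)."""
    return max(severity, likelihood)

def rca_required(severity: int, likelihood: int) -> bool:
    """A score of 3 always requires an RCA; scores of 1 or 2 are at the VAMC's discretion."""
    return safety_assessment_code(severity, likelihood) == 3

print(rca_required(3, 2))  # True: an RCA must be conducted
print(rca_required(2, 1))  # False: the VAMC decides whether to conduct an RCA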
Total completed RCAs (both individual and aggregated) at VAMCs decreased in each of the past 5 fiscal years. Overall, from fiscal years 2010 through 2014, the total number of RCAs completed at VAMCs decreased by 18 percent—from 1,862 in fiscal year 2010 to 1,523 in fiscal year 2014. (See fig. 2.) Individual RCAs accounted for 88 percent of the decrease during this time period. VHA's NCPS officials told us they are not certain why the number of completed RCAs has decreased over time, especially in light of an increase in reports of adverse events over the past 5 fiscal years. Specifically, our analysis of adverse event reports in WebSPOT shows that they increased by 7 percent in the past 5 fiscal years (from 109,951 in fiscal year 2010 to 117,136 in fiscal year 2014). An increase in reports does not necessarily mean that there should also be an increase in the number of RCAs conducted, as it is possible that the safety assessment code score was not high enough to require an RCA, giving the VAMC the discretion to address the adverse event through other available processes. However, NCPS officials told us they have not conducted an analysis to determine the contributing factors to the decrease. Without further analysis, it is unclear whether an increase in adverse event reports at the same time that the number of completed RCAs is decreasing is a cause for concern. NCPS's lack of analysis is not consistent with federal internal control standards, which state that control activities should include comparisons and assessments of different sets of data so that analyses of the relationships can be made and appropriate actions taken. NCPS officials told us they were aware of the decrease in completed RCAs, but have not conducted an analysis of the decrease because it is difficult to determine causal relationships between many possible contributing factors. Although they have not conducted an analysis, NCPS officials suggested possible contributing factors to the decrease in completed RCAs, including: (1) a change in the culture of safety at VAMCs; (2) VAMCs using alternative processes to address adverse events in place of RCAs; and (3) an increasing number of VAMCs conducting the minimum of four individual RCAs each fiscal year. Change in the culture of safety at VAMCs. NCPS officials stated that they have observed a change in the culture of safety in recent years in which staff feel less comfortable reporting adverse events than they did previously. Officials added that this change is reflected in NCPS's periodic survey on staff perceptions of safety; specifically, 2014 scores showed decreases from 2011 on questions measuring staff's overall perception of patient safety, as well as decreases in perceptions of the extent to which staff work in an environment with a nonpunitive response to error. As previously noted, however, the number of adverse event reports has been increasing, despite NCPS officials' observation of a change in the culture of safety. VAMCs' use of alternative processes. NCPS officials told us that VAMCs sometimes choose alternative processes, such as those based on Lean methods, to address adverse events when an RCA is not required. However, VHA is unaware how many VAMCs use these alternative processes.
From fiscal year 2009 through fiscal year 2014, VHA trained over 20,000 staff on the use of Lean methods, but an official from the VA Center for Applied Systems Engineering—the VHA office that conducted the trainings—told us VHA has not conducted any follow-up to determine how these methods are being applied at VAMCs. The official added that, after training, it is up to VAMC leadership to implement Lean methods in their VAMCs, and that the Center for Applied Systems Engineering began working with NCPS about a year ago to begin aligning the RCA process with Lean methods. The lack of follow-up on the use of alternative processes is not consistent with standards for internal control. Without information on the extent to which VAMCs are using alternative processes like Lean methods in place of RCAs, NCPS has limited awareness of the extent to which VAMCs are addressing the root causes of adverse events. Three of the four VAMCs in our review completed fewer RCAs in fiscal year 2014 compared to fiscal year 2010. Officials at one of these VAMCs told us the reason they had completed fewer RCAs was that the VAMC director supported the use of a Lean method to understand and act on the root cause of an adverse event when an RCA was not required. Officials at this VAMC also told us that they thought their Lean method was sometimes more appropriate for reviewing low-severity events because it yielded similar results to an RCA and allowed for a broader, more complete view of the issue being examined. NCPS officials told us they support VAMCs' use of these alternative processes when appropriate, but acknowledged a loss of information, as the results of these processes are not required to be entered into WebSPOT or otherwise shared with NCPS. Increasing numbers of VAMCs conducting the minimum of 4 individual RCAs each fiscal year. NCPS officials told us they were aware that by setting a requirement in 2007 that VAMCs conduct a minimum of 4 individual RCAs each fiscal year, VAMCs that had previously completed many more than 4 might decrease the number of individual RCAs they completed over time. Our analysis of RCA data shows that from fiscal years 2010 through 2014, the number of VAMCs completing more than 4 individual RCAs declined by 8 percent (from 135 to 124 VAMCs). In addition, the number of VAMCs completing exactly 4 individual RCAs in this time period more than doubled, from 4 VAMCs in fiscal year 2010 to 10 VAMCs in fiscal year 2014. All 10 of these VAMCs completed more than 4 individual RCAs in fiscal year 2010, with totals ranging from 5 to 14 individual RCAs. Officials stated that the selection of 4 individual RCAs as a minimum (as well as the selection of 8 as a minimum total of individual and aggregated RCAs) was arbitrary but seemed reasonable. They expressed concern that raising the annual individual RCA minimum requirement may result in lower-quality RCAs. Because NCPS has not conducted an analysis to understand the relationship between the decrease in RCAs and possible contributing factors, such as the increase in adverse event reports and use of alternative processes, it is unclear whether the decrease indicates a negative trend in patient safety at VAMCs or a positive one. For example, the decrease could indicate a negative trend of VAMCs not reporting severe adverse events that would require RCAs, or a positive trend reflecting fewer severe adverse events occurring.
Moreover, without complete information on the extent to which VAMCs are using alternative processes to address the root causes of adverse events and the results of those processes, NCPS lacks important data that may be helpful in better identifying trends and system-wide patient safety improvement opportunities. NCPS and VISN patient safety officers oversee the RCA process by monitoring each VAMC’s compliance with RCA requirements, including by reviewing RCA information in WebSPOT and conducting site visits. Reviewing RCA information in WebSPOT. NCPS conducts quarterly reviews of RCA information in WebSPOT to monitor VAMCs’ progress toward meeting annual RCA requirements. NCPS monitors, for example, each VAMC’s progress toward completing the required number of individual and aggregated RCAs for the fiscal year. Our analysis of WebSPOT data shows that, from fiscal year 2010 through fiscal year 2014, almost all VAMCs completed the minimum number of RCAs required each year: an average of 98 percent of VAMCs completed four or more individual RCAs, and an average of 96 percent of VAMCs completed eight or more total RCAs. NCPS officials told us that their review of WebSPOT information also provides insight into the effectiveness of a VAMC’s RCA process. NCPS submits quarterly reports of VAMCs’ progress to the Deputy Under Secretary for Health for Operations and Management. NCPS officials told us that when they find that a VAMC has not met the annual requirement for the number of completed RCAs, they may contact the VAMC’s patient safety manager to ask if barriers to the RCA process exist. Officials said that, in one such instance, the patient safety manager at a VAMC that had not completed the required number of RCAs told NCPS that the medical center director was not supportive of the RCA process. According to NCPS officials, in situations such as this they may then contact the VAMC’s leadership to remind them of the importance of completing RCAs and of the benefits to the entire system of having complete information in WebSPOT, and to offer their assistance. VISN patient safety officers we spoke with told us that they also monitor VAMCs’ compliance with RCA requirements through reviews of RCA information in WebSPOT, and by meeting with VAMC patient safety managers. Conducting site visits to VAMCs. NCPS officials said they may conduct a site visit to provide consultation and feedback to a VAMC that appears to be encountering challenges in meeting RCA requirements, such as completing individual RCAs within 45 days. NCPS site visits can also include an examination of other aspects of the RCA process, including reviewing a sample of RCAs to examine the assignment of safety assessment scores, the strength of corrective actions, and the implementation status of the actions. Officials stated that the 12 to 20 site visits they conduct each year are the most valid way for them to verify the implementation of RCA actions because they provide NCPS with the ability to observe implemented activities and the effectiveness of RCA-based improvements. NCPS officials told us that they visit VAMCs at the request of the VAMC director or as participants in a visit made by other VHA offices, including the Deputy Under Secretary for Health for Operations and Management. 
In addition to NCPS, patient safety officers at three of the four VISNs in our review told us that they also conduct annual site visits to some or all VAMCs in their networks to assess implementation of RCA actions and to consult with VAMC patient safety managers. In addition to monitoring compliance, NCPS uses RCA information to inform system-wide initiatives to improve patient safety. Not all initiatives are based solely on RCAs, but officials told us that RCAs are a contributing factor to NCPS’s larger patient safety improvement efforts. Officials told us that they focus their initiatives on problems that pose the greatest risk to patients or are the most prevalent in VA’s health care system, such as suicide. Officials explained that their choice of which initiative to pursue is determined by what will have the greatest impact on a problem. Examples of NCPS’s initiatives include Patient Safety Alerts and Advisories, topic summaries, and Clinical Team Training. Patient Safety Alerts and Advisories. Patient Safety Alerts and Advisories are urgent notifications sent to VAMCs that contain a description of a safety issue, instructions for implementing actions to prevent recurrence of the problem, and due dates for completion of actions. NCPS officials told us that alerts and advisories can come from several sources, including reports from VAMCs, other VHA offices, and medical device manufacturers. Patient Safety Alerts and Advisories are developed by NCPS and then issued by the VHA Deputy Under Secretary for Health for Operations and Management. For example, VHA issued a Patient Safety Alert after a patient in a VAMC behavioral health unit hanged himself from an air conditioning vent. The RCA team recommended a structural change to the vents to prevent recurrence, which VHA then required to be implemented at all VAMCs. NCPS also tracks the date that VAMCs completed implementation of actions. From fiscal year 2010 through fiscal year 2014, NCPS has developed 57 alerts and 7 advisories. Topic summaries. Officials told us that NCPS may issue an RCA topic summary if they identify a trend in adverse events or RCAs in WebSPOT. An RCA topic summary provides background context for the relevant adverse event, discusses root causes that were identified through the RCAs conducted, and describes corrective actions taken by VAMCs. For example, NCPS officials told us that after their review of RCAs identified a trend in adverse events caused by the misidentification of patients, they determined that system-wide improvements were needed. NCPS prepared topic summaries on misidentification related to specimens and transporting patients, as well as a guidance document on patient wristbands, which included best practices for VAMCs. NCPS officials told us topic summaries are distributed to VAMCs as part of the agenda for monthly conference calls that NCPS conducts with patient safety staff at VAMCs and VISNs, and that they are also made available through NCPS’s internal website and via e-mail. From fiscal year 2010 through fiscal year 2014, NCPS has issued 12 topic summaries. NCPS may also determine the need for a topic summary on the basis of requests for WebSPOT searches from VAMC and VISN patient safety staff interested in knowing whether RCAs have been conducted for similar adverse events at other VAMCs. 
NCPS officials estimated that they conduct about 200 such searches annually, and that these searches provide VAMC and VISN staff with information on similar adverse events, such as the corrective actions identified at other locations to address the adverse event. According to officials, NCPS may determine through these searches that several locations are encountering similar patient safety issues, prompting the preparation of a topic summary. Clinical Team Training. NCPS implemented Clinical Team Training for surgical teams in 2007 following analysis of RCA information in WebSPOT that found communication failure to be a root cause or contributing factor in 75 percent of the more than 7,000 RCAs reviewed. The objective of Clinical Team Training is to enhance teamwork and overcome obstacles to effective communication across professional boundaries. The training curriculum includes 2 months of preparation by the VAMC; a day-long onsite learning session consisting of lectures, group interaction, and videos; and quarterly interviews of the clinical team to assess training implementation. One study found that surgical mortality decreased 11 percent more in VAMCs that received Clinical Team Training compared to those that had not received it (Julia Neily et al., "Association between Implementation of a Medical Team Training Program and Surgical Mortality," Journal of the American Medical Association, vol. 304, no. 15 (2010)). NCPS officials told us they have expanded Clinical Team Training beyond surgical teams, and have provided this training, for example, to teams in emergency departments, intensive care units, and inpatient behavioral health units. RCAs are an important tool for VAMCs to identify the systems or processes that contributed to an adverse event, and implement actions to address them. They are also an important contributor to NCPS initiatives to improve patient safety across VA's health care system. It is unclear whether the 18 percent decrease in total RCAs completed from fiscal year 2010 to fiscal year 2014 is a negative trend reflecting less reporting of serious adverse events, or a positive trend reflecting fewer serious adverse events that would require an RCA. VHA has not, as would be consistent with federal internal control standards, conducted an analysis to determine the relationship between data showing a decrease in RCAs and factors that may be contributing to this trend, including VAMCs' use of alternative processes, such as Lean methods, when RCAs are not required. Although the choice to use alternative processes may be appropriate, NCPS is not aware of the extent to which these processes are used, the types of events being reviewed, or the changes resulting from them. Without analyzing the reasons for declining RCAs and understanding the extent to which VAMCs use alternative processes and their results, NCPS has limited awareness of what VAMCs are doing to address the root causes of adverse events. Moreover, the lack of complete information may result in missed opportunities to identify needed system-wide patient safety improvements. To ensure that appropriate steps are being taken to address the root causes of adverse events within VHA, the Secretary of Veterans Affairs should direct the Under Secretary for Health to take the following two actions: Conduct an analysis of the declining number of completed RCAs within the VA health care system, including identifying contributing factors, and take appropriate actions to address them.
Determine the extent to which VAMCs are using alternative processes to address the root causes of adverse events when an RCA is not required, and collect information from VAMCs on the number and results of those alternative processes. We provided a draft of this report to VA for comment. In its written comments, reproduced in appendix I, VA generally agreed with our conclusions and concurred with our recommendations. In its comments, VA also provided information on an initial analysis it had conducted, as well as its plans for implementing each recommendation, with an estimated completion date of November 2015. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Janina Austin, Assistant Director; Jennie F. Apter; Frederick K. Caison; Christine Davis; Kaitlin McConnell; Vikki L. Porter; Emily Wilson; and Malissa G. Winograd made key contributions to this report. | Adverse events are incidents that pose a risk of injury to a patient as the result of a medical intervention or the lack of an appropriate intervention. VAMCs use the RCA process to identify and evaluate systems or processes that caused an adverse event, recommend changes to prevent the event's recurrence, and determine whether implemented changes were effective. GAO was asked to review VA's processes and procedures for responding to adverse events. In this report, GAO examined (1) the extent to which VAMCs used the RCA process to respond to adverse events and (2) how VHA oversees the RCA process and uses information from the process to make system-wide improvements. To conduct this work, GAO reviewed VHA policy and guidance documents, analyzed VHA data on RCAs completed from fiscal years 2010 through 2014, and interviewed officials from NCPS—the VHA office responsible for monitoring RCA data. GAO also analyzed local RCA data and interviewed officials from four VAMCs selected to provide variation in factors such as complexity and location. To address adverse events, Department of Veterans Affairs (VA) medical centers (VAMC) completed 18 percent fewer root cause analyses (RCA) in fiscal year 2014 compared to fiscal year 2010, and the Veterans Health Administration (VHA) has not analyzed the reasons for the decrease. VHA's National Center for Patient Safety (NCPS) officials told GAO they were aware of the decrease, but were not certain why the number of completed RCAs had decreased over time, especially in light of a 7 percent increase in reports of adverse events over the same time period. NCPS officials suggested several potential factors that could contribute to the decrease, including VAMCs' use of processes other than RCAs to address adverse events. However, NCPS is unaware of how many VAMCs use these other processes or their results. 
VHA's lack of analysis is inconsistent with federal internal control standards, which state that agencies should compare data to analyze relationships and take appropriate actions. Because NCPS has not conducted an analysis of the relationship between the decrease in RCAs and possible contributing factors, it is unclear whether the decrease indicates a negative trend in patient safety at VAMCs or a positive one. In addition, without understanding the extent to which VAMCs use alternative processes and their results, NCPS has limited awareness of what VAMCs are doing to address the root causes of adverse events. NCPS oversees the RCA process by monitoring VAMC compliance, and develops system-wide patient safety initiatives informed by RCA data. NCPS monitors each VAMC's compliance with requirements by reviewing RCA database information and conducting site visits. NCPS uses RCA information to inform system-wide patient safety initiatives, such as Patient Safety Alerts and Advisories—urgent notifications sent to VAMCs that describe a safety issue and include instructions and due dates for implementing actions to prevent recurrence. GAO recommends that VA (1) analyze the declining number of completed RCAs, including identifying the contributing factors and taking appropriate actions, and (2) determine the extent to which VAMCs are using alternative processes to address adverse events, and collect information on their results. VA concurred with GAO's recommendations. |
Climate changes, including rising temperatures and shifting patterns of rainfall, are expected to impact public health across the nation in a variety of ways. Though state and local governments have primary responsibility for protecting the public health in their jurisdictions, the federal government also plays an important role in supporting state and local efforts by, for example, providing health departments with technical support and other resources. The federal government has also taken other targeted actions to help prepare the nation for climate change impacts, such as by issuing a number of orders, actions, and plans to provide state and local decision makers with information they need to manage such impacts, including impacts to public health. Impacts from climate change in the United States have been observed, are projected to continue, and are likely to accelerate over the next several decades, with impacts varying considerably by region, according to assessments by the National Research Council and USGCRP. According to USGCRP's third NCA, observed impacts in the United States include increases in average temperatures and precipitation, as well as changes to precipitation extremes, with variation across regions (see table 1). These and other climate changes are projected to continue over this century and beyond, according to USGCRP's third NCA. The magnitude of climate change beyond the next few decades depends primarily on the amount of heat-trapping gases emitted globally, and how sensitive the Earth's climate is to those emissions, according to USGCRP's third NCA. According to USGCRP's third NCA, climate change is expected to impact human health in the United States by exacerbating some existing health threats and by posing new risks. For example, projected changes in temperature are expected to increase the length of pollen seasons, which could increase allergies and asthma episodes. According to USGCRP's third NCA, extreme weather events, which are expected to become more common with climate change, are linked with increases in injuries, deaths, and mental health problems, such as anxiety and posttraumatic stress disorder. Furthermore, according to this assessment, changes in the climate may contribute to the spread of vector-borne diseases that are transmitted to humans by animals, including invertebrate animals such as mosquitoes and ticks. Examples of vector-borne diseases that currently pose health risks in some regions of North America include chikungunya virus, dengue, Lyme disease, Rocky Mountain spotted fever, and West Nile virus. Table 2 summarizes these and additional risks that climate change poses to human health. Vector-borne diseases are transmitted by mosquitoes, ticks, and fleas. West Nile virus is one type of vector-borne disease and is most commonly transmitted to people by the bite of infected mosquitoes. First detected in North America in 1999, it has since spread to all states except Alaska and Hawaii, with outbreaks occurring every summer. Most people who are infected with West Nile virus will not develop symptoms; some infected people will develop a fever with other symptoms such as headaches and body aches, and a very small number of infected people—less than 1 percent—will develop a severe neurological illness that can result in paralysis or death. People who work or play outside are especially vulnerable because of greater exposures to mosquitoes. Habitats of some pathogen-carrying vectors may expand into previously unaffected regions, in part because of climate change.
Heat-related illnesses and deaths may result from heat stroke or heat-sensitive conditions such as cardiovascular disease, kidney disease, cerebrovascular disease, and other conditions exacerbated by exposure to extreme heat. The type and severity of health impacts that communities and individuals face from climate change will depend on a variety of factors, and not everyone is equally at risk, according to assessments by the National Research Council, USGCRP, and others. Populations of special concern include children, the elderly, those who are sick, those who are living in poverty, those who work outdoors, some communities of color, and Native American communities. According to the National Research Council and USGCRP's third NCA, key factors in determining health risks include the following: Location. Because climate change impacts are expected to vary across the country, people will face different risks depending on where they live, work, and travel. People located in cities may be at increased risk for heat-related illnesses (e.g., heat stroke or heat-sensitive conditions such as cardiovascular disease), for example, because land cover changes associated with urbanization, including increases in the amount of paved areas, can result in higher air temperatures compared to the surrounding rural areas, according to USGCRP's third NCA. Those who work outdoors, such as farmers, fishermen, firefighters, and utility workers, may be adversely affected by climate impacts because they have more frequent, intense, and longer exposures to the climate than the general public, according to CDC's website. For example, as extreme weather events such as floods become more frequent and severe, outdoor workers could be at increased risk of traumatic injury. Figure 1 provides examples of potential impacts by region. Why Are Children Especially Vulnerable to Some Health Risks from Climate Change? Children are more vulnerable than adults to some health risks from environmental hazards—including hazards exacerbated by climate change—because of differences in their biology and behavior. Children breathe more air relative to their body mass than adults do and their natural defenses are less developed, which makes them especially vulnerable to health impacts from decreased air quality. Asthma is one of the most common serious chronic diseases among children, and can be aggravated by poor air quality. Children also behave differently from adults, such as by spending more time outdoors and playing closer to the ground, which makes them especially vulnerable to mosquito and tick bites that can cause disease. Age. A person's age also plays a role in determining his or her vulnerability to health impacts, including those related to climate change, according to USGCRP's third NCA. For example, children suffer disproportionately from the effects of heat waves and other environmental hazards associated with climate change, according to USGCRP's third NCA. One reason is that children playing outside during heat waves may not be aware that they need to drink more water as a preventative measure, according to EPA. Older adults are also vulnerable to some climate-related impacts, according to USGCRP's third NCA. Specifically, this assessment states that limited mobility among older adults can increase their flood-related health risks, and that older people are at much higher risk of dying during extreme heat events. Adaptive capacity.
The extent to which people and communities have the capacity to successfully adapt to adverse events also affects the health risks they face, and is influenced by characteristics such as disabilities and socioeconomic status. For example, according to USGCRP’s third NCA, limited economic resources for adapting to or escaping from health-sensitive situations will place the poor at higher risk of health impacts from climate change than higher-income groups. In contrast, communities that have access to early warning systems, such as for forecasting and alerting people about impending heavy precipitation events and flooding, may be better positioned to reduce health risks from such adverse events. State, local, territorial, and tribal governments have primary responsibility for managing public health risks within their jurisdiction. Public health departments vary greatly in their size, responsibilities, and resource levels, among other factors. Activities that public health departments may undertake to help promote health and well-being include monitoring and investigating health problems, educating people about health issues, developing plans to support health efforts, and researching new solutions to health problems, among other activities. The federal government’s role in managing public health includes providing leadership through setting and communicating health-related policies, goals, and standards. Federal agencies also finance research and higher education, support state and local health department efforts, and support the development of data and decision support resources that decision makers can use to manage for risks to public health, including from climate change. For example, through its various programs, CDC provides technical and financial support to state and local health departments to enhance their capacity to monitor and promote public health, including preparing for the risks posed by climate change. The agency’s Climate and Health Program, established in 2009, supports state and local health department efforts to plan for and address the health risks posed by climate change. According to CDC, the program’s three core functions are to translate climate change science into health policy for action by health departments and communities, create decision support resources to build capacity, and serve as a credible leader in planning for the public health impacts of climate change. The federal government has undertaken a number of efforts to enhance the nation’s resilience to climate change impacts, such as strengthening federal agencies’ adaptation planning and providing states and localities with information for managing risks posed by climate change. In July 2014, we reported that investing in resilience—actions to reduce potential future losses rather than waiting for an event to occur and paying for recovery afterward—can reduce the potential impacts of climate-related events. To facilitate federal efforts, the federal government has issued the following orders, actions, and plans: Executive Order 13514. On October 5, 2009, the President issued an executive order calling for federal agencies to participate in the existing interagency Climate Change Adaptation Task Force. Based on the task force’s recommendations, the Council on Environmental Quality within the Executive Office of the President issued implementing instructions for the executive order, directing federal agencies to establish agency climate change adaptation policies, among other things. 
The President’s Climate Action Plan. In June 2013, the White House published a climate action plan detailing actions that federal agencies would take to prepare the nation for the impacts of climate change, among other goals. Executive Order 13653. On November 1, 2013, the President issued an executive order to help prepare the nation for the impacts of climate change. Among other things, the order called on certain federal agencies to provide information, data, and decision-support tools on climate preparedness and resilience in support of federal, regional, state, local, tribal, and other efforts to prepare for the impacts of climate change. It also established a State, Local, and Tribal Leaders Task Force on Climate Preparedness and Resilience to inform federal efforts to support climate preparedness and resilience. April 2015 Administrative Actions. On April 7, 2015, the White House announced a series of actions that the administration was taking to better understand, communicate, and address the health impacts of climate change, as well as commitments made by private sector entities and institutes of higher learning to further our knowledge in this area. Among other things, the administration expanded the resources available for analyzing the climate change impacts on health. (App. II provides a summary of these actions.) June 2015 Climate and Health Summit and Administrative Actions. On June 18, 2015, the White House hosted a summit on climate change and health, which included the participation of the President, the Surgeon General, and the HHS Assistant Secretary for Preparedness and Response. At the summit, the administration announced a set of actions to protect communities from the health impacts of climate change that cannot be avoided. (App. III provides a summary of these actions.) In addition, USGCRP has undertaken efforts to support scientific research with the goal of improving understanding of and response to climate change and its impacts on the United States. To help address climate change impacts on human health, USGCRP coordinates an Interagency Crosscutting Group on Climate Change and Human Health (CCHHG); officials from CDC, NIH, and NOAA chair this group. The mission of CCHHG is to promote and protect the nation’s public health by leading and coordinating federal scientific activities related to climate change and human health, from basic research through public health practice. CCHHG’s activities include working to address key gaps in understanding of the health–related impacts of climate change and developing informational resources. Federal agencies have undertaken activities to enhance understanding about the risks that climate change poses to public health, including supporting and conducting research on or related to these risks. Agencies have also provided some data and decision support resources, such as guidance and tools, for state and local officials and others to use in examining public health-related risks from climate change and potential actions to address these risks. Agencies have also communicated about such risks through reporting and outreach efforts to public health officials and the general public. To enhance understanding of the risks that climate change poses to public health, many of the federal agencies included in our review have supported and conducted research on or related to these risks. 
While governmentwide data on funding for such research is not available, NIH, which awards financial assistance for research, reports that it awarded about $6 million to support research on the health impacts of climate change in fiscal year 2014. This amount comprised a relatively small portion—about 0.025 percent—of the approximately $24 billion that NIH awarded for research that year. Some of this research originated from an NIH funding opportunity—a solicitation for exploratory research projects on the health impacts of climate change—through which the agency awarded a total of about $8.3 million for research from fiscal year 2011 through fiscal year 2014 for 21 projects. One of these projects, for example, examined the relationship between climate change and pediatric asthma. NIH officials told us that the agency has not issued additional funding opportunities for research on climate change. These officials said that they hoped researchers who received awards from this opportunity would be better positioned to submit proposals in the future through NIH's most common submission process, in which researchers submit unsolicited proposals based on the program interests of one or more of the agency's institutes or centers. In addition to NIH, other federal agencies, such as the National Aeronautics and Space Administration (NASA), have also conducted or supported research on or related to the risks that climate change poses to public health, including by making awards for relevant research projects and providing financial support for research teams or postdoctoral students conducting relevant work. For example, NASA made an award to aid in the development of climate change indicators related to heat waves in urban areas through its 2012 research announcement entitled "Research Opportunities in Space and Earth Sciences," which was intended to, among other things, facilitate the application of scientific knowledge to management decisions. Indicators developed through this program may be used by health officials to identify urban areas with increased vulnerabilities to health impacts due to a lack of cooling green spaces and other factors. Additionally, EPA sponsored a review of the effects of climate change on the indoor environment and health, which concluded, among other things, that climate change may worsen existing indoor environmental problems that are known to exacerbate illnesses such as asthma and allergies, and create new problems that have adverse health impacts. Officials from some of the federal agencies included in our review told us that while their agencies have conducted or supported research that is useful for advancing understanding of climate change impacts to public health, the link to this topic is often indirect. For example, federally funded research about health impacts from natural disasters, such as floods, does not always explicitly consider the associated impacts from climate change but can help advance understanding of these impacts, according to federal officials we spoke with. Appendix IV provides additional examples of research on or related to the risks of climate change to public health that some of the federal agencies included in our review have conducted or supported. Federal agencies have provided some data and decision support resources—such as guidance and tools—that state and local officials, and others, can use to examine public health-related risks from climate change and potential actions to address these risks.
Table 3 provides examples of these data and decision support resources. Users of these data and decision support resources may include public health officials or decision makers with responsibility for managing systems that are necessary for protecting public health, such as hospital administrators or wastewater management system operators, among others. Other users may include researchers, community organizations, or other interested individuals. The following are two key interagency mechanisms that federal agencies leverage to provide climate and health data and decision support resources to potential users: Data.gov. In April 2015, data.gov, the federal government's site for open data, launched a theme page on climate and health that provides information about data sources and tools maintained by federal agencies. The site includes links to data on climate, weather, and health from many federal agencies. One data source included on the theme page is CDC's National Environmental Public Health Tracking Network, which includes heat-related health data, including national data on the number of extreme heat days and future projections of extreme heat. Another data source included on the theme page is NOAA's Climate Data Online, which includes data on precipitation amounts, as well as a mapping tool and other decision support resources. The U.S. Climate Resilience Toolkit. The toolkit is an online portal that is intended to help individuals, communities (including tribal nations), and others respond to risks from climate change. Human health is one of the topics highlighted in the toolkit. One tool available in the toolkit is the Agency for Toxic Substances and Disease Registry's Social Vulnerability Index, which uses U.S. Census Bureau variables to identify communities that may need support in preparing for climate-related or other hazards or recovering from disasters, and includes a mapping feature and downloadable data. Another tool available in the toolkit is a software package developed by EPA that allows users to predict levels of disease-causing pathogens at specific beach sites, where outbreaks can result from climate change-related effects such as warming waters and intense precipitation. CDC has also developed some decision support resources intended to help state and local public health officials identify and address risks from climate change. In June 2014, CDC officials authored a journal article which described a five-step risk management framework that the agency developed for health officials to use in preparing for the health effects of climate change. In July 2014, CDC issued a more detailed guide for health departments on how to assess the vulnerability of their constituent populations to the health impacts of climate change—a critical step for health departments in planning for climate change risks to public health, according to CDC. The guide includes a case study detailing how CDC conducted a vulnerability assessment on heat impacts in the state of Georgia. In addition, CDC's National Environmental Public Health Tracking Program published a communication toolkit in 2012 that focuses on the relationship among climate change, public health, and extreme heat—which is the only climate change-related area for which the tracking network provides data. Among other things, the communication toolkit provides health officials and other potential users with key messages and tips for using social media to communicate effectively about this topic.
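As a simple illustration of the kind of heat-related indicator described above (for example, the tracking network's counts of extreme heat days), the following sketch counts the days in a series of daily maximum temperatures that meet or exceed a chosen threshold. The threshold value and sample data are hypothetical; actual indicators use location-specific definitions, so this is only a sketch of the general approach.

# Illustrative sketch: counting extreme heat days from daily maximum temperatures.
# The 95 degree threshold and the sample data are hypothetical; real indicators,
# such as those in CDC's tracking network, use location-specific definitions.

def count_extreme_heat_days(daily_max_temps_f, threshold_f=95.0):
    """Return the number of days at or above the threshold temperature (degrees Fahrenheit)."""
    return sum(1 for temp in daily_max_temps_f if temp >= threshold_f)

sample_week = [88.0, 91.5, 96.2, 99.1, 97.4, 93.0, 101.3]  # hypothetical daily highs
print(count_extreme_heat_days(sample_week))  # 4 days at or above 95 degrees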
Federal agencies in our review have communicated information about the risks that climate change poses to public health through reporting and outreach efforts directed at multiple audiences, such as public health officials and the general public. In several key communication efforts, federal agencies have collaborated to report on what is known about these risks. For example, in the third NCA published in May 2014, USGCRP reported on the impacts of climate change on the United States, including risks to human health. The report was the result of a large multiagency effort that included the 13 agencies participating in the USGCRP as well as other agencies that chose to support the production of the report. It includes a chapter on human health—prepared collaboratively by officials from several federal agencies participating in CCHHG, as well as experts from outside the federal government—summarizing what is known about how climate change threatens human health and well-being, populations that are at greatest risk, the extent to which public health actions can help address these risks, and opportunities to improve human health while combating climate change. CCHHG is expanding upon this information in a report that is intended to provide a comprehensive, evidence-based, and, where possible, quantitative estimation of observed and projected climate change-related health impacts in the United States. The report, referred to as the USGCRP Climate and Health Assessment, is being developed through a multiagency effort and is expected to be finalized in 2016. The American Public Health Association, with the support of CDC, recently reported on threats that climate change poses to human health, and how state and local public health departments have responded to these threats using CDC resources. In addition to preparing reports, federal agencies have also reported on the risks that climate change poses to public health through their websites and through social media, and federal officials have reported on these risks in peer-reviewed journal articles and other publications. Federal agencies have conducted outreach to inform public health officials, as well as the general public, about the risks that climate change poses to public health. Some ways in which federal agencies have recently done so include the following: In August 2014, HHS held a departmentwide briefing on climate change risks to public health. In December 2014, CDC addressed climate change in its Public Health Grand Rounds—a publicly available webcast that is intended to foster discussion on major public health issues. In addition, senior officials within HHS and EPA have also conducted outreach on certain occasions about the risks that climate change poses to public health. In April 2015, the Surgeon General spoke publicly about climate change impacts to health following a roundtable discussion on the topic with the President, the EPA Administrator, and others, and also used social media to solicit and respond to questions about health impacts from climate change. The EPA Administrator has also communicated about these risks to a variety of audiences, citing climate change as among the most significant threats to public health. App. V provides additional information about federal activities related to climate change risks to public health.
Selected state and local health departments included in our review have used a CDC climate and health award that addresses the risks that climate change poses to public health, as well as other federal resources, to address and plan for the public health risks from climate change. Sixteen state and two local health departments have used awards from CDC’s Climate Ready States and Cities Initiative to address and plan for the risks that climate change poses to public health. The initiative is the federal government’s primary investment in supporting state and local health departments in addressing the risks that climate change poses to public health, and is the only HHS financial resource that has been offered to state and local public health departments that directly targets these risks. In fiscal year 2014, Initiative awards to state and local health departments totaled $3.6 million, with individual awards averaging about $200,000. Figure 2 displays Initiative awardees as of fiscal year 2015. CDC’s Climate and Health Program administers the Climate Ready States and Cities Initiative. Under the initiative, CDC program staff are substantially involved in program activities, above and beyond routine grant monitoring. CDC activities for this program are described as providing ongoing guidance, resources, consultation, technical assistance, and training related to awardee activities. Under the initiative, CDC requires awardees to implement CDC’s Building Resilience Against Climate Effects (BRACE) framework—a five-step risk management process intended to help public health departments identify and prepare for the public health impacts of climate change by, among other things, incorporating atmospheric data and climate projections into public health planning. To implement the framework, CDC requires awardees to work with internal and external stakeholders to forecast climate trends, identify disease risks and vulnerable populations, and develop action plans for addressing these risks, among other things. Figure 3 describes the five steps of the framework and provides examples of activities that can be included in its implementation. CDC officials told us that they developed the BRACE framework to assist state and local health departments in preparing for the public health impacts of climate change. (App. VI provides information on the types of activities undertaken by awardees in implementing the BRACE framework, and app. VII provides information on activities by other selected state and local health departments). Under the initiative, CDC requires awardees to complete a range of activities that complement their work implementing the BRACE framework. For example, CDC requires that awardees take steps to increase awareness among the public and decision makers about the risks that climate change poses to public health, making use of available CDC resources. Awardees provided examples of the types of guidance and support that CDC’s Climate and Health Program has provided to assist them in implementing the BRACE framework and addressing climate and health information needs in their jurisdictions. For example, CDC created guidance for awardees on how to develop a Climate and Health Profile— a report detailing a jurisdiction’s climate-related exposures, health outcomes of concern, and vulnerabilities of certain populations—a required component of the first step of the BRACE framework. CDC officials told us that they plan to issue guidance describing how to approach each step of the BRACE framework. 
As of June 2015, CDC has issued two guidance documents on the first step of the framework. Awardees also told us about other types of support provided by CDC, including tools that CDC developed in response to awardees’ specific requests for assistance, such as a database of peer-reviewed literature on climate change and public health impacts and a graphic summarizing the impacts of climate change on public health. CDC has also organized several communities of practice among awardees as forums to discuss issues related to the implementation of the BRACE framework. According to CDC officials, some public health departments that have not received the Climate Ready States and Cities Initiative award have expressed interest in implementing the BRACE framework, and such departments are able to do so using the resources available on CDC’s website. Although awardees told us that they are in various stages of implementing the framework, they also noted that they have already observed a variety of benefits from the award. Specifically, awardees we spoke with told us that the CDC award has enabled them to work on climate change and health issues in a formalized way that would otherwise not have been possible given, for example, competing priorities and limited staff time. According to our analysis of awardee reports to CDC, nearly all 18 awardees have created a climate and health program with dedicated staff within their departments to work on this issue, although doing so is not a requirement of the award. Awardees also reported that the program allows them to address specific needs in their jurisdictions and consider certain areas of interest while implementing the framework. For example, some awardees in state health departments told us that they have provided subawards to a small number of local health departments in their jurisdictions to participate in the implementation of the BRACE framework, such as by providing feedback on the usefulness of materials developed by the state, or to develop their own initiatives on or related to climate change and public health. Officials in these states told us that the award allows staff in the local jurisdictions to spend time considering the risks that climate change poses to public health and how these risks may impact existing priorities in their health departments. Additionally, some awardees have identified specific risks that climate change poses to public health and are interested in exploring these issues in further detail. For example, some awardees are examining potential health effects related to climate change impacts on food security—that is, the availability and affordability of nutritious and quality foods. Furthermore, awardees told us that the CDC award has allowed their programs to build relationships and engage with partners, such as officials from other departments in their jurisdictions or regional partners from federal agencies, in ways that could not be accomplished without such awards or without a formalized climate and health program. For example, officials from the New York City Department of Health and Mental Hygiene told us that through the department’s climate and health program, they worked collaboratively with their regional National Weather Service office to study the appropriateness of the thresholds used to issue heat advisories and warnings.
According to these officials, the National Weather Service had been using a Heat Health Watch and Warning System in addition to heat index forecasts to determine when to issue heat advisories and warnings; however, health officials were concerned that this method was not sensitive enough for use in predicting public health outcomes during excessive heat events. The health department completed a retrospective study of heat-related deaths to evaluate metrics that could be used to estimate risks, and it found that maximum heat index is a useful metric for assessing public health risks due to hot weather in New York City. The officials said that they have changed the threshold at which warnings are set as a result of this study, and they have continued to develop a working relationship with the National Weather Service. State and local public health officials we interviewed reported leveraging a variety of other federal resources, including funding and information sources, to address and plan for the risks that climate change poses to public health. In most cases, these federal resources were not specifically designed for addressing and planning for these risks, but the resources could be used in ways that support such efforts. State and local public health officials we interviewed most commonly mentioned leveraging resources provided by CDC’s National Environmental Public Health Tracking Program to support their work in addressing and planning for the public health risks of climate change. CDC’s Tracking Program made awards to 25 states and one city to develop local tracking networks, analyze data on local environmental exposures and related health outcomes, and supply selected data to a national tracking network. The national network includes indicators on climate change, among other environmental hazards, related to extreme heat exposure. State and local health officials we interviewed reported that this award provides an important core source of funding for data infrastructure and environmental health surveillance activities in their jurisdictions, which can be leveraged to include activities such as monitoring patterns of heat-related illness. Awards provided to state and local health departments for the tracking program vary; in fiscal year 2014, awards totaled $22.6 million to participating states and localities, ranging from about $500,000 to $1.2 million, with an average award of about $870,000. State and local health officials also provided examples of how their jurisdictions have leveraged funding resources from other CDC programs to address and plan for the risks that climate change poses to public health. For example, state and local health officials told us that they have leveraged awards from CDC’s Public Health Emergency Preparedness program—which provides state, local, tribal, and territorial health departments across the country with resources to build public health preparedness capabilities—to help consider climate change in emergency preparedness planning or develop systems that can be used to monitor climate-related public health risks. Awards provided to state and local health departments vary; in fiscal year 2014, for example, awards ranged from $325,000 to approximately $42.5 million, with an average award of approximately $9.9 million and a total of about $611.8 million. 
Climate change is not a specific focus of the program, but CDC officials responsible for administering this program told us that awardees have flexibility in determining how to use the funds while meeting CDC’s requirements. State health officials also told us that they have used awards from CDC’s National Institute for Occupational Safety and Health to consider the impacts of climate change on worker safety and health, such as by monitoring heat-related illnesses and deaths among worker populations or specific industries. Officials from the National Institute for Occupational Safety and Health also noted that, while climate change is not a specific focus of the program, the award could be used to support activities in this area. Some public health officials we interviewed also reported leveraging federal resources from agencies other than CDC to address or consider the risks that climate change poses to public health. Specifically, officials provided examples of other awards and informational resources that they had used in their work on this issue. For example, an official from one state told us that the state used an award from the U.S. Geological Survey to assess the vulnerability and health risks of the state’s watersheds to flooding and drought under a changing climate. Some state and local health officials reported using the health and regional chapters of USGCRP’s third NCA as key sources of information. Some officials also told us that they have relied on a variety of NOAA information resources, such as those provided through NOAA’s National Centers for Environmental Information or through NOAA-funded Regional Integrated Sciences and Assessment teams. For example, one state health official told us that the state has relied on support from a NOAA Regional Integrated Sciences and Assessment team to translate technical information about climate change. In conducting work to address the risks that climate change poses to public health, local health officials from one jurisdiction also provided an example of using demographic information from the American Community Survey—an official U.S. Census Bureau survey that is part of the Decennial Census Program—to identify populations in its locality that are vulnerable to the health-related impacts of climate change. When asked to identify challenges related to their work planning for and addressing the risks that climate change poses to public health, state and local public health officials we interviewed identified challenges that we grouped into the three most frequently reported themes, noting that some of these challenges could be addressed by federal action, while others could not. According to state and local officials, they face challenges communicating about the risks that climate change poses to public health. Officials identified related opportunities for federal agencies to enhance public understanding of these risks. The officials also stated that they face challenges in identifying potential health risks of climate change, for example, as a result of research gaps. Officials said that federal agencies may be able to address this challenge by continuing to advance research and enhance decision support resources. Finally, state and local public health officials said they face other challenges that federal action may not be able to address, such as having insufficient data on health impacts in areas where agreements between states and hospitals limit access by health departments.
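Several of the NOAA resources mentioned above, including the Climate Data Online service described earlier in this report, make observational records available through public web services. The short Python sketch below illustrates one way an analyst might request daily precipitation data; it is only a sketch and relies on several assumptions: the station identifier and date range are placeholders, a free access token must be requested from NOAA, and the endpoint and parameter names shown should be verified against NOAA’s current Climate Data Online documentation.

```python
import requests

# NOAA's Climate Data Online (CDO) web service; a free access token must be
# requested from NOAA and supplied in the request header. Endpoint and
# parameter names should be verified against NOAA's current documentation.
BASE_URL = "https://www.ncdc.noaa.gov/cdo-web/api/v2/data"
TOKEN = "YOUR_CDO_TOKEN"  # placeholder

params = {
    "datasetid": "GHCND",               # daily summaries
    "stationid": "GHCND:USW00094728",   # placeholder station (Central Park, NY)
    "datatypeid": "PRCP",               # daily precipitation
    "startdate": "2014-06-01",
    "enddate": "2014-08-31",
    "units": "metric",
    "limit": 1000,
}

response = requests.get(BASE_URL, headers={"token": TOKEN}, params=params)
response.raise_for_status()

# Each returned record is treated as one station-day observation.
for record in response.json().get("results", []):
    print(record["date"], record["value"])
```

The same pattern can be adapted to other datasets, stations, and date ranges by changing the request parameters.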
According to our discussions with selected state and local health officials, officials face challenges communicating about the risks that climate change poses to public health, in part because of limited awareness about climate change as a public health issue. The officials also identified opportunities for federal agencies to address these communication challenges. Selected state and local health officials told us during interviews, site visits, and small group discussion sessions that they face challenges resulting from limited awareness about climate change as a public health issue within their own health departments, among other state and local partners, such as other agencies in their jurisdictions, and among the public. As we have previously found, public awareness can play an important role in the prioritization of climate change adaptation efforts. In addition, if public health officials are aware of the risks that climate change poses to public health, they can better assess the importance of these risks and allocate resources appropriately. However, health department leadership and staff are often not aware of the public health impacts of climate change, or do not understand how the issue fits into the health department’s priorities, according to state and local health officials. This observation is consistent with results from a 2012 NACCHO survey of 174 local health department officials, which showed that just over one-third of respondents thought that other relevant senior managers in their health departments were knowledgeable about the potential public health impacts of climate change. Some local health officials also reported that officials from other sectors of government in their jurisdictions, such as environmental agencies responsible for climate change adaptation planning efforts, have a limited awareness of climate change as a public health issue. This has resulted in health officials having limited involvement in climate change adaptation planning within their localities, according to officials we interviewed. Some officials believe that progress has been made in this area in recent years, but, according to our review of a 2014 analysis of states’ climate change adaptation planning activities, only about one-quarter of all states have incorporated public health considerations into their statewide climate change adaptation plans. In addition, health officials told us that stakeholders and the public have limited awareness about climate change as a public health issue, in part because climate change has historically been framed as an environmental issue. State and local health officials discussed climate change impacts on health as an emerging issue that they became aware of within the last decade, in part as a result of educational efforts of the American Public Health Association. In 2008, for example, the association made climate change a focus of its National Public Health Week, issuing communications to highlight climate change as a public health issue. State and local health officials also told us that they face challenges in communicating or enhancing awareness about projected changes to local climate that can impact public health because of the complexity of the issue. Specifically, officials told us that it is difficult to develop messages about climate change impacts on health because of uncertainties inherent in climate change projections. 
For example, officials from one state health department told us that the state has faced challenges describing climate projections for its jurisdiction in a way that is accurate but not overly technical, while adequately acknowledging the uncertainties of these projections. State and local health officials also said that it is challenging for them to communicate about the risks that climate change poses to public health because some of the potential effects have not yet been observed in their jurisdictions and, therefore, are not perceived by the public as risks. For example, officials in some states that do not frequently experience heat waves told us that they face challenges convincing their constituents that the risks of heat-related illnesses will increase. Officials told us that they have used a variety of strategies to attempt to interest their constituents and stakeholders in these risks, such as framing the issue as planning for emergency preparedness or severe weather events. However, state and local health officials find it difficult to communicate and bring attention to long-term issues, such as climate change, when there are more immediate public health concerns drawing attention, such as the 2014 Ebola virus outbreak. State and local health officials, as well as representatives from associations representing these officials, identified opportunities for federal action to help address challenges the officials face in communicating about the risks that climate change poses to public health. These opportunities were generally in two areas: enhancing public awareness on climate change as a public health issue, and providing guidance and tools on how to communicate about this issue. Concerning enhancing public awareness, state and local health officials, as well as representatives from associations representing these officials, told us that federal agencies could help address this challenge by taking a sustained leadership role in enhancing public and stakeholder awareness and understanding. Specifically, officials told us that a federally led public awareness campaign on this issue could assist in informing decision makers and the public. They also said a campaign could support the work of public health officials in addressing and planning for these risks and lend that work greater legitimacy. Officials particularly pointed to the need for a sustained leadership role from HHS and its component agencies, which could draw on the department’s experience engaging in previous successful public health campaigns, such as the campaign to reduce tobacco use. In a November 2014 report, the President’s State, Local, and Tribal Task Force on Climate Preparedness and Resilience also recommended actions that federal agencies should take to increase climate literacy and public awareness, among other things. These actions included coordinating federal communications on climate change to develop clear, consistent, and unified messages, and ensuring that communications resources are accessible to state and local governments. (See app. VIII for a summary of the task force’s health-related recommendations.) Federal officials, including those from HHS, told us that they are taking steps to enhance public awareness. For example, as previously mentioned, the White House held a climate change and health summit in June 2015, which included the participation of the U.S. Surgeon General, the HHS Assistant Secretary for Preparedness and Response, and the EPA Administrator.
In addition, CDC officials told us that through awards the agency provides, the FrameWorks Institute is developing a series of fact sheets for the general public that explain various impacts of climate change on human health. In its 2014 Climate Adaptation Plan, HHS reported that it considers climate change to be one of the top public health challenges of our time, and it noted that its Office of the Assistant Secretary for Health will develop a climate change communication and outreach strategy to, among other things, promote outreach and awareness among its stakeholders about climate change and its impact on public health. The plan further notes that outreach and communication to at-risk populations will be a significant part of this strategy, and that the department will leverage its comprehensive network of stakeholders involved in the receipt or delivery of health and human services to disseminate climate change and health information. HHS reported in its Strategic Sustainability Performance Plan that the strategy was to be developed by the fall of 2014. In March 2015, a senior HHS official from the HHS component responsible for developing this strategy told us that limited progress had been made in developing the strategy, and that the strategy was not anticipated to be formalized in a written document. However, in July 2015, HHS officials told us that they plan to refine and document the strategy over the next 12 months. Concerning guidance and tools, state and local health officials, as well as a representative from an association representing them, told us it would be helpful if federal agencies, including CDC, developed communications guidance on a number of topics. For example, they said they could benefit from guidance on how to frame climate change as a public health issue or communicate based on best practices from the social sciences. They also suggested the development of communication tools, such as talking points or training on how to communicate about this issue. Because protecting public health requires the participation of a variety of stakeholders—including state and local public health departments and other state and local entities—enhancing stakeholder awareness and understanding about climate change as a public health issue could bolster state and local preparedness for the risks that climate change poses. In addition, as noted earlier in this report, enhancing awareness among the public and decision makers about the risks that climate change poses to health is a requirement of the Climate Ready States and Cities Initiative. According to state and local health officials, including Initiative awardees, communication guidance and tools would help them enhance stakeholder awareness and understanding of the risks climate change poses to public health. CDC has developed limited guidance for state and local health officials regarding communicating the risks that climate change poses to public health. CDC has developed a toolkit related to communicating the connections among climate change, extreme heat, and health. However, the content of this document is focused on how awardees and partners of the National Environmental Public Health Tracking Network can use Tracking Network data to communicate about this issue, and is therefore not as applicable to a wider audience. CDC officials were not aware of any other agency guidance documents on communicating about the risks climate change poses to public health.
Officials from CDC’s Climate and Health Program acknowledged that a commitment to communicating about the risks that climate change poses to public health is needed, and they identified some actions to help Climate Ready States and Cities Initiative awardees address this challenge, such as reviewing and sharing research findings on best practices for communicating about climate change and providing technical assistance to state and local health departments regarding communication upon their request. However, these officials also told us that they do not currently have plans to develop communication guidance for state and local health departments on how to communicate about climate change because they do not have the resources or capacity to develop such guidance at this time. CDC officials told us that they have been focusing on assisting awardees with resolving methodological and data issues related to implementing the BRACE framework, such as identifying models for use in developing projections of climate change in their jurisdictions. As we noted earlier, the agency’s Climate and Health Program supports state and local health department efforts to plan for and address the health risks posed by climate change. CDC’s website states that the program’s core functions involve translating climate change science to inform health departments and communities, serving as a credible leader in planning for the public health impacts of climate change, and creating decision support resources to assist officials in preparing for climate change. As the administrator of the Climate Ready States and Cities Initiative, CDC is to provide ongoing guidance, resources, and technical assistance, among other things, related to awardee activities. Because CDC requires that awardees of this initiative take steps to enhance public awareness of the risks that climate change poses to human health, developing communications guidance would support their efforts. While CDC’s current resources are focused on addressing methodological and data issues related to the implementation of the BRACE framework, it is also critical for the agency to establish a plan describing when it will develop future communications guidance, to help ensure that health officials have the tools they need to effectively implement the BRACE framework and address required aspects of the award. According to our discussions with selected state and local health officials, they face challenges identifying potential health risks from climate change and have identified related opportunities for federal action. Specifically, officials noted how gaps in research have made it difficult for them to understand and plan for potential health impacts. For example, some state and local public health officials said that limited research has been conducted on how climate change may affect certain aspects of public health, such as the spread of vector-borne diseases, the costs of climate change impacts on human health, and the effectiveness of specific management options. Other officials we interviewed told us that they had difficulty using some of the climate-related data that federal agencies have made available for decision makers. Officials we interviewed explained that they typically do not have the scientific or technical expertise to fully understand or use some federal data, particularly those related to climatology. 
Public health officials have generally not been trained in using geographic information systems, atmospheric data, and climate projections, according to officials. In addition, conducting analyses with these data can be complex and time-intensive. Consequently, they said that the availability of technical assistance in using such data, as well as in translating the results so that officials can apply them at the local level, has been very helpful. State and local public health officials we interviewed said that federal agencies could help address these challenges by continuing to advance the research they support, and by enhancing decision support resources, including resources to assist decision makers in using federal datasets. State officials noted that an expansion of interagency research opportunities on the public health impacts of climate change, which is a crosscutting research area, could be helpful. State and local public health officials we interviewed said that federal agencies could also enhance the decision support resources they provide, including through technical assistance that would better position public health departments to effectively use available datasets. For example, a Climate Ready States and Cities Initiative awardee said that it would be helpful if CDC could provide awardees with programming language to assist them in using federal datasets to assess health vulnerabilities associated with climate change. Officials from federal agencies told us about actions that they have taken or that they have planned that could help address some of these challenges. Specifically, federal officials have acknowledged the need for additional research on the public health impacts of climate change and have taken some steps to fill those needs. In 2010, an ad hoc interagency working group on climate change and health developed a white paper summarizing research needs on the human health effects of climate change. The intent of this paper was to provide a baseline picture of research needs in this area that agencies could then build upon as new information became available, according to the paper. Subsequently, NIH officials analyzed the agency’s portfolio of research on climate change and health, and summarized their results in an article published in 2013. The article includes a discussion of challenges related to conducting this research and opportunities to advance it, including by taking a multidisciplinary approach through enhanced interagency research opportunities. More recently, in an April 2015 Federal Register notice, EPA, on behalf of USGCRP, announced that a draft of the CCHHG assessment of climate change impacts on health was available for comment. According to the CCHHG co-chairs, the report contains results that should advance the research in some needed areas, and is likely to include some information on research needs. The CCHHG co-chairs told us that they hope to use information on research needs included in the final assessment to help inform the development of a prioritized and focused research agenda for use in addressing research needs. CCHHG officials also told us that they would like to continue to help foster a collaborative interagency approach to researching climate change impacts on health.
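The awardee request noted above, for programming support in using federal datasets to assess health vulnerabilities, can be illustrated with a schematic example. The Python sketch below is not a CDC or ATSDR product; it simply mimics the general percentile-rank approach used by composite measures such as the Social Vulnerability Index described earlier, using hypothetical census-tract values (for example, the share of residents over 65, below the poverty line, or without air-conditioning) to produce a rough composite score that flags tracts that may warrant closer attention in planning.

```python
from statistics import mean

# Hypothetical census-tract values; in practice they might be drawn from the
# American Community Survey or a local survey. Variable names are illustrative.
tracts = {
    "Tract 101": {"pct_over_65": 22.0, "pct_poverty": 18.0, "pct_no_ac": 35.0},
    "Tract 102": {"pct_over_65": 9.0,  "pct_poverty": 31.0, "pct_no_ac": 55.0},
    "Tract 103": {"pct_over_65": 15.0, "pct_poverty": 12.0, "pct_no_ac": 20.0},
    "Tract 104": {"pct_over_65": 28.0, "pct_poverty": 25.0, "pct_no_ac": 60.0},
}
VARIABLES = ["pct_over_65", "pct_poverty", "pct_no_ac"]


def percentile_rank(value, values):
    """Share of observations at or below `value` (a simple percentile rank)."""
    return sum(v <= value for v in values) / len(values)


# Score each tract as the average of its percentile ranks across the variables,
# so that a score near 1.0 flags tracts that rank high on every measure.
scores = {}
for name, data in tracts.items():
    ranks = [percentile_rank(data[var], [t[var] for t in tracts.values()])
             for var in VARIABLES]
    scores[name] = mean(ranks)

for name, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: composite vulnerability score {score:.2f}")
```

In practice, an analysis like this would draw its inputs from sources such as the American Community Survey and would be validated against local knowledge before informing any planning decision.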
Federal agencies, such as HHS, EPA, and NOAA, have also recently taken steps to enhance data and decision support resources available to state and local decision makers, for example, by enhancing air quality surveillance and creating a national heat health information system. These enhanced resources may address some needs of state and local decision makers. CCHHG officials said that they have plans to solicit feedback from state and local decision makers about these enhanced decision support resources through town hall meetings and other mechanisms. According to selected state and local public health officials we interviewed, they face a range of other challenges related to planning for and addressing the health impacts of climate change that federal action may not be able to address. For example, they said that insufficient data and inadequate resources impede their ability to address or plan for these risks. While some federal programs collect local data and provide financial resources to selected states and localities, such as CDC’s National Environmental Public Health Tracking Program, the federal government does not collect local data on climate impacts or health outcomes in all locations, and does not make awards to support the climate and health activities of all state and local health departments. State and local public health officials told us that some environmental surveillance data that could help inform research on the health risks of climate change—such as data on air quality, water quality, and pollen—are often not collected by their states and localities given limited surveillance systems. Additionally, state and local officials explained that some health outcome data can be difficult to obtain. State officials explained that their access to health outcome data, such as the medical conditions cited for emergency room visits, is limited by agreements that states have with hospitals regarding the amount and type of information that hospitals will share with state officials. Some state and local officials told us that their health departments did not have dedicated staff or funding to address and plan for climate change impacts or that their staff resources and funding were not sufficient for maintaining the ideal quality or quantity of work in this area. These statements are consistent with findings from NACCHO’s 2012 survey of local health department officials, in which less than 10 percent of respondents said that their health departments had sufficient resources to effectively protect local residents from the health impacts of climate change, and less than 20 percent of respondents said their health departments had sufficient expertise to assess the potential impacts from climate change. Federal agencies have taken steps to enhance understanding about the risks of climate change to public health. They have also supported state and local efforts to address and plan for these risks, in keeping with an executive order that calls on federal agencies to provide them with data, information, and decision support tools on climate preparedness and resilience. Nevertheless, state and local officials face challenges resulting from limited awareness of climate change as a public health issue among their own departments and the public. As we have previously found, public awareness can play an important role in the prioritization of work on climate change.
HHS has acknowledged climate change as one of the top public health challenges of our time and is developing a climate change communication and outreach strategy, which has been delayed by over a year, but is expected to be finalized by July 2016. CDC requires public health departments participating in its Climate Ready States and Cities Initiative to take steps to raise public awareness about the risks that climate change poses to public health, and also engage stakeholders in their planning. As the administrator of the Climate Ready States and Cities Initiative, CDC is to provide ongoing guidance, resources, and technical assistance to support state and local health department work on this issue. Although the agency has provided guidance on some topics, such as extreme heat events, it has not provided specific guidance on how public health departments should communicate about the risks that climate change poses to public health. Officials from CDC’s Climate and Health Program acknowledged that a commitment to communicating about the risks that climate change poses to public health is needed. However, the agency does not currently have plans to develop guidance on this topic, as it has been focused on other priorities. Issuing such guidance would also be in line with the core functions of CDC’s Climate and Health Program, which include translating climate change science to inform communities. By developing such guidance, CDC may help public health departments better meet the requirements of the Climate Ready States and Cities Initiative and better position all health departments to make progress in planning for the health impacts of climate change. To enhance HHS’s ability to protect public health from the impacts of climate change, we recommend that the Secretary of HHS direct CDC to develop a plan describing when it will be able to issue climate change communication guidance to state and local health departments, to better position relevant officials to effectively communicate about the risks that climate change poses to public health and address requirements of the Climate Ready States and Cities Initiative. We provided a draft of this product to HHS, EPA, NOAA, USGCRP, the Council on Environmental Quality, the Department of the Interior, and the National Science Foundation for comment. In its written comments, reproduced in appendix IX, HHS stated that CDC generally concurred with our recommendation. CDC noted its plans to develop and issue climate change communication guidance to state and local health departments after HHS finalizes its climate change communication and outreach strategy, which is expected by July 2016. CDC stated that it would use HHS’s strategy to inform its development of guidance and to build off of that strategy. CDC said that the agency currently provides support and technical assistance to state and local health departments regarding communication upon their request, and would continue to do so while HHS finalizes its strategy. CDC remarked that the agency is also working with partner and professional organizations to disseminate messages on the health impacts of climate change. We also received technical comments from HHS, EPA, NOAA, USGCRP, the Council on Environmental Quality, the Department of the Interior, and the National Science Foundation, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Health and Human Services, Homeland Security, the Interior, and the Smithsonian Institution; the Administrators of the Environmental Protection Agency and the National Aeronautics and Space Administration; the Director of the National Science Foundation; the Executive Director of the United States Global Change Research Program; and the Managing Director of the Council on Environmental Quality; as well as other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact J. Alfredo Gómez at (202) 512-3841 or gomezj@gao.gov, or Marcia Crosse at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X.

We interviewed officials from federal, state, and local government agencies, as well as representatives of stakeholder groups involved in public health and climate change.

1. Baltimore (MD)
2. Benton County (OR)
3. Columbus (OH)
4. Dallas County (TX)
5. Houston (TX)
6. Multnomah County (OR)
7. New York City (NY)
8. Portsmouth (VA)
9. Salt Lake County (UT)
10. San Diego County (CA)
11. San Francisco (CA)
12. San Luis Obispo County (CA)
13. Shelby County (TN)
14. Summit County (OH)
15. Toledo-Lucas County (OH)
16. Washington County (MD)
17. Wicomico County (MD)

In April 2015, the administration announced a series of actions and commitments that were intended to enhance the nation’s ability to understand, communicate, and reduce the health impacts of climate change. Specifically, the administration announced the following 12 actions that it plans to take to address this issue:

1. hosting a climate change and health summit at the White House;
2. issuing a report highlighting actions taken by state and local leaders to reduce the impact of climate change on public health;
3. releasing a health care facilities toolkit consisting of fact sheets, checklists, case studies, and other resources to assist local decision makers in promoting resilient health care infrastructure;
4. circulating a draft climate and health assessment report that is intended to synthesize the best available information on the public health impacts of climate change;
5. holding a community, culture, and mental health workshop to identify factors that enhance resilience to climate change;
6. integrating climate considerations into the Department of the Interior’s health and safety policies;
7. hosting a climate and health data challenge, whereby coders, analysts, and researchers will use available government datasets to generate insights into unresolved questions about the health impacts of climate change;
8. offering climate and health data to participants during the national day of civic hacking (June 6, 2015) to encourage participants to develop new climate and health solutions;
9. improving air quality data with the Environmental Protection Agency’s release of six new Village Green stations that measure air quality and meteorological data;
10. challenging researchers to develop new models to forecast epidemics of dengue and other infectious diseases through the consolidation of federal and nonfederal data sets;
11. awarding prizes to those who have developed predictive modeling capabilities that can assist government and health organizations to predict the spread of chikungunya, a vector-borne disease; and
12. measuring nutrient pollution through a competition whereby federal datasets are leveraged to develop educational and decision support resources.

Additionally, the administration announced that it has secured commitments from 14 businesses and other organizations to collect and share data regarding the health impacts of climate change. For example, Microsoft has committed to improve vector-borne disease surveillance capabilities by developing and deploying drones that are capable of collecting large numbers of mosquitoes and automatically analyzing them for various pathogens. Finally, the administration announced that a coalition of deans from 30 medical, nursing, and public health schools had committed to train their students to address the health impacts of climate change.

In June 2015, the administration announced a set of actions and commitments that were intended to protect our communities from the health impacts of climate change. Specifically, the administration announced the following actions to address this issue:

1. creation of a map tool by the Department of Health and Human Services to improve the ability of health officials and emergency managers to rapidly identify residential areas where people who depend on electricity to power life-critical durable medical equipment live;
2. development of a national integrated heat health information system by the Centers for Disease Control and Prevention and the National Oceanic and Atmospheric Administration, which is intended to provide a suite of decision support resources that better serve public health needs;
3. launch of a climate and health innovation challenge series by the National Institutes of Health and others to promote innovative approaches and highlight technologies available for understanding the health implications of climate change and improving resilience to adverse effects;
4. creation of a climate change impacts subcommittee within the Federal Interagency Working Group on Environmental Justice, and the workgroup’s launch of a climate justice initiative that is focused on incorporating equity into climate adaptation planning;
5. announcement of a local climate and energy webcast series on climate change, heat islands, and public health, to be hosted by the Environmental Protection Agency;
6. plans to highlight examples of policy actions related to children’s health during national Children’s Health Month;
7. a commitment by CDP, a private organization, to release publicly disclosed data from 61 U.S. cities that summarize the climate risks that the cities are facing and the actions they are taking to improve resilience; and
8. an expansion in the number of medical, public health, and nursing schools that have committed to educate and train their students about the risks of climate change to public health.

Federal agencies conduct or support research on a range of topics that can enhance understanding of the risks that climate change poses to public health. In conducting our work, we interviewed officials from some federal agencies involved in the United States Global Change Research Program’s Interagency Crosscutting Group on Climate Change and Human Health. Specifically, we spoke with officials from those agencies whose focus was on understanding climate change risks to populations within the United States.
Table 4 provides examples of research some of the federal agencies involved in this group have conducted or supported on or related to the risks that climate change poses to public health.

Federal agencies have conducted a range of activities related to understanding, communicating, and managing for the public health impacts of climate change. In conducting our work, we interviewed officials from 26 federal agencies, including some of those involved in the United States Global Change Research Program’s Interagency Crosscutting Group on Climate Change and Human Health. Specifically, we interviewed officials from agencies whose focus was on understanding or managing for climate change risks to populations within the United States. We also interviewed officials from the United States Global Change Research Program and the Council on Environmental Quality. Table 5 provides information about activities conducted by these selected agencies.

State and local health officials from the 16 state and two local health departments receiving awards through the Centers for Disease Control and Prevention’s (CDC) Climate Ready States and Cities Initiative told us that they have conducted a variety of activities to address and plan for the risks that climate change poses to public health. These activities support awardees’ efforts to implement CDC’s Building Resilience Against Climate Effects (BRACE) framework—a five-step risk management approach intended to help public health departments identify and prepare for the public health impacts of climate change by, among other things, incorporating atmospheric data and climate projections into public health planning. Examples of these activities include the following:

Developing community vulnerability and resilience indicators. Awardees have developed indicators to measure community vulnerability and resilience. Florida, for example, developed indicators to identify relationships between measures of social and medical vulnerability, such as age and access to health care facilities, and risks from climate-related hazards in the state, including hurricane winds and wildland fire. Officials from the city of San Francisco also created a series of community resilience indicators to provide quantitative measurements of vulnerability to climate change stressors in the city, by neighborhood. Indicators were developed in a variety of categories, including in areas related to the environment, health, housing, and the economy, and then mapped by census tract. See figure 4 for an example of a map developed to display areas of vulnerability to extreme heat based on the percentage of buildings with air-conditioning.

Enhancing surveillance. Some awardees described efforts to enhance the amount and type of surveillance data they collect. For example, Vermont developed Web-based tools for the public to report surveillance information related to the spread of ticks and algal blooms. Specifically, through its interactive tick tracker, the public can share information about where and what kinds of ticks have been observed, in order to better prevent the occurrence of tick bites in others.

Incorporating climate change into emergency preparedness planning. Awardees told us that they have incorporated climate change considerations into emergency preparedness planning, such as by developing plans for hazards related to climate change, which can include heat waves or other extreme weather events.
For example, Arizona created outreach materials on extreme heat and flooding emergencies to support local health departments. In addition, the state provided technical assistance to a county to create an extreme weather response plan, which included responding to extreme heat events. Illinois also incorporated climate change considerations into emergency preparedness planning by requiring that local health departments demonstrate that they are planning for an increase in the frequency and severity of extreme weather events in order to obtain public health emergency preparedness funding from the state.

Developing communication materials. Awardees have developed communication materials to share information on the risks that climate change poses to public health in their jurisdictions, such as through websites for their Climate and Health programs. Officials from one county receiving funds through California’s Climate and Health Program told us that the county developed a climate change communications campaign for the purpose of educating health department staff and the community. As part of this effort, county health officials developed public service announcements to air on county-based radio stations—in English and in Spanish—that explain climate change and health-related topics, such as food, transportation, and energy.

Engaging communities. Awardees described efforts to engage communities, particularly those vulnerable to the risks that climate change poses to public health. For example, officials from New York City described conducting focus groups with seniors and their caregivers to obtain their perspectives on awareness of heat warnings, prevention behaviors, and air-conditioning prevalence and use during heat waves. Officials from San Francisco also reported holding town hall-style meetings with community groups from selected neighborhoods to discuss issues related to climate and health, including heat, sea level rise, and vulnerable populations in these specific areas.

Partnering with academic institutions. Awardees reported forming partnerships with academic institutions to, among other things, develop vulnerability assessments and translate climate science information. In some cases, these partnerships were the result of the health department providing a small amount of funding to the university to conduct these assessments, according to officials. For example, officials from Maryland told us that they are collaborating with researchers at the University of Maryland to develop vulnerability assessments and develop county-level projections of the burden of disease from climate change for particular health impacts, such as asthma and waterborne illness. Maryland officials noted the importance of partnering with the university to conduct this work, given the limited resources of the health department and difficulties associated with hiring staff to conduct this assessment.

Collaborating with other state or local entities. Awardees have developed relationships with other state or local departments in conducting activities related to addressing and preparing for the risks that climate change poses to public health. Awardees mentioned that these partnerships help others gain a better perspective of the health impacts of climate change, which will help them to consider health impacts as they make decisions in their respective fields. Awardees also noted that they have shared information with partners or worked collaboratively on projects.
For example, officials from the Michigan Department of Community Health reported partnering with the state’s Department of Environmental Quality to receive data on topics related to air or water quality. Some awardees also noted the importance of developing partnerships with their state climatologist to provide their health departments with technical assistance in interpreting climate data.

Officials from some of the 13 state and local health departments that we interviewed that have not received an award through the Centers for Disease Control and Prevention’s (CDC) Climate Ready States and Cities Initiative have conducted some activities related to addressing and planning for the risks that climate change poses to public health, to the extent that they have resources available. Some of these state and local health department officials began working on this issue as a result of receiving leadership direction to do so, whereas others did so in the absence of leadership direction, because they believe it is an important issue or because they have identified the need to acknowledge the issue based on knowledge of climate change risks in their jurisdictions. While some departments have conducted work in this area, others did not have the resources to begin or sustain a robust level of activity. State and local health officials told us that they have few staff to work on such activities, and that those staff also have other assignments and responsibilities. Efforts related to preparing for climate change risks to human health are often conducted in an ad hoc manner compared with the efforts of departments receiving awards through CDC’s Climate Ready States and Cities Initiative. Some officials also told us that they have partnered with or received support from academic institutions and nonprofit organizations, which has been beneficial in beginning their work in this area since they have few resources to devote to this issue within their health departments. State and local health departments not receiving the award that have begun planning for the risks that climate change poses to public health have undertaken activities such as conducting research on health impacts associated with climate change, such as heat-related illness or asthma, participating in workgroups, holding forums to raise awareness, and producing reports. Among the state and local health departments that we interviewed, Alaska and Washington were two that had not received the CDC initiative award but had undertaken a number of activities directly related to preparing for the public health impacts of climate change.

Alaska. Officials from the Alaska Department of Health and the Alaska Native Tribal Health Consortium—a nonprofit tribal health organization—told us that they have engaged in a number of activities to address and plan for the risks that climate change poses to public health, given that Alaska residents are already seeing climate change impacts to their health and livelihoods. The state’s Department of Health is in the early stages of developing a health impact assessment that seeks to identify the potential costs of climate change to health, ways in which to minimize adverse health effects, and ways in which to maximize potential health benefits. The results of this assessment will serve as a baseline for future climate change planning and preparedness activities in the state.
Officials from the Department of Health were also involved in studying an outbreak of Vibrio parahaemolyticus in July 2004, due, in part, to rising ocean temperatures. The Alaska Native Tribal Health Consortium has also conducted activities related to climate change and health. Through an award provided by HHS’s Indian Health Service, the consortium has conducted assessments to understand the broad range of community impacts from climate change, including changes in disease, mental health, food and water security, and infrastructure. The consortium is integrating the results of these assessments into construction design, operations, and maintenance considerations for specific health facility improvements, such as new filtration systems for water treatment plants. The consortium also developed a Local Environmental Observers network, consisting of tribal environmental, natural resources, and health professionals, to enhance monitoring of unusual events that are climate change-related or climate-sensitive. Observations are mapped on a Google maps platform and are communicated through an electronic newsletter to provide information to decision makers on current risks.

Washington. The Washington State Department of Health has recently started to engage in activities related to climate change and health at the direction of its Secretary of Health, who identified climate change as a priority issue. The department is working to identify how to measure and track the impacts of climate change, particularly in the areas of food, water, and air quality, through its Washington Tracking Network, which is funded by CDC’s National Environmental Public Health Tracking Program. The department also conducted a survey to characterize local health jurisdictions’ perceptions, activities, and needs related to climate change, and compared the results to similar surveys of local health departments on this topic.

In November 2013, the President issued Executive Order 13653, Preparing the United States for the Impacts of Climate Change, which, among other things, established the State, Local, and Tribal Leaders Task Force on Climate Preparedness and Resilience. The mission of the task force was to provide recommendations to the President and the interagency council on how the federal government could, among other things, support state, local, and tribal preparedness for and resilience to climate change. The task force issued a report to the President in November 2014, noting that the federal government has an essential and unique role to play in preparing for and responding to climate change impacts. Its report includes 35 recommendations to the President across seven themes, and it also lists suggested actions that federal agencies could take to implement the recommendations. The themes addressed in the report are (1) building resilient communities, (2) improving resilience in the nation’s infrastructure, (3) ensuring resilience of natural resources, (4) preserving human health and supporting resilient populations, (5) supporting climate-smart hazard mitigation and disaster preparedness and recovery, (6) understanding and acting on the economics of resilience, and (7) building capacity. Table 6 summarizes selected health-related recommendations and suggested actions listed across various themes of the report. The task force also developed five overarching principles for all federal agencies to consider as a means to advance climate preparedness and resiliency. These include the following:
Require consideration of climate-related risks and vulnerabilities as part of all federal policies, practices, investments, and regulatory or other programs. 2. Maximize opportunities to take actions that have dual benefits of increasing community resilience and reducing greenhouse gas emissions. 3. Strengthen coordination and partnerships among federal agencies, and across federal, state, local, and tribal jurisdictions and economic sectors. 4. Provide actionable data and information on climate change impacts and related tools and assistance to support decision making. 5. Consult and cooperate with tribes and indigenous communities on all aspects of federal climate preparedness and resilience efforts, and encourage states and local communities to do the same. J. Alfredo Gómez, gomezj@gao.gov or (202) 512-3841. Marcia Crosse, crossem@gao.gov or (202) 512-7114. In addition to the individuals named above, Diane Raynes (Assistant Director), Mark Braza, Emily Hanawalt, Armetha Liles, Krista Mantsch, Cynthia Norris, Patricia Roy, Emily Ryan, Jeanette Soares, Andrew Stavisky, and Jennifer Whitworth made key contributions to this report. | The World Health Organization projects climate change will adversely affect health significantly over the next several decades. Some health effects of climate change are already being felt in the United States, according to assessments by the National Research Council, USGCRP, and others. Since the federal government is the nation’s largest purchaser of health care services, federal health care expenditures could increase in future years due to climate-related impacts. GAO was asked to review federal efforts to increase public health system preparedness for climate change. This report addresses (1) federal activities to enhance understanding about the risks climate change poses to public health, (2) federal resources used by selected states and localities to address these risks, and (3) challenges states and localities face and actions federal agencies could take to mitigate them. GAO examined federal, state, and local documents, and interviewed officials from federal agencies such as CDC, NIH, USGCRP, as well as state and local health departments, including all 18 recipients of CDC’s Climate Ready States and Cities Initiative award. Federal agencies are enhancing understanding of climate-related risks to public health by (1) supporting and conducting research, (2) providing data and informational resources, and (3) communicating about risks. The Department of Health and Human Services’ (HHS) National Institutes of Health (NIH) supports a portfolio of research directly related to these risks. NIH reports awarding about $6 million for such research in fiscal year 2014, including for one study examining health risks posed by heat and air pollution. Federal agencies have also provided data on climate and health issues, such as the number of extreme heat days that state and local officials can use to assess health risks. They have also reported about these risks, such as through the third National Climate Assessment issued in May 2014 by the U.S. Global Change Research Program (USGCRP). Selected state and local health departments have used resources from HHS’s Centers for Disease Control and Prevention (CDC) and other federal agencies to address and plan for the risks of climate change to public health. 
CDC’s Climate Ready States and Cities Initiative awards an average of about $200,000 per year each to 16 state and two local health departments to implement a risk management framework designed to help incorporate climate projections into public health planning. CDC also requires awardees to increase public awareness of the risks climate change poses to public health. Other federal resources used by health departments to prepare for these risks include funding provided through CDC’s National Environmental Public Health Tracking Program. When asked to identify challenges they face in addressing and planning for the risks of climate change to public health, state and local health officials identified challenges that GAO grouped into the three most frequently mentioned themes. First, the officials said they face challenges communicating about the public health risks of climate change, due to limited public awareness and the complexity of the issue. These officials reported that enhanced federal leadership could help address this challenge. Although HHS plans to develop a climate change communication and outreach strategy, its development has been delayed by over a year. Also, CDC currently does not have plans to issue climate change communications guidance, which state and local officials said would be helpful. CDC’s limited resources are currently focused on resolving methodological and data issues related to its Climate Ready States and Cities Initiative. Given that health departments that have received awards under CDC’s initiative are required to take steps to enhance public awareness, such guidance may help awardees better meet this requirement. Issuing such guidance would also be in line with CDC’s core functions, which include translating climate change science to inform communities. Second, officials said they face challenges identifying health risks of climate change due to gaps in research and difficulties using climate data. Federal officials told GAO about actions they have taken or plan to take that could help address these challenges, such as issuing an assessment of climate change impacts on health, and creating a national heat health information system. Finally, the officials told GAO about other challenges they face that federal action may not be able to address, such as having insufficient local data on health outcomes, because states may not collect or have access to such data, and having insufficient staff resources for these activities. GAO recommends that HHS direct CDC to develop a plan describing when it will be able to issue climate change communications guidance to state and local health departments. CDC generally agreed with the recommendation, stating that it will issue guidance once HHS’s climate change communication and outreach strategy is final. |
In the mid-1990s, the Army Military History Institute began developing proposed legislation for charging and retaining fees to defray costs of providing historical information to the public. The Institute, whose mission is to preserve the Army’s history and ensure access to historical research material, was experiencing a significant increase in requests from the public while resources available to respond were decreasing. For example, the Institute reported that the annual number of requests increased from about 13,000 in 1987 to 20,600 in 1995 and to 35,800 in 2000. During the same period, the number of staff members decreased from over 40 to 33. As a result, backlogs and waiting times increased. The Institute developed and submitted legislative proposals that would authorize it to charge and retain fees. In response, Congress enacted Section 1085 of the National Defense Authorization Act for Fiscal Year 2001, which authorizes the charging and retaining of fees by one designated primary archive in each of the four military services. The four designated archives are the Air Force Historical Research Agency at Maxwell AFB, Alabama; Army Military History Institute at Carlisle Barracks, Pennsylvania; Marine Corps Historical Center at Washington Navy Yard, D.C.; and Naval Historical Center at Washington Navy Yard, D.C. Section 1085 does not specify a fee structure or the fees that are to be charged, but states that fees are not to exceed the costs of providing the information. The Section also states that fees are not to be charged for information that is requested (1) to carry out a duty as a member of the armed forces or employee of the United States or (2) under FOIA, which has a separate fee structure. Prior to the authority granted under Section 1085, DOD, including the military archives, was authorized to charge fees for responding to requests for information under the User Charge Statute and FOIA. Because Section 1085 does not apply to DOD offices, organizations, museums, and archives other than the four designated archives, the User Charge Statute and FOIA will continue to be the basic authority for these activities to charge fees for providing information. The User Charge Statute is implemented by Office of Management and Budget (OMB) Circular No. A-25, and DOD Financial Management Regulation (FMR), Volume 11A, Chapter 4. DOD’s policy, as stated in Chapter 4, is that when a service is provided that conveys special benefits to recipients, above and beyond those accruing to the public at large, a reasonable charge shall be made to each identifiable recipient. The policy provides that a charge shall be imposed to recover the full cost to the federal government of rendering a service or the fair market value of such service, whichever is higher. Appendix 1 of Chapter 4 lists benefits for which no charge is to be made, such as services requested by members of the U.S. Armed Forces in their capacity as service members. Appendix 2, “Schedule of Fees and Rates for Copying, Certifying, and Searching Records Rendered to the Public,” is mandated for use throughout DOD. The Under Secretary of Defense (Comptroller) is responsible for additions or revisions to Chapter 4. FOIA, which specifies processes and procedures for making information available to the public, is implemented in DOD by DOD Regulation 5400.7-R. In accordance with FOIA, DOD Regulation 5400.7-R contains a fee schedule for responding to FOIA requests.
For general information, the fees for search, review, and duplication of documents are to be based on direct costs. For technical information, the fees are to be based on all reasonable costs, which are defined as the full costs to the federal government of rendering the service, or fair market value of the service, whichever is higher. The regulation also provides that the first 2 hours of search time and the first 100 pages of duplication shall be provided without charge unless requesters are seeking documents for commercial use; that fees will be waived or reduced when the information is likely to contribute significantly to public understanding of DOD; and that fees shall be automatically waived when assessable fees total $15 or less. The Directorate for Freedom of Information and Security Review is responsible for the FOIA regulation. FOIA requests are specifically excluded under Section 1085 and the DOD regulations implementing the User Charge Statute. Accordingly, the provisions of Regulation 5400.7-R would determine fees for any FOIA request. However, if a request is not identified as a FOIA request, the fees should be determined under the User Charge Statute as specified in FMR, Volume 11A, Chapter 4. None of the four designated archives has changed its fee structure pursuant to Section 1085. At the time of our work, two of the four archives were taking actions to implement the Section. Officials at the Army Military History Institute were developing fee schedules and planning to implement the Section by October 2001. Officials at the Air Force Historical Research Agency had tasked key stakeholders with determining a fee structure but had not established a target date for implementing the Section. Section 1085 permits each of the four archives to develop its own fee schedule provided that the fees charged do not exceed the costs of providing the information. Officials at the Naval Historical Center and the Marine Corps Historical Center have not decided whether to implement a fee system based on Section 1085 provisions. They have taken no specific actions toward implementation and have received no implementation guidance from their headquarters. One of the factors affecting Section 1085 implementation decisions by the four archives is that they were already authorized under both the User Charge Statute and FOIA to charge for information provided to the public. Based on the statute and regulations, the archives should charge for information provided to public requesters under the User Charge Statute unless the request is identified as a FOIA request. If identified as a FOIA request, any charges should be based on FOIA implementing regulations. However, neither of these statutes authorizes the military archives to retain fees collected in providing general information to the public to defray costs. Fees collected under both the User Charge Statute and FOIA for general information must be deposited in the Treasury as Miscellaneous Receipts. Accordingly, the authority to retain collected fees to defray incurred costs is one significant distinction between Section 1085 and the other two statutes. The Army Military History Institute identified the ability to use collected fees to improve service to the public as the primary reason it developed the legislative proposals that led to Section 1085.
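For readers tracing which authority governs a given public request, the distinctions described above can be summarized in a short sketch. It is only an illustration of the logic laid out in this report, not an implementation of any DOD regulation, and the function and field names are invented for the example.

    # Illustrative summary of which fee authority applies to a public request
    # and whether collected fees may be retained; not an official DOD rule.
    def applicable_fee_authority(is_foia_request, archive_implements_section_1085):
        if is_foia_request:
            # FOIA requests are excluded from Section 1085 and the User Charge rules.
            return {"schedule": "DOD Regulation 5400.7-R (FOIA)", "fees_retained": False}
        if archive_implements_section_1085:
            # Section 1085 lets a designated archive set fees not exceeding cost and retain them.
            return {"schedule": "archive's own Section 1085 schedule", "fees_retained": True}
        # Otherwise the User Charge Statute applies; collections go to Miscellaneous Receipts.
        return {"schedule": "FMR Volume 11A, Chapter 4, Appendix 2", "fees_retained": False}

    print(applicable_fee_authority(is_foia_request=False, archive_implements_section_1085=False))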
Increasing numbers of public requests at a time when budgetary resources were decreasing resulted in the archives developing arrangements to minimize the cost impact of public requests on the archives’ budgets. For example, the Army Military History Institute arranged for a contract, through a nonappropriated fund account, to reproduce requested photographs with fees collected for the photographs reimbursing the fund. The Naval Historical Center refers those requesting its photographs to the Naval Historical Foundation, a nonprofit foundation, which reproduces the photographs and charges the customer. Without such arrangements, the costs of reproducing photographs and responding to requesters would come from the archive’s budget, and the fees collected from the customer would be deposited in Treasury’s Miscellaneous Receipts and would not be available to offset the costs. These arrangements, by reducing budgetary pressures, have lessened the benefits that an archive could achieve from implementing Section 1085. All of the four primary archives charge fees for providing historical information to requesters. However, none of the fees were in accordance with the mandated DOD user fee schedule specified in Appendix 2 of DOD FMR, Volume 11A, Chapter 4. In fact, archive officials told us that they were unaware of the mandated fee schedule. The “Schedule of Fees and Rates for Copying, Certifying, and Searching Records Rendered to the Public” in Appendix 2 establishes a minimum fee of $3.50 for any chargeable case and additional fees for searching and providing copies of various records, photographs, forms, etc. For office copy reproductions, a minimum fee of $3.50 per request (six pages or less) is specified with a charge of $0.10 for each additional page. For photography, the Appendix’s schedule of prices per print is based on the size, type, and quantity ordered. For example, the price per print for an 8- by 10-inch print ranges from $4.50 for one to nine prints to $1.75 for each print in quantities of over 50. The specified charge for clerical search and processing is $13.25 per hour with a minimum charge of $8.30. Existing fees vary significantly among the archives. For example, the charges for a paper copy made by archive staff ranged from no charge by the Air Force Historical Research Agency, to no charge by the Marine Corps Historical Center for the first 100 pages and a charge of $0.15 for each page thereafter, to a charge by the Army Military History Institute of $0.25 per page, to a charge by the Naval Historical Center of $0.30 per page. In general, the archives do not impose a minimum charge for providing information. This could result in requesters receiving copies of documents free or for less than a dollar as opposed to the $3.50 minimum specified in the DOD’s User Charge fee schedule. The Marine Corps fees, which are based on FOIA, resulted in any requester receiving up to 2 hours of search time and 100 pages without a charge. With the exception of the Marine Corps, the archives did not have a clearly identified basis for their fee schedules. The archives also appear to have different practices regarding which requesters are charged and under what circumstances fees will be waived. Archives officials told us that, in many cases, fees are not charged when the request is from military personnel, veterans, or government employees. Under the User Charge regulation, only members of the U.S. Armed Forces, in their capacity as Service members, are exempt from charges. 
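To show how the Appendix 2 rates just described combine for a typical order, the sketch below computes the schedule's copy and photograph fees. It is a reading aid only, not part of the regulation; it covers just the rates quoted above, so the per-print prices for quantities between 10 and 50, which are not given here, are rejected rather than guessed.

    # Illustrative calculation of Appendix 2 fees from the rates quoted above;
    # not an official implementation of the DOD schedule.
    def office_copy_fee(pages):
        # $3.50 minimum per request covers up to six pages, plus $0.10 per additional page.
        return 3.50 + 0.10 * max(0, pages - 6)

    def print_fee_8x10(quantity):
        # $4.50 per print for one to nine prints; $1.75 per print for quantities over 50.
        if 1 <= quantity <= 9:
            return 4.50 * quantity
        if quantity > 50:
            return 1.75 * quantity
        raise ValueError("per-print rate for this quantity is not quoted in this report")

    print(round(office_copy_fee(10), 2))   # 3.9, i.e., $3.90
    print(round(print_fee_8x10(5), 2))     # 22.5, i.e., $22.50

These two results, $3.90 for a 10-page copy order and $22.50 for five 8- by 10-inch prints, are the Appendix 2 figures used in the comparisons with the Army Military History Institute's proposed fees discussed below.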
Archive officials also said that fee waivers were used extensively for FOIA requests. DOD’s FOIA regulations provide that the first 100 reproduced images and 2 hours of research are free per request and that fees shall be waived for all requesters when assessable costs for a FOIA request total $15 or less. Further, the regulations provide that documents shall be provided without charge or at reduced charge when a DOD component determines that a waiver or reduction of fees is in the public interest and likely to contribute significantly to public understanding of DOD. DOD last revised its User Charge Statute fee schedule for copying, certifying, and searching records in March 1986. At that time, DOD revised its user fees instruction and added a schedule of fees and rates for services related to copying, certifying, and searching records. The instruction stated that this schedule was to be used for such services throughout DOD. The same fees were included in the DOD FMR, Volume 11A, Chapter 4, Appendix 2, issued in March 1997. Although the Chief Financial Officers Act of 1990 and OMB Circular A-25 require a biennial review of charges for services, DOD Comptroller officials were not aware of any reviews having been done and had no documentation of reviews of the fee schedule for copying, certifying, and searching records. Fees being developed by the Army Military History Institute indicate that the fees mandated in Appendix 2 might be significantly understated. For example, the Institute’s early proposal, based on total direct and indirect costs, shows a total fee of $10.50 for mailing a requester 10 paper copies of an item. The total fee under Appendix 2 for the same order would be $3.90 ($3.50 for the first six pages plus $0.10 for each of the four additional pages). In this case, the fee under Appendix 2 appears to be about one-third of the Institute’s proposal. Comptroller officials noted that Appendix 2 provided for a minimum clerical search and processing charge of $8.30 and that including this minimum clerical charge in the above comparison would result in a higher fee under Appendix 2 than under the Army Military History Institute’s proposal. However, the officials had no information as to whether the minimum clerical charge had been or would be included in a fee involving a request for paper copies of an item. Further, if a search charge is appropriate, the Institute’s proposal includes a $25 hourly research charge as opposed to the $13.25 hourly charge for clerical search and processing under Appendix 2. The Institute’s proposal for five copies of an 8- by 10-inch photograph shows a total fee of $105 (a pull fee of $5 per item and $20 for each copy). The total fee under Appendix 2 would be $22.50 ($4.50 per copy), or less than one-fourth of the Institute’s proposal. As with User Charges, DOD fee schedules for charges under FOIA are not current. The FOIA fee schedule for general information, which is to be based on direct cost, has not been updated since 1986. The FOIA fee schedule for technical information, which is to be based on full cost (both direct and indirect costs), was last issued in 1998, but is the same as the schedule first issued in 1986. The collections reported by the four primary military archives are not indicative of potential future collections under updated fee schedules. Military archive officials reported collecting about $81,000 during fiscal year 2000, with the Air Force reporting the most collections (about $46,000) and the Marine Corps reporting the least (about $2,000).
However, these amounts are probably much less than the amounts that should be collected if updated fee schedules are established and effectively implemented, for the following reasons. First, fees charged by the archives are generally less than those in DOD fee schedules, even though the fees in the DOD schedules are outdated and could be understated by a factor of three or four. Second, archives officials said that fees are often waived for military personnel, veterans, government employees, and others, although such waivers are not addressed by DOD’s regulations implementing the User Charge Statute. Third, archive officials state that search fees, which can be a significant element of cost that should be recovered, are not usually charged. Fourth, arrangements that the archives have used to lessen budgetary impacts, such as the Naval Historical Foundation collecting fees for Naval Historical Center photographs, have reduced reported collections. Further, there are many additional organizations that would have increased collections resulting from updated fee schedules under the User Charge Statute and FOIA. DOD Comptroller officials had no information as to the amount of funds collected throughout DOD using the fee structure mandated in the DOD FMR, Volume 11A, Chapter 4, Appendix 2. They agreed that numerous offices and organizations throughout DOD—some of which have significant numbers of requesters—should use the fee schedule. With regard to FOIA, DOD reported that about $670,000 was recovered through assessed fees in fiscal year 2000, less than 2 percent of the reported $36.5 million in costs associated with providing information under FOIA. If FOIA fees are understated by a significant amount, as appears possible, increases in collections from updated FOIA fee schedules could be significant. Because of DOD’s inconsistent use of its authority to charge fees and its use of outdated fee schedules, the archives and other providers of public information throughout DOD have forgone a million dollars or more annually in user fees and have treated public requesters inconsistently. DOD, in conjunction with considering implementation of Section 1085, needs to ensure that fees charged to public requesters for information throughout DOD are current and consistent. This is not the situation now because (1) DOD has not revised its fee schedules under the User Charge Statute and FOIA since 1986, (2) the primary military archives are not using the mandated fee schedules, and (3) fees being charged to public requesters vary significantly across these archives. Accordingly, a first step that would precede implementation of Section 1085 is updating the User Charge Statute and FOIA fee schedules. This would assist archives in determining whether to implement Section 1085 and whether an archive that implements Section 1085 needs a separate fee schedule. To provide consistency throughout DOD, an archive implementing Section 1085 could use DOD’s user fee schedule in lieu of establishing a new fee schedule unless specific justification exists for the new schedule. Further, after fee schedules are updated for the User Charge Statute and FOIA, they need to be implemented consistently throughout DOD by all offices and organizations responding to public requesters. Such implementation is necessary for the fair and equitable treatment of the public.
We recommend that the Under Secretary of Defense (Comptroller), and the Director, Freedom of Information and Security Review, in conjunction with the secretaries of the military departments and other DOD officials, as appropriate, review and update fee schedules under the User Charge Statute and FOIA; for each archive implementing Section 1085, establish fee schedules that are consistent with the updated fee schedules unless a determination is made that a different fee schedule is justified; and undertake a notification, training, and follow-up effort to ensure that all DOD offices and organizations responding to requesters for information are properly using the updated fee schedules. In written comments on a draft of this report, DOD concurred with the recommendations and commented on actions that have been or are to be taken. With regard to the recommendation to review and update fee schedules under the User Charge Statute, DOD commented that the Office of the Under Secretary of Defense (Comptroller) will work with other organizations to update, as appropriate, and publish a revised fee schedule periodically. With regard to FOIA fee schedules, DOD commented that the Directorate for Freedom of Information and Security Review, which is responsible for those schedules, did not provide comments on the recommendations. With regard to the recommendation that the fee schedule for each archive implementing Section 1085 be consistent with the updated user charge fee schedule, DOD commented that fee schedules authorized by Section 1085 are optional. DOD said that the Army Military History Institute, the only archive developing a schedule of charges under Section 1085, would consider, where appropriate, the changes in a revised user charge schedule. With regard to the recommendation to undertake a notification, training, and follow-up effort, DOD commented that the Office of the Under Secretary of Defense (Comptroller) has an established process for making changes to the DOD FMR. It added that DOD audit organizations will be requested to include user fee schedule compliance as a part of their standard reviews, where applicable. Because archive officials were unaware of the FMR fee schedule, we continue to believe that the more substantive actions that we recommended are warranted. We are sending copies of this report to the Office of the Under Secretary of Defense (Comptroller); the Director, Freedom of Information and Security Review; and interested congressional committees. Copies of this report will also be made available to others upon request. Please contact me at (202) 512-9505 if you have any questions. Major contributors to this report were David Childress, Mary Jo Lewnard, and Edda Emmanuelli-Perez.
| The National Defense Authorization Act for Fiscal Year 2001 authorized the military archives to (1) charge fees to persons requesting information and (2) retain collected fees to help defray costs of providing the information. Although none of the archives has yet implemented a fee, one archive plans to do so by October 2001. The Department of Defense's (DOD) archives and other offices are also authorized under both the User Charge Statute and the Freedom of Information Act (FOIA) to charge for information provided to the public. However, neither of these statutes authorizes an agency to retain those fees. The four designated archives are charging fees to public requesters but are not using the fee schedule mandated by the DOD regulation implementing the User Charge Statute. Similarly, DOD's fee schedules for charges under FOIA are outdated. DOD's inconsistent use of the authority to charge fees and the use of outdated DOD fee schedules result in uncollected fees of a million dollars or more annually and inconsistent handling of public requests for historical information.
Many foreign physicians who enter U.S. graduate medical education programs do so as participants in the Department of State’s Exchange Visitor Program—an educational and cultural exchange program aimed at increasing mutual understanding between the peoples of the United States and other countries. Participants in the Exchange Visitor Program enter the United States with J-1 visas. Nearly 6,200 foreign physicians with J-1 visas took part in U.S. graduate medical education programs during academic year 2004–05. Physicians participating in graduate medical education on J-1 visas are required to return to their home country or country of last legal residence for at least 2 years before they may apply for an immigrant visa, permanent residence, or certain nonimmigrant work visas. They may, however, obtain a waiver of this requirement from the Department of Homeland Security at the request of a state or federal agency if they have agreed to practice in an underserved area for at least 3 years. States were first authorized to request J-1 visa waivers on behalf of foreign physicians in October 1994. Federal agencies were first authorized to request J-1 visa waivers for physicians in graduate medical education in September 1961. In general, waiver physicians must practice in areas that HHS has designated as underserved. HHS has specified that waiver physicians may practice in HPSAs or medically underserved areas or populations (MUA/P). HPSAs are geographic areas, population groups within areas, or facilities that HHS has designated as having a shortage of health professionals; HPSAs for primary care are generally identified on the basis of the ratio of population to primary care physicians and other factors. MUA/Ps are areas or populations that HHS has designated as having shortages of health care services; these are identified using several factors in addition to the ratio of population to primary care physicians. HPSAs and MUA/Ps can overlap; as a result, a facility can be located in both a HPSA and an MUA/P. States and federal agencies have some discretion in shaping their J-1 visa waiver programs to address particular needs or priorities. For example, while states and federal agencies can request waivers for physicians to work in both primary care and nonprimary care specialties and in a variety of practice settings, they may choose to limit the number of waivers they request for physicians to practice nonprimary care or require that waiver physicians work in certain practice settings. States and federal agencies may also choose to conduct monitoring activities to help ensure that physicians are meeting their waiver agreements—for example, that they are working at the facilities for which their waivers were granted. Although states and federal agencies are generally subject to the same statutory provisions regarding requests for J-1 visa waivers for physicians, there are two notable distinctions. First, states are limited in the number of waivers that may be granted in response to their requests each year. Initially, states were authorized to request waivers for up to 20 physicians each fiscal year; in 2002, the limit was increased to 30 waivers per state per year. Federal agencies are not statutorily limited in the number of waivers that may be granted in response to their requests each year. 
Second, while federal agencies’ waiver requests must be for physicians to practice in underserved areas, Congress gave states the flexibility, in December 2004, to use up to 5 of their 30 waiver allotments each year for physicians to work in facilities located outside of HHS-designated underserved areas, provided that the facilities treat patients who reside in underserved areas. We refer to these waivers as “flexible waivers.” Obtaining a J-1 visa waiver at the request of a state or federal agency to practice in an underserved area involves multiple steps (see fig. 1). A physician must submit an application to obtain a case number from the Department of State and must secure a bona fide offer of employment from a health care facility that is located in an underserved area or, in the case of flexible waivers, from a health care facility that treats residents of an underserved area. The physician, the prospective employer, or both apply to a state or federal agency to request a waiver on the physician’s behalf. If, after reviewing the application, the state or federal agency decides to request a waiver, it submits a letter of request to the Department of State affirming that it is in the public interest for the physician to remain in the United States. If the Department of State decides to recommend the waiver, it forwards its recommendation to the Department of Homeland Security’s U.S. Citizenship and Immigration Services (USCIS). USCIS is responsible for making the final determination and notifying the physician when the waiver is granted. According to officials involved in recommending and approving waivers at the Department of State and USCIS, after a review for compliance with statutory requirements and security issues, nearly all waiver requests are recommended and granted. Once the physician is granted the waiver, the employer petitions USCIS for the physician to obtain H-1B status (a nonimmigrant classification used by foreign nationals employed temporarily in a specialty occupation). The physician must work at the facility specified in the waiver application for a minimum of 3 years, unless the physician obtains approval from USCIS to transfer to another facility. USCIS considers transfer requests only in extenuating circumstances, such as closure of the physician’s assigned facility. Once the physician fulfills the employment contract, the physician may apply for permanent residence, continued H-1B status, or other nonimmigrant status, if the physician wishes to remain in the United States. No single federal agency is responsible for managing or tracking the use of J-1 visa waivers for physicians to practice in underserved areas. HHS is the primary federal agency responsible for addressing physician shortages, both in administering NHSC programs that place physicians and other providers in areas experiencing shortages of health professionals and in designating areas as underserved. HHS’s oversight of waiver physicians practicing in underserved areas, however, has generally been limited to the few physicians for whom it has requested J-1 visa waivers. USCIS and the Department of State process J-1 visa waiver requests but do not maintain comprehensive information about waiver physicians’ numbers, practice locations, and practice specialties. States and federal agencies that request waivers maintain such information for the physicians for whom they request waivers, but this information is not centrally collected and maintained by any federal agency.
Although the use of J-1 visa waivers has not been systematically tracked, available data indicate that the pool of physicians who could seek waivers—that is, the number of foreign physicians in graduate medical education with J-1 visas—has declined in recent years. In academic year 1996–97, a little more than 11,600 foreign physicians took part in U.S. graduate medical education programs with J-1 visas; by academic year 2004–05 this number had decreased more than 45 percent to slightly less than 6,200. The reasons for this decrease are not completely understood. States and federal agencies reported requesting more than 1,000 J-1 visa waivers in each of fiscal years 2003 through 2005 (see fig. 2). We estimated that, at the end of fiscal year 2005, there were roughly one and a half times as many waiver physicians practicing in underserved areas (3,128) as U.S. physicians practicing in underserved areas through NHSC programs (2,054). In contrast to our findings a decade ago, states have become the primary source of waiver requests for physicians to practice in underserved areas, accounting for 90 percent or more of requests in each of fiscal years 2003 through 2005. The number of states that reported ever having requested a J-1 visa waiver has grown steadily since they were first authorized to do so, from 20 states in fiscal year 1995 to 53 states (all but Puerto Rico) as of fiscal year 2005. States varied, however, in the number of waivers they requested in fiscal years 2003 through 2005. For example, in fiscal year 2005, about one-quarter of the 54 states requested the maximum of 30 waivers, about one-quarter requested 10 or fewer, and two (Puerto Rico and the U.S. Virgin Islands) requested no waivers (see fig. 3). The number of waivers requested by federal agencies has decreased significantly since 1995, with the exit of the two agencies that requested the most waivers for physicians to practice in underserved areas that year. The Department of Agriculture, which stopped requesting waivers for physicians to practice in underserved areas in 2002, and the Department of Housing and Urban Development, which stopped in 1996, together requested more than 1,100 waivers for physicians to practice in 47 states in 1995, providing a significant source of physicians for some states. Federal agencies accounted for about 94 percent of waiver requests that year, in contrast to fiscal year 2005, when federal agencies made about 6 percent of requests. Of the 1,012 waivers requested by states and federal agencies in fiscal year 2005, ARC, DRA, and HHS accounted for 56 requests for physicians to practice in 14 states. States and federal agencies requested waivers for physicians to practice a variety of specialties, with states requesting waivers for physicians to practice both primary and nonprimary care and federal agencies generally focusing on primary care. Although the waivers states and federal agencies requested were for physicians to work in diverse practice settings, most were for physicians to work in hospitals and private practices. These practice settings were about equally divided between rural and nonrural areas. Additionally, less than half of the states opted to request flexible waivers for physicians to work outside of designated underserved areas. 
Overall, a little less than half (46 percent) of the waivers requested by states and federal agencies in fiscal year 2005 were for physicians to practice exclusively primary care, while a slightly smaller proportion (39 percent) were for physicians to practice exclusively nonprimary care (see fig. 4). A small proportion of waiver requests (5 percent) were for physicians to practice both primary and nonprimary care—for example, for individual physicians to practice both internal medicine and cardiology. An additional 7 percent of waiver requests in fiscal year 2005 were for physicians to practice psychiatry. States and federal agencies differed, however, in the proportion of waivers they requested for physicians to practice primary versus nonprimary care (see fig. 5). Less than 50 percent of the waivers requested by states in fiscal year 2005 were for physicians to practice exclusively primary care, compared with 80 percent of those requested by federal agencies. Nearly all of the states and DRA reported that their fiscal year 2005 policies allowed them to request waivers for physicians to practice nonprimary care. Twenty-seven of these states, however, reported placing some limits on such requests, including limiting the number of requests for physicians to practice nonprimary care or restricting the number of hours a physician could practice a nonprimary care specialty. Even with these limitations, the number of waivers requested for physicians to practice nonprimary care increased among both states and federal agencies over the 3-year period beginning in fiscal year 2003. Overall, requests for physicians to practice exclusively nonprimary care increased from about 300 (28 percent) in fiscal year 2003 to nearly 400 (39 percent) in fiscal year 2005. States and federal agencies reported requesting waivers in fiscal year 2005 for physicians to practice more than 40 nonprimary care specialties (e.g., anesthesiology) and subspecialties (e.g., pediatric cardiology); the most common of these were anesthesiology, cardiology, and pulmonology (the study and treatment of respiratory diseases). Regarding practice settings, more than three-quarters of the waivers requested by states in fiscal year 2005 were for physicians to practice in hospitals (37 percent) and private practices (41 percent) (see fig. 6). In addition, 16 percent were for physicians to practice in federally qualified health centers (facilities that provide primary care services in underserved areas) and rural health clinics (facilities that provide outpatient primary care services in rural areas). Although the largest proportion of waivers that states requested was for physicians to work in private practices, more than 80 percent of the states and all three federal agencies reported that their fiscal year 2005 policies required the facilities where waiver physicians work—regardless of practice setting—to accept some patients who are uninsured or covered by Medicaid. Overall, about half of all waiver requests in fiscal year 2005 were for physicians to practice in areas that respondents considered rural, although the proportions differed between states’ and federal agencies’ requests. States’ waiver requests in fiscal year 2005, which accounted for the vast majority of total requests that year, were about equally divided between those for physicians to work in areas respondents considered rural and those they considered nonrural. 
Federal agencies’ waiver requests were mostly (93 percent) for physicians to work in areas considered rural (see fig. 7). Most of the waivers requested by states and federal agencies in fiscal year 2005 were for physicians to practice in HPSAs. While federal regulations generally permit states and federal agencies to request waivers for physicians to work in HPSAs or MUA/Ps, about a quarter of the states and two federal agencies (ARC and HHS) had policies in place in fiscal year 2005 that limited at least some types of physicians to practicing in HPSAs. Overall, more than three-quarters (77 percent) of waivers requested by states and federal agencies in fiscal year 2005 were for physicians to work in facilities located in HPSAs, and 16 percent were for physicians to work in facilities located in MUA/Ps that were outside of HPSAs. Additionally, less than half of the states (23 states) reported taking advantage of the option to request flexible waivers—those for physicians to work in facilities that, while located outside of HHS-designated underserved areas, treat patients residing in underserved areas. Requests for flexible waivers in fiscal year 2005, the first year such waivers were allowed, accounted for 7 percent of all waiver requests that year. Most states and federal agencies reported that they conducted monitoring activities to help ensure that physicians were meeting their agreements to work in underserved areas. Although monitoring is not explicitly required of states and federal agencies that request waivers, more than 85 percent of states and two of the three federal agencies that requested waivers in any fiscal year from 2003 through 2005 reported that they conducted at least one monitoring activity in fiscal year 2005. These activities included actions to help determine, for example, whether physicians were working in the locations for which their waivers were requested or whether they were treating the intended patients, such as those who were uninsured or covered by Medicaid. The most common monitoring activity—reported by 40 states, ARC, and DRA—was to require periodic reports from physicians or employers (see fig. 8). For example, some states and federal agencies required written reports submitted once or twice a year that included information such as the number of hours waiver physicians worked or the number of patients for whom Medicaid claims were submitted. States and federal agencies that requested waivers also reported that they monitored waiver physicians through regular communications with employers and physicians, such as through phone calls, and through site visits to waiver physicians’ practice locations. In addition, a small number of states reported conducting other monitoring activities. For example, one state official said the state’s J-1 visa waiver program used Medicaid data to confirm that waiver physicians were treating patients covered by Medicaid. Although most states and federal agencies reported conducting at least one monitoring activity, the number of monitoring activities varied. Ten states and DRA reported conducting at least four different activities, while six states and HHS—together accounting for about 13 percent of waiver requests in fiscal year 2005—reported that they did not conduct any monitoring activities in fiscal year 2005. Four of the six states that reported they did not conduct monitoring activities reported requesting more than 25 waivers in each of fiscal years 2003 through 2005. 
States and federal agencies reported identifying relatively few incidents in fiscal years 2003 through 2005 in which physicians were not meeting their waiver agreements. These incidents included cases in which the physician was not working in the practice specialty or at the facility specified in his or her waiver agreement, was not seeing the intended patients, or did not serve the entire 3-year employment contract. The most common issue cited was physicians’ transferring to another location or employer without the approval of the state or federal agency that requested their waivers. In addition, several states reported that they had identified cases in which waiver physicians never reported to work. Officials from these states cited examples in which physicians simply failed to appear at the practice sites and did not contact the state that had made the waiver requests on the physicians’ behalf. According to states and federal agencies that reported identifying any incidents, physicians were not solely responsible in all cases in which they did not meet their waiver agreements. Some state officials provided examples of employers who directed physicians to work in locations other than those for which their waivers were requested, including locations outside of underserved areas. States and federal agencies that requested waivers reported that they use a variety of practices to prevent or respond to cases of physicians’ not meeting their waiver agreements (see fig. 9). For example, 38 states and HHS reported that it is their practice to bar employers who are responsible for problems involving waiver physicians from consideration for future J-1 visa waiver physician placements, either temporarily or permanently. Forty states and two federal agencies reported that it is their practice to inform USCIS if they identify physicians who are not meeting their waiver agreements. Physicians not meeting their waiver agreements would again be subject to the 2-year foreign residence requirement and would need to return to their home country or country of last legal residence before they could apply for an immigrant visa, permanent residence, or certain nonimmigrant work visas. USCIS officials said that reports of physicians not meeting their waiver agreements have been relatively rare. Some states and federal agencies that requested waivers also reported that they require physicians’ contracts to stipulate fees to be imposed if the physicians fail to meet their waiver agreements. These requirements include, for example, liquidated damages clauses, which set a particular amount that physicians agree to pay employers if the physicians break their employment contracts. Other practices that states reported included reporting problems with waiver physicians to state medical boards. States cited a number of factors affecting their ability to monitor or take other actions that they believed could help them ensure that physicians meet their waiver agreements. More than one-quarter of the states reported that funding and staffing constraints limited their ability to carry out monitoring activities. For example, four states commented that time and staff constraints limited their ability to conduct visits to physicians’ practice sites. Several states noted that they have little or no authority to take actions that would help ensure that physicians meet their waiver agreements. 
For example, one state commented that beyond reporting physicians who do not meet their waiver agreements to USCIS, it has no authority over waiver physicians. In addition, a few states noted that their ability to effectively monitor physicians is limited by the fact that they are not notified when USCIS grants waivers or approves transfers. Consequently, states may not know with certainty which physicians USCIS has authorized to work in, or move to or from, their states. One federal agency (ARC) cited two factors that positively affected its ability to help ensure that physicians meet their waiver agreements: the liquidated damages clauses for violating employment agreements that ARC requires to be in physicians’ employment contracts, and site visits by staff of ARC’s Office of Inspector General. According to a senior ARC official, these unannounced visits have occasionally resulted in the discovery of physicians working at sites other than those at which the physicians were authorized to work. The official commented that the visits have also had a deterrent effect. Although the use of J-1 visa waivers remains a major means of providing physicians to practice in underserved areas, HHS does not have the information needed to account for waiver physicians in its efforts to address physician shortages. Without such information, when considering where to place NHSC physicians, HHS has no systematic means of knowing whether the needs of a HPSA are already being met through waiver physicians. Our analysis indicates that some states could have had more waiver and NHSC physicians practicing primary care in HPSAs than HHS identified as needed, while other states were below HHS’s identified need. Although data were not available to determine the number of waiver physicians practicing primary care specifically in HPSAs, our analysis showed that in seven states the estimated number of waiver physicians practicing primary care in all locations (including HPSAs, MUA/Ps, and nondesignated areas), combined with the number of NHSC physicians practicing primary care in HPSAs at the end of fiscal year 2005, exceeded the number of physicians HHS identified as needed to remove the primary care HPSA designations in the state. In six of these seven states, the estimated number of primary care waiver and NHSC physicians exceeded by at least 20 percent the number needed to remove primary care HPSA designations. Meanwhile, in each of 25 states, the estimated number of primary care waiver and NHSC physicians was less than half of the state’s identified need for primary care physicians. The lack of information on waiver physicians could also affect HHS’s efforts to revise how it designates primary care HPSAs and other underserved areas. Multiple federal programs use HHS’s primary care HPSA designation system to allocate resources or provide benefits, but as we have reported, the designation system does not account for all primary care providers practicing in underserved areas, including waiver physicians. Specifically, waiver physicians practicing primary care in an area are not counted in the ratio of population to primary care physicians, one of the factors used to determine whether an area may be designated as a primary care HPSA. HHS has been working on a proposal—in process since 1998—to revise the primary care HPSA designation system, which would, among other things, account for waiver physicians, according to HHS officials. 
HHS officials acknowledged, however, that the department lacked the complete data on waiver physicians needed to implement such a provision. Recognizing the lack of a comprehensive database with information on J-1 visa waiver physicians and other international medical graduates, HHS in 2003 contracted with the Educational Commission for Foreign Medical Graduates (ECFMG)—the organization that sponsors all foreign physicians with J-1 visas participating in graduate medical education—to assess the feasibility of developing a database that would provide access to information on the U.S. practice locations of and populations served by international medical graduates, as well as other information about them. ECFMG completed the study and in 2004 submitted a draft report to HHS that included recommendations. As of September 2006, a final report had not been published. The use of J-1 visa waivers remains a major means of placing physicians in underserved areas of the United States, supplying even more physicians to these areas than NHSC programs. Although thousands of physicians practice in underserved areas through the use of J-1 visa waivers, comprehensive data on their overall numbers, practice locations, and practice specialties are not routinely collected and maintained by HHS. Only by surveying states and federal agencies that requested waivers were we able to collect information for this report. Having comprehensive data on waiver physicians could help HHS more effectively target the placement of NHSC physicians and implement proposed changes to designating underserved areas. To better account for physicians practicing in underserved areas through the use of J-1 visa waivers, we recommend that the Secretary of Health and Human Services collect and maintain data on waiver physicians—including information on their numbers, practice locations, and practice specialties—and use this information when identifying areas experiencing physician shortages and placing physicians in these areas. We provided a draft copy of this report to the five federal agencies that are involved with waivers for physicians to practice in underserved areas: ARC, DRA, HHS, the Department of Homeland Security, and the Department of State. We received written comments on the draft report from HHS (see app. III). HHS concurred with our recommendation that data should be collected and maintained to track waiver physicians. HHS noted that the department had also discussed, internally, tracking other physicians who are working under H-1B visas, stating that this would allow a more complete accounting of the actual number of physicians providing care in underserved areas. HHS commented that the department’s goal is to assure that the limited resources of the J-1 visa waiver program and other programs addressing areas and populations with limited access to health care professionals are targeted most effectively and that the availability of complete data on these additional providers would enhance the data used to identify such shortage areas. HHS also commented that the draft report may have overstated, to a degree, the “oversupply” of physicians in some states. HHS acknowledged that we made important adjustments in our analysis for physicians practicing nonprimary care and psychiatry. The department, however, expressed concern that our calculations did not address the fact that some J-1 visa waiver placements are not in HPSAs, referring to our finding that 23 percent of waivers requested in fiscal year 2005 were for physicians to practice outside of HPSAs.
We believe that applying this percentage to our analysis would be inappropriate for several reasons. First, this percentage pertained to waiver physicians practicing all specialties, including primary care, nonprimary care, and psychiatry, while our analysis focused on physicians practicing primary care. Further, the 23 percent figure represents waivers requested in only one fiscal year (fiscal year 2005), while our analysis covered waivers requested in 3 fiscal years. In addition, fiscal year 2005 was the only year in our analysis in which states could request waivers for physicians to practice in nondesignated areas. In our draft report, we did not use the term “oversupply,” but we acknowledge that our report should clearly specify the limitations in the data used in our analysis. To do so, we clarified the text describing our methodology and results. We also received technical comments from HHS and the Department of Homeland Security’s USCIS, which we incorporated as appropriate. Three agencies—ARC, DRA, and Department of State—said that they did not have comments on the draft report. We are sending copies of this report to the Secretary of Health and Human Services, the Secretary of Homeland Security, the Secretary of State, the Federal Co-chair of ARC, the Federal Co-chairman of DRA, and appropriate congressional committees. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (312) 220-7600 or aronovitzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This appendix presents the following information for each state as of the end of fiscal year 2005: (1) the number of primary care physicians the Department of Health and Human Services (HHS) identified as needed to remove primary care health professional shortage area (HPSA) designations, (2) our estimate of the number of J-1 visa waiver physicians practicing primary care, (3) the number of National Health Service Corps (NHSC) physicians practicing primary care, and (4) primary care waiver and NHSC physicians as a percentage of the HHS-identified need. To determine the need for primary care physicians in each state, we used the number of physicians HHS reported as needed to remove primary care HPSA designations in the state, a measurement used by HHS to identify the need for physicians. Specifically, we used summary data from HHS’s Health Resources and Services Administration on the number of additional full-time equivalent (FTE) primary care physicians needed to remove primary care HPSA designations in the state as of September 30, 2005. HHS determines the number of additional full-time primary care physicians needed to remove primary care HPSA designations for geographic areas, population groups, and facilities. For geographic areas, HHS’s threshold for the ratio of population to primary care physicians is 3,500 to 1 (or 3,000 to 1 under special circumstances); for population groups, it is 3,000 to 1; for facilities that are state or federal correctional institutions, it is 1,000 to 1. In calculating the ratio of population to primary care physicians, HHS does not take into account waiver physicians and most NHSC physicians. 
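To make the arithmetic behind the fourth data element explicit, the sketch below computes waiver and NHSC physicians as a percentage of the HHS-identified need. It is purely illustrative; the function name and sample figures are invented for the example and are not values from table 1.

    # Illustrative computation of item (4) above from items (1) through (3);
    # the sample figures are hypothetical, not values from table 1.
    def percent_of_identified_need(need_fte, waiver_primary_care_estimate, nhsc_primary_care_count):
        # need_fte: FTE primary care physicians HHS identified as needed to remove
        # the state's primary care HPSA designations (item 1).
        if need_fte == 0:
            return None  # no identified need to compare against
        return 100.0 * (waiver_primary_care_estimate + nhsc_primary_care_count) / need_fte

    # A state needing 120 FTEs, with an estimated 90 primary care waiver physicians
    # and 55 NHSC primary care physicians, would show about 121 percent of need.
    print(round(percent_of_identified_need(120, 90, 55)))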
In addition to HPSAs, waiver physicians may also practice in designated medically underserved areas or populations (MUA/P). HHS does not, however, have a similar measure of the number of physicians needed in MUA/Ps. To determine the number of NHSC physicians practicing primary care in HPSAs in each state as of September 30, 2005, we used data obtained from the Health Resources and Services Administration on the number of primary care physicians practicing through the NHSC Scholarship, NHSC Loan Repayment, and NHSC Ready Responder programs. NHSC physicians are required to practice in HPSAs. Although data are not available on the number of physicians granted J-1 visa waivers and practicing primary care in underserved areas at any given time, we estimated this number using data on waivers requested by states and by three federal agencies—the Appalachian Regional Commission (ARC), the Delta Regional Authority (DRA), and HHS. We estimated the number of waiver physicians practicing primary care in each state as of September 30, 2005, by using the number of waivers requested in fiscal years 2003 through 2005 for such physicians. This number represents the number of primary care physicians expected to be fulfilling the minimum 3-year employment contract at the end of fiscal year 2005 or who had waivers in process to do so. Our estimate includes all waiver physicians practicing primary care in the state (including those practicing in HPSAs, MUA/Ps, and nondesignated areas). Data were not available to distinguish waiver physicians practicing primary care in HPSAs from those practicing in MUA/Ps or nondesignated areas. Table 1 shows the estimated number of waiver and NHSC physicians practicing primary care at the end of fiscal year 2005 and the number of physicians needed to remove primary care HPSA designations in each state. This appendix summarizes states’ and federal agencies’ responses to selected questions from GAO’s surveys, as well as data obtained from ARC, DRA, and HHS on their waiver requests by state. The following tables present data on the number of waivers states and federal agencies requested in each of fiscal years 2003 through 2005, in total (table 2), by federal agency (table 3), by practice specialty (table 4), and by practice setting (table 5). We also present data on states’ and federal agencies’ policies for requesting waivers (table 6). In addition to the contact named above, Kim Yamane, Assistant Director; Ellen W. Chu; Jill Hodges; Julian Klazkin; Linda Y.A. McIver; and Perry Parsons made key contributions to this report. Health Professional Shortage Areas: Problems Remain with Primary Care Shortage Area Designation System. GAO-07-84. Washington, D.C.: October 24, 2006. Foreign Physicians: Preliminary Findings on the Use of J-1 Visa Waivers to Practice in Underserved Areas. GAO-06-773T. Washington, D.C.: May 18, 2006. State Department: Stronger Action Needed to Improve Oversight and Assess Risks of the Summer Work Travel and Trainee Categories of the Exchange Visitor Program. GAO-06-106. Washington, D.C.: October 14, 2005. Health Workforce: Ensuring Adequate Supply and Distribution Remains Challenging. GAO-01-1042T. Washington, D.C.: August 1, 2001. Health Care Access: Programs for Underserved Populations Could Be Improved. GAO/T-HEHS-00-81. Washington, D.C.: March 23, 2000. Foreign Physicians: Exchange Visitor Program Becoming Major Route to Practicing in U.S. Underserved Areas. GAO/HEHS-97-26. Washington, D.C.: December 30, 1996. 
Health Care Shortage Areas: Designations Not a Useful Tool for Directing Resources to the Underserved. GAO/HEHS-95-200. Washington, D.C.: September 8, 1995. | Many U.S. communities face difficulties attracting physicians. To address this problem, states and federal agencies have turned to foreign physicians who have just completed graduate medical education in the United States under J-1 visas. Ordinarily, these physicians must return home after completing their programs, but this requirement can be waived at the request of a state or federal agency if the physician agrees to practice in an underserved area. In 1996, GAO reported that J-1 visa waivers had become a major source of physicians for underserved areas but were not well coordinated with Department of Health and Human Services (HHS) programs for addressing physician shortages. GAO was asked to examine (1) the number of waivers requested by states and federal agencies; (2) waiver physicians' practice specialties, settings, and locations; and (3) the extent to which waiver physicians are accounted for in HHS's efforts to address physician shortages. GAO surveyed states and federal agencies about waivers they requested in fiscal years 2003-2005 and reviewed HHS data. The use of J-1 visa waivers remains a major means of providing physicians to practice in underserved areas of the United States. More than 1,000 waivers were requested in each of fiscal years 2003 through 2005 by states and three federal agencies--the Appalachian Regional Commission, the Delta Regional Authority, and HHS. At the end of fiscal year 2005, the estimated number of physicians practicing in underserved areas through J-1 visa waivers exceeded the number practicing there through the National Health Service Corps (NHSC)--HHS's primary mechanism for addressing physician shortages. In contrast to a decade ago, when federal agencies requested the vast majority of waivers, states have become the primary source of J-1 visa waiver requests, accounting for 90 percent or more of waiver requests in fiscal years 2003 through 2005. States and federal agencies requested waivers for physicians to work in a variety of practice specialties, settings, and locations. In fiscal year 2005, a little less than half of the waiver requests were for physicians to practice exclusively primary care. More than three-quarters of the waiver requests were for physicians to work in hospitals or private practices, and about half were for physicians to practice in rural areas. HHS does not have the information needed to account for waiver physicians in its efforts to address physician shortages. Without such information, when considering where to place NHSC physicians, HHS has no systematic means of knowing if an area's needs are already being met by waiver physicians. |
The Recovery Act provides funding to states for restoration, repair, and construction of highways and other activities allowed under the Federal-Aid Highway Surface Transportation Program and for other eligible surface transportation projects. In March 2009, $26.7 billion was apportioned to all 50 states and the District for highway infrastructure and other eligible projects. The Recovery Act requires that 30 percent of these funds be suballocated, primarily based on population, for metropolitan, regional, and local use. Highway funds are apportioned to states through federal-aid highway program mechanisms, and states must follow existing program requirements, which include ensuring the project meets all environmental requirements associated with the National Environmental Policy Act (NEPA), paying a prevailing wage consistent with federal Davis-Bacon Act requirements, complying with goals to ensure disadvantaged businesses are not discriminated against in the awarding of construction contracts, and using American-made iron and steel in accordance with Buy America program requirements. While the maximum federal fund share of highway infrastructure investment projects under the existing federal-aid highway program is generally 80 percent, under the Recovery Act, it is 100 percent. The Recovery Act appropriated $8.4 billion to fund public transit throughout the country mainly through three existing Federal Transit Administration (FTA) grant programs, including the Transit Capital Assistance Program and the Fixed Guideway Infrastructure Investment program. The majority of the public transit funds—$6.9 billion (82 percent)—was apportioned for the Transit Capital Assistance Program, with $6 billion designated for the urbanized area formula grant program and $766 million designated for the nonurbanized area formula grant program. Under the urbanized area formula grant program, Recovery Act funds were apportioned to large and medium urbanized areas—which in some cases include a metropolitan area that spans multiple states—throughout the country according to existing program formulas. Recovery Act funds were also apportioned to states for small urbanized areas and nonurbanized areas under the formula grant programs using the program’s existing formula. Transit Capital Assistance Program funds may be used for such activities as facilities renovation or construction, vehicle replacements, preventive maintenance, and paratransit services. Up to 10 percent of apportioned Recovery Act Transit Capital Assistance funds may also be used for operating expenses. The Fixed Guideway Infrastructure Investment program was appropriated $750 million, of which $742.5 million was apportioned by formula directly to qualifying urbanized areas. The funds may be used for any capital projects to maintain, modernize, or improve fixed guideway systems. The maximum federal fund share for projects under the Recovery Act’s Transit Capital Assistance Program and the Fixed Guideway Infrastructure Investment program is 100 percent; the federal share under the existing programs is generally 80 percent. As they work through the state and regional transportation planning process, designated recipients of funds apportioned for transit—typically public transit agencies and metropolitan planning organizations (MPO)—develop a list of transit projects that project sponsors (typically transit agencies) submit to FTA for Recovery Act funding.
FTA reviews the project sponsors’ grant applications to ensure that projects meet eligibility requirements and then obligates Recovery Act funds by approving the grant application. Project sponsors must follow the requirements of the existing programs, which include ensuring the projects funded meet all regulations and guidance pertaining to the Americans with Disabilities Act (ADA), pay a prevailing wage consistent with federal Davis-Bacon Act requirements, and comply with goals to ensure disadvantaged businesses are not discriminated against in the awarding of contracts. Three-quarters of Recovery Act funds provided for highway infrastructure investment have been obligated nationwide and in the 16 states and the District that are the focus of our review. For example, as of November 16, 2009, $20.4 billion of the funds had been obligated for just over 8,800 projects nationwide and $4.2 billion had been reimbursed. In the 16 states and the District, $11.9 billion had been obligated for nearly 4,600 projects and $1.9 billion had been reimbursed. Table 1 shows the funds apportioned and obligated nationwide and in selected states as of November 16, 2009. As of November 16, 2009, $4.2 billion had been reimbursed nationwide by the Federal Highway Administration (FHWA), including $1.9 billion reimbursed to the 16 states and the District. These amounts represent 20 percent of the Recovery Act highway funding obligated nationwide and 16 percent of the funding obligated in the 16 states and the District. As we reported in our September report, because it can take 2 or more months for a state to bid and award the work to a contractor and have work begin after funds have been obligated for specific projects, it may take months before states request reimbursement from FHWA. However, reimbursements have increased considerably over time, from $10 million in April to $4.2 billion in mid-November. Reimbursements have also increased considerably since we reported in September, when $604 million had been reimbursed to the 16 states and the District and $1.4 billion had been reimbursed nationwide. See figure 1. While reimbursement rates have been increasing, wide differences exist across states. Some differences we observed among the states were related to the complexity of the types of projects states were undertaking and the extent to which projects were being administered by local governments. For example, Illinois and Iowa have the highest reimbursement rates—36 percent and 53 percent of obligations, respectively—far above the national average. Illinois and Iowa also have a far larger percentage of funds devoted to resurfacing projects than other states—as discussed in the next section, resurfacing projects can be quickly obligated and bid. Florida and California have among the lowest reimbursement rates, less than 2 percent and 4 percent of obligations, respectively. Florida is using Recovery Act funds for more complex projects, such as constructing new roads and bridges and adding lanes to existing highways. Florida officials also told us that the pace of awarding contracts has been generally slower in areas where large numbers of projects are being administered by local agencies. In California, state officials said that projects administered by local agencies may take longer to reach the reimbursement phase than state projects due to additional steps required to approve local highway projects.
For example, highway construction contracts administered by local agencies in California call for a local public notice and review period, which can add nearly 6 weeks to the process. In addition, California state officials stated that localities tend to seek reimbursement in one lump sum at the end of a project, which can contribute to reimbursement rates not matching levels of ongoing construction. Almost half of Recovery Act highway obligations nationally have been for pavement improvements—including resurfacing, rehabilitating, and reconstructing roadways—consistent with the use of Recovery Act funding described in our previous reports. Specifically, $4.5 billion, or 22 percent, is being used for road resurfacing projects, while $5.2 billion, or 26 percent, is being used for reconstructing or rehabilitating deteriorated roads. As we have reported, many state officials told us they selected a large percentage of resurfacing and other pavement improvement projects because those projects did not require extensive environmental clearances, were quick to design, could be quickly obligated and bid, could employ people quickly, and could be completed within 3 years. In addition to pavement improvement, other projects that have significant funds obligated include pavement widening (reconstruction that includes adding new capacity to existing roads), with $3 billion (15 percent) obligated, and bridge replacement and improvements, with $2 billion (10 percent) obligated. Construction of new roads and bridges accounted for 6 percent and 3 percent of funds obligated, respectively. Figure 2 shows obligations by the types of road and bridge improvements being made. According to California officials, under a state law enacted in March 2009, 62.5 percent of funds went directly to local governments for projects of their selection, while the remaining 37.5 percent is being used mainly for state highway rehabilitation and maintenance projects that, due to significant funding limitations, would not have otherwise been funded. According to California officials, distributing a majority of funds to localities allows a number of locally important projects to be funded. Mississippi used over half its Recovery Act funds for pavement improvement projects and around 14 percent of funds for pavement widening. The Executive Director of the state transportation department told us the Recovery Act allowed Mississippi to undertake needed projects and to enhance the safety and performance of the state’s highway system. However, the Executive Director also said that the act’s requirements that priority be given to projects that could be completed in 3 years resulted in missed opportunities to address long-term needs, such as upgrading a state roadway to interstate highway standards, that would have likely had a more lasting impact on Mississippi’s infrastructure and economic development. In Florida, 36 percent of funds have been obligated for pavement-widening projects (compared with 15 percent nationally) and 23 percent for construction of new roads and bridges (compared with 9 percent nationally), while in Ohio, 32 percent of funds have been obligated for new road and bridge construction. Pennsylvania targeted Recovery Act funds to reduce the number of structurally deficient bridges in the state.
As of October 2009, 31 percent of funds in Pennsylvania were obligated for bridge improvement and replacement (compared with 10 percent nationally), in part because a significant percentage (about 26 percent, as of 2008) of the state’s bridges are structurally deficient. Massachusetts has used most of its Recovery Act funds to date for pavement improvement projects, including 30 percent of funds for resurfacing projects and 43 percent of funds for reconstructing or rehabilitating deteriorated roads. A Massachusetts official told us that the focus of its projects for reconstructing and rehabilitating roads, as well as the focus of future project selections, is to select projects that promote the state’s broader long-term economic development goals. For example, according to a Massachusetts official, the Fall River development park project supports an economic development project and includes construction of a new highway interchange and new access roadways to a proposed executive park. FHWA officials expressed concern that Massachusetts may be pursuing ambitious projects that run the risk of not meeting Recovery Act requirements that all funds be obligated by March 2010. Recovery Act highway funding is apportioned under the rules governing the Federal-Aid Highway Program generally and its Surface Transportation Program in particular, and states have wide latitude and flexibility in which projects are selected for federal funding. However, the Recovery Act tempers that latitude with requirements that do not exist in the regular program, including requirements that states do the following:
● Ensure that all apportioned Recovery Act funds—including suballocated funds—are obligated within 1 year (before Mar. 2, 2010). The Secretary of Transportation is to withdraw and redistribute to eligible states any amount that is not obligated within this time frame. Any Recovery Act funds that are withdrawn and redistributed are available for obligation until September 30, 2010.
● Give priority to projects that can be completed within 3 years and to projects located in economically distressed areas. Distressed areas are defined by the Public Works and Economic Development Act of 1965, as amended. According to this act, to qualify as an economically distressed area, the area must (1) have a per capita income of 80 percent or less of the national average; (2) have an unemployment rate that is, for the most recent 24-month period for which data are available, at least 1 percent greater than the national average unemployment rate; or (3) be an area the Secretary of Commerce determines has experienced or is about to experience a “special need” arising from actual or threatened severe unemployment or economic adjustment problems resulting from severe short- or long-term changes in economic conditions. (A minimal sketch of this eligibility screen appears after this list.) In response to our recommendation, FHWA, in consultation with the Department of Commerce, issued guidance on August 24, 2009, that provided criteria for states to use for designating “special need” areas for the purpose of Recovery Act funding.
● Certify that the state will maintain the level of spending for the types of transportation projects funded by the Recovery Act that it planned to spend the day the Recovery Act was enacted. As part of this certification, the governor of each state is required to identify the amount of funds the state plans to expend from state sources from February 17, 2009, through September 30, 2010.
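The statutory screen in the second requirement reduces to two quantitative tests joined by an "or," plus a discretionary "special need" determination. The sketch below expresses that screen as described in this report; it is not FHWA's or Commerce's determination process, and the function name and figures are illustrative assumptions.

# Minimal sketch of the economically distressed area screen summarized above
# (Public Works and Economic Development Act criteria); not FHWA's or Commerce's
# actual process. Names and figures are illustrative.
def is_economically_distressed(per_capita_income,
                               national_per_capita_income,
                               unemployment_rate_24mo,
                               national_unemployment_rate_24mo,
                               special_need=False):
    low_income = per_capita_income <= 0.80 * national_per_capita_income
    high_unemployment = (unemployment_rate_24mo
                         >= national_unemployment_rate_24mo + 1.0)  # "at least 1 percent greater"
    return low_income or high_unemployment or special_need

# Illustrative county: income at 85 percent of the national average fails the first
# test, but unemployment 1.5 points above the national 24-month average passes the second.
print(is_economically_distressed(34_000, 40_000, 9.5, 8.0))  # True

Because the tests are alternatives, meeting any one of them is sufficient, so adding a "special need" path can only expand the set of qualifying areas.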
The first Recovery Act requirement is that states have to ensure that all apportioned Recovery Act funds—including suballocated funds—are obligated within 1 year. Over 75 percent of apportioned Recovery Act highway funds had been obligated as of November 16, 2009, both nationwide and among the 16 states and the District. Nine states and the District have higher obligation rates than the national average, including Iowa and the District—for which FHWA has obligated 96 percent and 86 percent of funds, respectively. Conversely, Arizona, Massachusetts, Ohio, and Texas have obligation rates of between 52 percent and 62 percent of apportioned funds. FHWA officials and state department of transportation officials in the states we reviewed generally believe that these states are on track to meet the March 2010 1-year deadline. However, two factors may affect some states’ ability to meet the 1-year requirement. First, many state and local governments are awarding contracts for less than the original estimated cost. This allows states to use the savings from lower contract awards for other projects, but additional projects funded with deobligated funds must be identified quickly. In order to use the savings resulting from the lower contract awards, a state must request FHWA to deobligate the difference between the official estimate and the contract award amount and then obligate funds for a new project. Our analysis of contract award data shows that for the 10 states and the District, the majority of contracts are being awarded for less than the original cost estimates. While there is a variation in the number of contracts being awarded for lower than their original estimates, every state we collected information from awarded at least half of its contracts for less than the original cost estimates. Some states had an extremely high number of contracts awarded at lower amounts. For example, California, Georgia, and Texas awarded more than 90 percent of their contracts for less than their cost estimates. We also found a significant variation in both the average amount and the range of the savings from contracts awarded at lower amounts. For example, in the District and Georgia, such contracts averaged more than 30 percent less than original state estimates, while in Colorado and Massachusetts, such contracts averaged under 15 percent less than original state estimates. In addition, there is also a significant range in individual projects, with the savings ranging from less than 1 percent under estimates in a number of states to almost 55 percent under estimates in New York and over 90 percent under estimates in Illinois. Federal regulations require states to promptly review and adjust project cost estimates on an ongoing basis and at key decision points, such as when the bid is approved. Many state officials told us that their state has already started the process of ensuring funds are deobligated and obligated to other highway programs and projects by the 1-year deadline. For example, in Colorado, officials are planning to use Recovery Act funds that are being deobligated by FHWA for 5 new projects, while in California, FHWA deobligated approximately $108.5 million and the state has identified 16 new projects for Recovery Act funding. FHWA officials told us they recognize the need to develop a process to monitor and ensure deobligation of Recovery Act funds from known savings before the 1-year deadline.
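The deobligation arithmetic described above is simple to tally. The sketch below sums, across contracts, the difference between the official estimate and the award amount whenever the award came in lower; the contract figures are invented for illustration and do not represent any state's actual data.

# Minimal sketch of the contract-savings bookkeeping described above; all figures
# are illustrative, not actual state data.
contracts = [
    # (official_estimate, award_amount) in dollars
    (2_500_000, 1_900_000),
    (1_200_000, 1_250_000),   # awarded above the estimate; nothing to deobligate
    (4_000_000, 3_100_000),
]

savings = sum(max(estimate - award, 0) for estimate, award in contracts)
under_pcts = [100 * (estimate - award) / estimate
              for estimate, award in contracts if award < estimate]

print(f"Available to deobligate and reobligate: ${savings:,.0f}")   # $1,500,000 here
print(f"Average savings on under-estimate awards: {sum(under_pcts) / len(under_pcts):.1f}%")

A state would then need to have new eligible projects ready so that FHWA can obligate the recovered amount before the March 2010 deadline.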
A second factor that may affect some states’ ability to meet the 1-year requirement is that obligations for projects in suballocated areas, while increasing, are generally lagging behind obligations for statewide projects in most states and lagging considerably behind in a few states. In the 16 states and the District, 79 percent of apportioned statewide funds had been obligated as of October 31, 2009, while 65 percent of suballocated funds had been obligated. Figure 3 shows obligations for statewide and suballocated areas in the 16 states and the District. As shown in figure 3, and as we reported in September 2009, FHWA has obligated substantially fewer funds suballocated for metropolitan and local areas in three states. While the national average for obligations of Recovery Act funds for suballocated areas is 63 percent, as of October 31, New Jersey, Massachusetts, and Arizona had obligation rates of 34 percent, 31 percent, and 18 percent of these funds, respectively. Officials in these three states cited a number of reasons for this—including lack of familiarity by local officials with federal requirements and increased staff workload associated with Recovery Act projects—and reported they were taking a number of actions to increase obligations, such as imposing internal deadlines on local governments to identify and submit projects. As of October 2009, Arizona had awarded four contracts (one more than it had as of September 2009) representing $29 million of the $157 million of suballocated funds. This represents 18 percent of suballocated funds—a decline from the 21 percent of suballocated funds that had been obligated when we reported in September 2009. Arizona Department of Transportation officials told us that although one new contract had been awarded, the state’s total obligation of suballocated funds had declined because some suballocated funds were deobligated after more contracts were awarded for less than the estimated amount. Officials also told us that if local governments are not able to advertise contracts for construction in suballocated areas prior to the March 2010 deadline, the state would use Recovery Act funds on “ready-to-go” statewide highway projects in those areas. Similarly, officials in two localities told us that if projects intended for Recovery Act funds were in danger of not having funds obligated by the deadline, they would use those funds on projects now slated to be funded with state dollars and use state funding for other projects. Although states are working to have all of their suballocated funds obligated before March 2010, failure to do so will not prohibit them from participating in the redistribution of Recovery Act funds after March 2, 2010. The Secretary of Transportation is to withdraw highway funds, including suballocated funds, that are not obligated before March 2, 2010. A state that has obligated all of the funds that were apportioned for use by the state (those that were not suballocated) is eligible to participate in this redistribution, regardless of whether all of the state’s suballocated funds have been obligated. FHWA has stated that it is in the process of developing guidance on how it will redistribute any Recovery Act funding that remains unobligated 1 year after apportionment.
According to DOT officials, consistent with guidance in the Recovery Act, FHWA currently plans to model this redistribution after the process used each year in the regular federal-aid highway program to redistribute obligation authority, allowing Recovery Act funds redistributed to the states to be available for any qualified project in a state. The second Recovery Act requirement is to give priority to projects located in economically distressed areas. In July and September 2009, we identified substantial variation in the extent to which states prioritized projects in economically distressed areas and how they identified these areas. For example, we found instances of states developing their own eligibility requirements for economically distressed areas using data or criteria not specified in the Public Works and Economic Development Act (Public Works). State officials told us they did so to respond to rapidly changing economic conditions. In response to our recommendation, FHWA, in consultation with the Department of Commerce, issued guidance to the states in August 2009 on identifying and giving priority to economically distressed areas and criteria to identify “special need” economically distressed areas that do not meet the statutory criteria in the Public Works act. In its guidance, FHWA directed states to maintain information as to how they identified, vetted, examined, and selected projects located in economically distressed areas and to provide FHWA’s division offices with documentation that demonstrates satisfaction of the “special need” criteria. FHWA issued additional questions and answers relating to economically distressed areas in November 2009. Widespread designations of special needs areas give added preference to highway projects for Recovery Act funding; however, they also make it more difficult to target Recovery Act highway funding to areas that have been the most severely impacted by the economic downturn. Three of the states we reviewed—Arizona, California, and Illinois—had each developed and applied its own criteria for identifying economically distressed areas, and in two of the three states, applying the new criteria increased the number of areas considered distressed. In California, the number of counties considered distressed rose from 49 to all 58 counties, while in Illinois, the number of distressed areas increased from 74 to 92 of the state’s 102 counties. All 15 counties in Arizona were considered distressed under the state’s original determination and remained so when the state applied the revised criteria. FHWA officials told us they expected the number of “special need” distressed areas to increase when the new guidance was applied. We plan to continue to monitor the states’ implementation of DOT’s economically distressed area guidance. The third Recovery Act requirement is for states to certify that they will maintain the level of state effort for programs covered by the Recovery Act. As we reported in September 2009, most states revised the initial explanatory or conditional certifications they submitted to DOT after DOT’s April 22, 2009, guidance required states to recertify without conditions. All states that submitted conditional certifications submitted a second maintenance-of-effort certification to DOT without conditions, and DOT concluded that the form of each state certification was consistent with its April guidance.
In June 2009, FHWA began to review each state’s maintenance-of-effort calculation to determine whether the method of calculation was consistent with DOT guidance and whether the amounts reported by the states for planned expenditures for highway investment were reasonable. For example, FHWA division offices evaluated, among other things, whether the amount certified (1) covered the period from February 17, 2009, through September 30, 2010, and (2) included in-kind contributions. FHWA division staff then determined whether the state certification needed (1) no further action, (2) further assessment, or (3) additional information. In addition, according to FHWA officials, their assessments indicated that FHWA needed to clarify the types of projects funded by the appropriations and the types of state expenditures that should be included in the maintenance-of-effort certifications. As a result of these findings, DOT issued guidance in June, July, and September 2009 and plans to issue additional guidance on these issues. In August 2009, FHWA staff in headquarters reviewed the FHWA division staff findings for each state and proceeded to work with each FHWA division office to make sure their states submit revised certifications that will include the correct planned expenditures for highway investment—including aid to local agencies. FHWA officials said that of the 16 states and the District that we reviewed for this study, they currently expect to have 12 states submit revised certifications for state highway spending, while an additional 2 states are currently under review and may have to revise their certifications. DOT officials stated they have not determined when they will require the states to submit their revised consolidated certification. According to these officials, they want to ensure that all programs covered by the Recovery Act maintenance-of-effort provisions have completed their assessments and that the states have enough guidance so that this is the last time they have to amend their certifications. Most state officials we spoke with are committed to trying to meet their maintenance-of-effort requirements, but some are concerned about meeting the requirements. As we have previously reported, states face drastic fiscal challenges, and most states are estimating that their fiscal year 2009 and 2010 revenue collections will be well below estimated amounts. Although the state officials we spoke with are committed to trying to meet the maintenance-of-effort requirements, officials from seven state departments of transportation told us the current decline in state revenues creates major challenges in doing so. For example, Iowa, North Carolina, and Pennsylvania transportation officials said it may be more difficult for their departments to maintain their levels of transportation spending if state gas tax and other revenues, which are used to fund state highway and state-funded transportation projects, decline. In addition, Georgia officials stated that reduced state gas-tax revenues pose a challenge to meeting the state’s certified level of effort. Lastly, Mississippi and Ohio transportation officials stated that if their state legislatures reduce their respective department’s budget for fiscal year 2010 or 2011, the department may have difficulty maintaining its certified spending levels.
For Recovery Act transit funds, we focused our review on the Transit Capital Assistance Program and the Fixed Guideway Infrastructure Investment program, which received approximately 91 percent of the Recovery Act transit funds, and on seven selected states that received funds from these programs. As of November 5, 2009, about $6.7 billion of the Recovery Act’s Transit Capital Assistance Program and the Fixed Guideway Infrastructure Investment program funds had been obligated nationwide. Almost 88 percent of Recovery Act Transit Capital Assistance Program obligations are being used for upgrading transit facilities, improving bus fleets, and conducting preventive maintenance. In March 2009, $6.9 billion was apportioned to states and urbanized areas in all 50 states, the District, and five territories for transit projects and eligible transit expenses under the Recovery Act’s Transit Capital Assistance Program and $750 million was apportioned to qualifying urbanized areas under the Recovery Act’s Fixed Guideway Infrastructure Investment program. As of November 5, 2009, almost $6 billion of the Transit Capital Assistance Program funds had been obligated nationwide and $738 million of the Fixed Guideway Infrastructure Investment program funds had been obligated nationwide. Almost 88 percent of Recovery Act Transit Capital Assistance Program obligations are being used for upgrading transit facilities, improving bus fleets, and conducting preventive maintenance. Figure 4 shows Recovery Act Transit Capital Assistance Program obligations for urbanized and nonurbanized areas, by project type. As we reported in September 2009, many transit agency officials told us they decided to use Recovery Act funding for these types of projects since they are high-priority projects that support their agencies’ short- and long-term goals, can be started quickly, improve safety, or would otherwise not have been funded. This continues to be the case. Following are some examples: Transit infrastructure facilities: $2.8 billion, or 47 percent, of these funds obligated nationally have been for transit infrastructure construction projects and related activities, which range from large-scale projects, such as upgrading power substations, to a series of smaller projects, such as installing enhanced bus shelters. For example, in Pennsylvania, the Lehigh and Northampton Transportation Authority will implement a new passenger information technology system, install enhanced bus shelters and signage, and fund a new maintenance facility. Elsewhere, in North Carolina, the Charlotte Area Transit System will renovate its operating and maintenance facilities. In addition, in California, the San Diego Association of Governments plans to upgrade stations on a light-rail line and replace a section of a railroad trestle bridge. Bus fleets: $2 billion, or 33 percent, of Recovery Act funds obligated nationally have been for bus purchases or rehabilitation to replace aging vehicles or expand an agency’s fleet. For example, in Pennsylvania, the Lehigh and Northampton Transportation Authority plans to purchase 5 heavy-duty hybrid buses and the Southeastern Pennsylvania Transportation Authority plans to purchase 40 hybrid buses. In Iowa, the state’s smaller transit agencies are combining bus orders through the state’s department of transportation for 160 replacement buses and 20 buses to expand bus fleets in areas of growth around the state.
In Colorado, both the Regional Transportation District in Denver and the Fort Collins-Transfort agency plan to purchase 6 buses each. Preventive maintenance: Another $515 million, or 9 percent, has been obligated for preventive maintenance. FTA considers preventive maintenance projects eligible capital expenditures under the Transit Capital Assistance Program. The remaining obligations have been used for rail car purchases and rehabilitation, leases, training, financing costs, and, in some limited cases, operating expenses—all of which are eligible expenditures. In particular, transit agencies reported using $5.2 million, or less than 1 percent, of the Transit Capital Assistance Program funds obligated by FTA for operating expenses. For example, the Des Moines transit agency has proposed to use approximately $788,800 for operating expenses, such as costs associated with personnel, facilities, and fuel. Funds from the Recovery Act Fixed Guideway Infrastructure Investment program may also be used for transit improvement projects; however, this is limited to fixed guideway transit facilities and equipment. Recipients may use the funding for any capital purpose, including purchases of rolling stock; improvements to rail tracks, signals, and communications; and preventive maintenance. For example, in New York, FTA approved a $254.4 million grant from Recovery Act Fixed Guideway Infrastructure Investment funds for the Metropolitan Transportation Authority for a variety of maintenance and safety improvement projects, including the Jackson Avenue Vent Plant Rehabilitation project in Long Island City. In addition, northeastern Illinois’s Regional Transportation Authority is planning on using $95.5 million that was obligated from the Fixed Guideway Infrastructure Investment program to provide capital assistance for the modernization of existing fixed guideway systems. Metra (a regional commuter rail system that is part of the authority) plans to use these funds, in part, to repair tracks and rehabilitate stations. As we reported in September, recipients of transit Recovery Act funds, such as state departments of transportation and transit agencies, are subject to multiple reporting requirements. First, under section 1201(c) of the Recovery Act, recipients of transportation funds must submit periodic reports to DOT on the amount of federal funds appropriated, allocated, obligated, and reimbursed; the number of projects put out to bid, awarded, or for which work has begun or been completed; and the number of direct and indirect jobs created or sustained, among other things. DOT is required to collect and compile this information for Congress, and it issued its first report to Congress in May 2009. Second, under section 1512, recipients of Recovery Act funds, including but not limited to transportation funds, are to report quarterly on a number of measures, such as the use of funds and the number of jobs created or retained. To help recipients meet these reporting requirements, DOT and the Office of Management and Budget (OMB) have provided training and guidance. For example, DOT, through FTA, conducted a training session consisting of six webinars to provide information on the 1201(c) reporting requirements, such as who should submit these reports and what information is required. In addition, FTA issued guidance in September 2009 that provided a variety of information, including definitions of data elements. OMB also issued implementing guidance for section 1512 recipient reporting.
For example, on June 22, 2009, OMB issued guidance to dispel some confusion related to reporting on jobs created and retained by providing, among other information, additional detail on how to calculate the relevant numbers. Despite this guidance, we reported in September that transit officials expressed concerns and confusion about the reporting requirement, and therefore we recommended that DOT continue its outreach to transit agencies to identify common problems in accurately fulfilling reporting requirements and provide additional guidance, as appropriate. In responding to our recommendation, DOT said it had conducted outreach, including providing technical assistance training and guidance, to recipients and will continue to assess the need to provide additional information. Through our ongoing audit work, we continued to find confusion among recipients about how to calculate the number of jobs created and saved that DOT and OMB require for their reporting requirements. First, a number of transit agencies continue to express confusion about calculating the number of jobs resulting from Recovery Act funding, especially with regard to using Recovery Act funds for purchasing equipment, such as new buses. For the section 1201(c) reporting requirement, transit agencies are not to report any jobs created or sustained from the purchase of buses. However, for the section 1512 recipient reporting requirement, transit agencies were required to report jobs created or retained from bus purchases, as long as these purchases were directly from the bus manufacturers and not from dealer lots. FTA held an outreach session in September 2009 with representatives from bus manufacturers and the American Public Transportation Association in an effort to standardize 1512 reporting methods and clarify recipient responsibilities under the federal recipient reporting requirements. FTA, the represented manufacturers, and the American Public Transportation Association discussed a standardized methodology that was established by OMB for calculating the number of jobs created or retained by a bus purchase with Recovery Act funds. Under the agreed-upon methodology, bus manufacturers are to divide their total U.S. employment by their total U.S. production to determine a standard “full-time equivalents” (FTE)-to-production ratio. The bus manufacturers would then multiply that FTE-to-production ratio by a standard full-time schedule in order to provide transit agencies with a standard “direct job hours”-to-production ratio. This ratio is to include hours worked by administrative and support staff, so that the ratio reflects total employment. Bus manufacturers are to provide this ratio to the grantees, usually transit agencies, which the grantee then can use to calculate the number of jobs created or retained by a bus purchase. FTA officials told us that the selected group of bus manufacturers and FTA agreed that this methodology—which allows manufacturers to report on all purchases, regardless of size—simplifies the job reporting process. According to guidance, it is the responsibility of the transit agency to contact the manufacturer and ask how many jobs were related to that order. The manufacturers, in turn, are responsible for providing the transit agencies with information on the jobs per bus ratio at the time when buses are delivered. If the manufacturers cannot give the agencies a jobs estimate, the transit agencies must develop their own estimate.
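The agreed-upon arithmetic can be laid out end to end. The manufacturer-side ratios below follow the description above; the agency-side step, which prorates by the Recovery Act-funded share and converts hours back to FTEs using a reporting-period denominator, is our reading of how OMB's general FTE formula would be applied and should be treated as an assumption. All figures are illustrative.

# Minimal sketch of the bus-purchase job calculation described above; the agency-side
# proration and FTE conversion reflect our reading of OMB's general formula (total hours
# worked divided by the hours in a full-time schedule). All figures are illustrative.

# Manufacturer side: ratio reported to the purchasing transit agency.
us_employment = 1_200              # total U.S. employees (illustrative)
us_production = 2_400              # total U.S. bus production per year (illustrative)
full_time_schedule_hours = 2_080   # standard full-time annual schedule (illustrative)

fte_per_bus = us_employment / us_production                        # 0.5 FTE per bus
direct_job_hours_per_bus = fte_per_bus * full_time_schedule_hours  # 1,040 hours per bus

# Agency side: 40 buses, fully Recovery Act funded, reported against a
# 1,040-hour (two-quarter) denominator.
buses_purchased = 40
recovery_act_share = 1.0
reporting_period_hours = 1_040

hours_attributable = buses_purchased * direct_job_hours_per_bus * recovery_act_share
ftes_reported = hours_attributable / reporting_period_hours
print(f"{ftes_reported:.1f} FTEs reported")  # 40.0 in this example

As the Pennsylvania examples that follow illustrate, the reporting-period denominator chosen in the last step accounts for much of the variation among recipients.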
While representatives from three bus manufacturers we interviewed were using the agreed-upon methodology, they highlighted a number of different issues related to job estimates: Representatives from two bus manufacturers reported not knowing about the FTA methodology and used their own measures for jobs created or retained. For example, representatives from two manufacturers told us that the labor-hours required to produce a bus formed the basis for their calculation of FTEs and was then pro-rated based upon the amount of production taking place in the United States and the purchase amount funded by Recovery Act dollars. One bus manufacturer representative said it was difficult to prorate the jobs calculation by the proportion funded by the Recovery Act, as the agreed-upon methodology requires, since they did not always receive this information from the transit agencies. According to FTA officials, the manufacturer is only responsible for reporting the ratio of jobs created or retained per bus produced; the purchasing transit agencies are responsible for the prorating and final calculation of jobs created or retained. However, even bus manufacturers that were otherwise aware of FTA guidance and following FTA’s methodology would sometimes calculate the total number of jobs created or retained by a purchase. The second area of confusion we found involved the methodology recipients were using to calculate full-time equivalents for the recipient reporting requirements. As we reported in our November 2009 report on recipient reporting, the data element on jobs created or retained expressed in FTEs raised questions and concerns for some recipients. In section 5.2 of the June 22 guidance, OMB states that “the estimate of the number of jobs required by the Recovery Act should be expressed as FTE, which is calculated as the total hours worked in jobs retained divided by the number of hours in a full-time schedule, as defined by the recipient.” Further, “the FTE estimates must be reported cumulatively each calendar quarter.” In addition to issuing guidance, OMB and DOT provided several types of clarifying information to recipients as well as opportunities to interact and ask questions or receive help with the reporting process. However, FTE calculations varied depending on the period of performance the recipient reported on, and we found examples where the issue of a project period of performance created significant variation in the FTE calculation. For example, in Pennsylvania, each of four transit entities we interviewed used a different denominator to calculate the number of full-time equivalent jobs they reported on their recipient reports for the period ending September 30, 2009. Southeastern Pennsylvania Transportation Authority in Philadelphia used 1,040 hours as its denominator since it had projects under way in two previous quarters. Port Authority of Allegheny County prorated the hours based on the contractors’ start date, as well as to reflect that hours worked from September were not included due to lag time in invoice processing; Port Authority used 1,127 hours for contractors starting before April, 867 hours for contractors starting in the second quarter, and 347 hours for contractors starting in the third quarter. Lehigh and Northampton Transportation Authority in Allentown used 40 hours in the 1512 report they tried to submit, but, due to some confusion about the need for corrective action, the report was not filed. 
Finally, the Pennsylvania Department of Transportation reported using 1,248 hours, which was prorated by multiplying 8 hours per workday times the 156 workdays between February 17 and September 30, 2009. In several other selected states, this variation across transit programs’ periods of performance for the FTE calculation also occurred. Our November report provided additional detail and recommendations to address the problems and confusion associated with how FTEs were calculated in the October recipient report. In summary, Mr. Chairman, obligation of Recovery Act funds continues, and states are using these funds for a variety of purposes to address the particular transportation challenges in their states. DOT and the states remain confident that the March 2010 1-year deadline for obligating all highway funds will be met. It seems likely that funds will be available for obligation after the March deadline, although estimating precisely how much is difficult. This is because states continue to realize savings from contracts awarded at less than estimated costs, allowing the savings to be deobligated and obligated to other projects. In the weeks ahead, FHWA and the states have the opportunity to exercise diligence to both promptly seek deobligation of known savings and to identify projects that make sound use of Recovery Act funding. In addition, if any funds are withdrawn, they will be redistributed to states that have had all of their statewide funds obligated by March and will be available for obligation by FHWA. States that do not have all of their suballocated funds obligated by March will not be precluded from receiving redistributed funds. We will continue to monitor states’ and localities’ use of Recovery Act funds, including the rates of deobligation. In addition, there is a lack of understanding among transit agencies and bus manufacturers regarding the suggested methodology for calculating the number of jobs created or saved through bus purchases and the manufacturer’s role in the reporting process. We have previously recommended that OMB work with recipients to enhance understanding of the reporting process and that DOT continue its outreach to state departments of transportation and transit agencies to ensure recipients of Recovery Act funds are adequately fulfilling their reporting requirements. Implementing these recommendations will be key to addressing the lack of understanding we found related to reporting the number of jobs saved or created through bus purchases. We will continue to monitor states’ and localities’ use of Recovery Act funds in our future reviews. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee might have. For further information regarding this statement, please contact Katherine A. Siggerud at (202) 512-2834 or siggerudk@gao.gov, or A. Nicole Clowers at (202) 512-2834 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement are Lauren Calhoun, Steve Cohen, Catherine Colwell, Robert Ciszewski, Dean Gudicello, Heather Halliwell, Bert Japikse, Delwen Jones, Hannah Laufe, Les Locke, Tim Schindler, Raymond Sendejas, Tina Won Sherman, Crystal Wesco, Carrie Wilks, and Susan Zimmerman. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The American Recovery and Reinvestment Act of 2009 (Recovery Act) included more than $48 billion for the Department of Transportation's (DOT) investment in transportation infrastructure, including highways, rail, and transit. This testimony--based on Government Accountability Office (GAO) report GAO-10-231 , issued on December 10, 2009, in response to a mandate under the Recovery Act--addresses (1) the uses of Recovery Act highway funding, including the types of projects states have funded and efforts by DOT and the states to meet the requirements of the act, and (2) the uses of Recovery Act transit funding and how recipients of Recovery Act funds are reporting information on the number of jobs created and retained under section 1512. In GAO-10-231 , GAO continues to examine the use of Recovery Act funds by 16 states and the District of Columbia (District), representing about 65 percent of the U.S. population and two-thirds of the federal assistance available through the act. GAO also obtained data from DOT on obligations and reimbursements for the Recovery Act's highway infrastructure and public transportation funds. GAO updates the status of agencies' efforts to implement previous GAO recommendations to help address a range of accountability issues as well as a matter for congressional consideration. No new recommendations are being made at this time. The report draft was discussed with federal and state officials, who generally agreed with its contents. Three-quarters of Recovery Act highway funds have been obligated, and reimbursements from the Federal Highway Administration (FHWA) are increasing. As of November 16, 2009, $20.4 billion had been obligated for just over 8,800 highway projects nationwide and $4.2 billion had been reimbursed nationwide by FHWA. States continue to dedicate most Recovery Act highway funds for pavement projects, but use of funds may vary depending on state transportation goals. Almost half of Recovery Act highway obligations nationally have been for pavement improvements--including resurfacing, rehabilitating, and reconstructing roadways. About 10 percent of funds has been obligated to replace and improve bridges, while 9 percent has been obligated to construct new roads and bridges. States are taking steps to meet Recovery Act highway requirements; for example, both state and federal officials believe the states are on track to obligate all highway funds by the March 2010 1-year deadline. However, two factors may affect some states' ability to meet the requirement. First, many states are awarding contracts for less than the original cost estimates; this allows states to have funds deobligated and use the savings for other projects, but additional projects must be identified quickly. Second, obligations for projects in suballocated areas, while increasing, are generally lagging behind obligations for statewide projects in most states and lagging considerably behind in a few states. In the weeks ahead, FHWA and the states have the opportunity to exercise diligence to both promptly seek deobligation of known savings and to identify projects that make sound use of Recovery Act funding. 
The Federal Transit Administration reports that the majority of transit funds have been obligated. As of November 5, 2009, almost $6 billion of the $6.9 billion appropriated for the Transit Capital Assistance Program had been obligated nationwide. Almost 88 percent of these obligations are being used for transit facilities, bus fleets, and preventive maintenance. The remaining funds are being used for rail car purchases, leases, and training, among other things--all of which are eligible expenses. Through our ongoing audit work, GAO continued to find confusion among recipients about how to calculate the numbers of jobs created and saved that is required by Recovery Act reporting requirements. First, a number of transit agencies continue to express confusion about calculating the number of jobs resulting from Recovery Act funding, especially with regard to using Recovery Act funds for purchasing equipment, such as new buses. The second area of confusion GAO found involved the methodology recipients were using to calculate full-time equivalents for the recipient reporting requirements. For example, in one state, four transit entities used a different denominator to calculate the number of full-time equivalent jobs they reported on their recipient reports for the period ending September 30, 2009. In its September 2009 report, GAO recommended that DOT continue its outreach to transit agencies regarding reporting requirements and provide additional guidance, as appropriate. DOT officials stated that they are continuing outreach to transit agencies and will continue to assess the need to provide additional information. |
Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Hence, the degree of risk caused by security weaknesses is high. For example, resources (such as federal payments and collections) could be lost or stolen, data could be modified or destroyed, and computer resources could be used for unauthorized purposes or to launch attacks on other computer systems. Sensitive information, such as taxpayer data, Social Security records, medical records, and proprietary business information, could be inappropriately disclosed, browsed, or copied for improper or criminal purposes. Critical operations could be disrupted, such as those supporting national defense and emergency services. Finally, agencies’ missions could be undermined by embarrassing incidents, resulting in diminished confidence in their ability to conduct operations and fulfill their responsibilities. Recognizing the importance of securing federal systems and data, Congress passed FISMA, which sets forth a comprehensive framework for ensuring the effectiveness of security controls over information resources that support federal operations and assets. FISMA’s framework creates a cycle of risk management activities necessary for an effective security program; these activities are similar to the principles noted in our study of the risk management activities of leading private sector organizations—assessing risk, establishing a central management focal point, implementing appropriate policies and procedures, promoting awareness, and monitoring and evaluating policy and control effectiveness. More specifically, FISMA requires agency information security programs that, among other things, include ● periodic assessments of the risk; ● risk-based policies and procedures; ● subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate; ● security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency; ● periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually; ● a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies; ● procedures for detecting, reporting, and responding to security incidents; and ● plans and procedures to ensure continuity of operations. In addition, agencies must develop and maintain an inventory of major information systems that is updated at least annually. OMB and agency IGs play key roles under FISMA. FISMA specifies that, among other responsibilities, OMB is to develop policies, principles, standards, and guidelines on information security, and is required to report annually to Congress. OMB has provided instructions to federal agencies and their IGs for FISMA annual reporting. OMB’s reporting instructions focus on performance metrics such as certification and accreditation, testing of security controls, and security training. Its yearly guidance also requests IGs to report on their agencies’ efforts to complete their inventory of systems and requires agencies to identify any physical or electronic incidents involving the loss of, or unauthorized access to, personally identifiable information.
FISMA also requires agency IGs to perform an independent evaluation of the information security programs and practices of the agency to determine the effectiveness of such programs and practices. Each evaluation is to include (1) testing of the effectiveness of information security policies, procedures, and practices of a representative subset of the agency’s information systems and (2) assessing compliance (based on the results of the testing) with FISMA requirements and related information security policies, procedures, standards, and guidelines. These required evaluations are then submitted by each agency to OMB in the form of a template that summarizes the results. In addition to the template submission, OMB encourages the IGs to provide additional narrative in an appendix to the report that provides meaningful insight into the status of the agency’s security or privacy program. Since May 2006, federal agencies have reported a spate of security incidents that put sensitive data at risk. Personally identifiable information about millions of Americans has been lost, stolen, or improperly disclosed, thereby exposing those individuals to loss of privacy, identity theft, and financial crimes. Agencies have experienced a wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. The following reported examples illustrate that a broad array of federal information and assets is at risk. ● The Department of Veterans Affairs (VA) announced that computer equipment containing personally identifiable information on approximately 26.5 million veterans and active duty members of the military was stolen from the home of a VA employee. Until the equipment was recovered, veterans did not know whether their information was likely to be misused. In June, VA sent notices to the affected individuals that explained the breach and offered advice concerning steps to reduce the risk of identity theft. The equipment was eventually recovered, and forensic analysts concluded that it was unlikely that the personal information contained therein was compromised. ● A Centers for Medicare & Medicaid Services contractor reported the theft of a contractor employee’s laptop computer from his office. The computer contained personal information on 49,572 Medicare beneficiaries, including names, telephone numbers, medical record numbers, and dates of birth. ● The Department of Agriculture (USDA) was notified that it had posted personal information on a Web site. Analysis by USDA later determined that the posting had affected approximately 38,700 individuals, who had been awarded funds through the Farm Service Agency or Rural Development program. The same day it was notified, all identification numbers associated with USDA funding were removed from the Web site. USDA is continuing its effort to identify and contact all those who may have been affected. ● The Transportation Security Administration (TSA) announced a data security incident involving approximately 100,000 archived employment records of individuals employed by the agency from January 2002 until August 2005. An external hard drive containing personnel data, such as Social Security number, date of birth, payroll information, and bank account and routing information, was discovered missing from a controlled area at the TSA Headquarters Office of Human Capital. ● The Census Bureau reported 672 missing laptops, of which 246 contained some degree of personal data.
Of the missing laptops containing personal information, almost half (104) were stolen, often from employees’ vehicles, and another 113 were not returned by former employees. Commerce reported that employees were not held accountable for not returning their laptops. ● Officials at the Department of Commerce’s Bureau of Industry and Security discovered a security breach in July 2006. In investigating this incident, officials were able to review firewall logs for an 8- month period prior to the initial detection of the incident, but were unable to clearly define the amount of time that perpetrators were inside its computers, or find any evidence to show that data was lost as a result. ● The Treasury Inspector General for Tax Administration reported that approximately 490 computers at the Internal Revenue Service (IRS) were lost or stolen between January 2003 and June 2006. Additionally, 111 incidents occurred within IRS facilities, suggesting that employees were not storing their laptop computers in a secured area while the employees were away from the office. The IG concluded that it was very likely that a large number of the lost or stolen computers contained unencrypted data and also found other computer devices, such as flash drives, CDs, and DVDs, on which sensitive data were not always encrypted. ● The Department of State experienced a breach on its unclassified network, which daily processes about 750,000 e-mails and instant messages from more than 40,000 employees and contractors at 100 domestic and 260 overseas locations. The breach involved an e-mail containing what was thought to be an innocuous attachment. However, the e-mail contained code to exploit vulnerabilities in a well-known application for which no security patch existed at that time. Because the vendor was unable to expedite testing and deploy a new patch, the department developed its own temporary fix to protect systems from being further exploited. In addition, the department sanitized the infected computers and servers, rebuilt them, changed all passwords, installed critical patches, and updated their anti-virus software. Based on the experience of VA and other federal agencies in responding to data breaches, we identified numerous lessons learned regarding how and when to notify government officials, affected individuals, and the public. These lessons have largely been addressed in guidance issued by OMB. OMB has issued several policy memorandums over the past 13 months. For example, it sent memorandums to agencies to reemphasize their responsibilities under law and policy to (1) appropriately safeguard sensitive and personally identifiable information, (2) train employees on their responsibilities to protect sensitive information, and (3) report security incidents. In May 2007, OMB issued additional detailed guidelines to agencies on safeguarding against and responding to the breach of personally identifiable information, including developing and implementing a risk-based breach notification policy, reviewing and reducing current holdings of personal information, protecting federal information accessed remotely, and developing and implementing a policy outlining the rules of behavior, as well as identifying consequences and potential corrective actions for failure to follow these rules. 
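Several of the incidents described above involved unencrypted personal data on lost or stolen devices, and OMB's guidance emphasizes safeguarding personally identifiable information, including through encryption. The following minimal sketch assumes the third-party Python cryptography package and is not any agency's actual implementation; it illustrates encrypting a sensitive record at rest so that a lost device exposes only ciphertext:

# Minimal sketch of encrypting a sensitive record at rest, assuming the
# third-party "cryptography" package (pip install cryptography) is available.
# Illustrative only; the record below is invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, generated once and stored securely
cipher = Fernet(key)

record = b"name=J. Doe; ssn=000-00-0000; dob=1970-01-01"   # hypothetical PII
token = cipher.encrypt(record)       # what would be written to the laptop or drive

# A lost device holding only the token (and not the key) does not expose the record.
assert cipher.decrypt(token) == record

In practice, this protection depends on keeping the key separate from the device that holds the encrypted data, for example in a hardware token or centrally managed key store.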
As illustrated by numerous security incidents, significant weaknesses continue to threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of federal agencies. In their fiscal year 2006 financial statement audit reports, 21 of 24 major agencies indicated that deficient information security controls were either a reportable condition or material weakness (see fig. 1). Our audits continue to identify similar conditions in both financial and non-financial systems, including agencywide weaknesses as well as weaknesses in critical federal systems. Persistent weaknesses appear in five major categories of information system controls: (1) access controls, which ensure that only authorized individuals can read, alter, or delete data; (2) configuration management controls, which provide assurance that only authorized software programs are implemented; (3) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (4) continuity of operations planning, which provides for the prevention of significant disruptions of computer-dependent operations; and (5) an agencywide information security program, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented. Figure 2 shows the number of major agencies that had weaknesses in these five areas. A basic management control objective for any organization is to protect data supporting its critical operations from unauthorized access, which could lead to improper modification, disclosure, or deletion of the data. Access controls, which are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities, can be both electronic and physical. Electronic access controls include the use of passwords, access privileges, encryption, and audit logs. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. Most agencies did not implement controls to sufficiently prevent, limit, or detect access to computer networks, systems, or information. Our analysis of IG, agency, and our own reports uncovered that agencies did not have adequate access controls in place to ensure that only authorized individuals could access or manipulate data. Of the 24 major agencies, 22 had access control weaknesses. For example, agencies did not consistently (1) identify and authenticate users to prevent unauthorized access, (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate, (3) establish sufficient boundary protection mechanisms, (4) apply encryption to protect sensitive data on networks and portable devices, and (5) log, audit, and monitor security-relevant events. Agencies also lacked effective controls to restrict physical access to information assets. For instance, many of the data losses that occurred at federal agencies over the past few years were a result of physical thefts or improper safeguarding of systems, including laptops and other portable devices. In addition to access controls, other important controls should be in place to protect the confidentiality, integrity, and availability of information. 
These controls include policies, procedures, and techniques that address configuration management (to ensure that software patches are installed in a timely manner), appropriate segregation of incompatible duties, and continuity of operations planning. Agencies did not always configure network devices and services to prevent unauthorized access and ensure system integrity, such as patching key servers and workstations in a timely manner; assign incompatible duties to different individuals or groups so that one individual does not control all aspects of a process or transaction; and maintain or test continuity of operations plans for key information systems. Weaknesses in these areas increase the risk of unauthorized use, disclosure, modification, or loss of information. An underlying cause for information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented all the FISMA-required elements for an agencywide information security program. An agencywide security program, required by FISMA, provides a framework and continuing cycle of activity for assessing and managing risk, developing and implementing security policies and procedures, promoting security awareness and training, monitoring the adequacy of the entity’s computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. Our analysis determined that at least 18 of the 24 major federal agencies had not fully implemented agencywide information security programs. Results of our recent work illustrate that agencies often did not adequately design or effectively implement policies for key elements of an information security program. We identified weaknesses in information security program activities, such as agencies’ risk assessments, information security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. For example: ● One agency had no documented process for conducting risk assessments, while another agency had outdated risk assessments. Another agency had assessed and categorized system risk levels and conducted risk assessments, but did not identify many of the vulnerabilities we found and had not subsequently assessed the risks associated with them. ● Agencies had developed and documented information security policies, standards, and guidelines, but did not always provide specific guidance on how to guard against significant security weaknesses regarding topics such as physical access, Privacy Act-protected data, wireless configurations, and business impact analyses. ● Instances existed where security plans were incomplete or not up to date. ● Agencies did not ensure that all employees and contractors, including those who have significant information security responsibilities, received sufficient training. ● Our report on testing and evaluating security controls revealed that agencies had not adequately designed and effectively implemented policies for testing their security controls in accordance with OMB and NIST guidance. Further, agencies did not always address other important elements, such as the definition of roles and responsibilities of personnel performing tests, identification and testing of security controls common to multiple systems, and the frequency of periodic testing. In other cases, agencies had not tested controls for all of their systems.
● Our report on security controls testing also revealed that seven agencies did not have policies to describe a process for incorporating weaknesses identified during periodic security control testing into remedial actions. In our other reviews, agencies indicated that they had corrected or mitigated weaknesses; however, we found that those weaknesses still existed. In addition, we reviewed agencies’ system self-assessments and identified weaknesses not documented in their remedial action plans. We also found that some deficiencies had not been corrected in a timely manner. As a result, agencies do not have reasonable assurance that controls are implemented correctly, operating as intended, or producing the desired outcome with respect to meeting the security requirements of the agency, and responsibilities may be unclear, misunderstood, and improperly implemented. Furthermore, agencies may not be fully aware of the security control weaknesses in their systems, thereby leaving their information and systems vulnerable to attack or compromise. Until agencies effectively and fully implement agencywide information security programs, federal data and systems will not be adequately safeguarded to prevent disruption, unauthorized use, disclosure, and modification. Recent reports by GAO and IGs show that while agencies have made some progress, persistent weaknesses continue to place critical federal operations and assets at risk. In our reports, we have made hundreds of recommendations to agencies to correct specific information security weaknesses. The following examples illustrate the effect of these weaknesses at various agencies and for critical systems. ● Independent external auditors identified over 130 information technology control weaknesses affecting the Department of Homeland Security’s (DHS) financial systems during the audit of the department’s fiscal year 2006 financial statements. Weaknesses existed in all key general controls and application controls. For example, systems were not certified and accredited in accordance with departmental policy; policies and procedures for incident response were inadequate; background investigations were not properly conducted; and security awareness training did not always comply with departmental requirements. Additionally, users had weak passwords on key servers that process and house DHS financial data, and workstations, servers, and network devices were configured without necessary security patches. Further, changes to sensitive operating system settings were not always documented; individuals were able to perform incompatible duties such as changing, testing, and implementing software; and service continuity plans were not consistently or adequately tested. As a result, material errors in DHS’ financial data may not be detected in a timely manner. ● The Department of Health and Human Services (HHS) had not consistently implemented effective electronic access controls designed to prevent, limit, and detect unauthorized access to sensitive financial and medical information at its operating divisions and contractor-owned facilities. Numerous electronic access control vulnerabilities related to network management, user accounts and passwords, user rights and file permissions, and auditing and monitoring of security-related events existed in its computer networks and systems. 
In addition, weaknesses existed in controls designed to physically secure computer resources, conduct suitable background investigations, segregate duties appropriately, and prevent unauthorized changes to application software. These weaknesses increase the risk that unauthorized individuals could gain access to HHS information systems and inadvertently or deliberately disclose, modify, or destroy the sensitive medical and financial data that the department relies on to deliver its services. ● The Securities and Exchange Commission had made important progress addressing previously reported information security control weaknesses. However, 15 new information security weaknesses pertaining to access controls and configuration management existed in addition to 13 previously identified weaknesses that remained unresolved. For example, the Securities and Exchange Commission did not have current documentation on the privileges granted to users of a major application, did not securely configure certain system settings, and did not consistently install all patches to its systems. In addition, the commission did not sufficiently test and evaluate the effectiveness of controls for a major system as required by its certification and accreditation process. ● The IRS had made limited progress toward correcting previously reported information security weaknesses at two data processing sites. IRS had not consistently implemented effective access controls to prevent, limit, or detect unauthorized access to computing resources from within its internal network. These access controls included those related to user identification and authentication, authorization, cryptography, audit and monitoring, and physical security. In addition, IRS faces risks to its financial and sensitive taxpayer information due to weaknesses in configuration management, segregation of duties, media destruction and disposal, and personnel security controls. ● The Federal Aviation Administration (FAA) had significant weaknesses in controls that are designed to prevent, limit, and detect access to its air traffic control systems. For example, the agency was not adequately managing its networks, system patches, user accounts and passwords, or user privileges, and it was not always logging and auditing security-relevant events. As a result, it was at increased risk of unauthorized system access, possibly disrupting aviation operations. While acknowledging these weaknesses, agency officials stated that because portions of their systems are custom built and use older equipment with special-purpose operating systems, proprietary communication interfaces, and custom-built software, the possibilities for unauthorized access are limited. Nevertheless, the proprietary features of these systems do not protect them from attack by disgruntled current or former employees, who understand these features, or from sophisticated hackers. ● Certain information security controls over a critical internal Federal Bureau of Investigation (FBI) network were ineffective in protecting the confidentiality, integrity, and availability of information and information resources.
Specifically, FBI did not consistently (1) configure network devices and services to prevent unauthorized insider access and ensure system integrity; (2) identify and authenticate users to prevent unauthorized access; (3) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (4) apply strong encryption techniques to protect sensitive data on its networks; (5) log, audit, or monitor security-related events; (6) protect the physical security of its network; and (7) patch key servers and workstations in a timely manner. Taken collectively, these weaknesses place sensitive information transmitted on the network at risk of unauthorized disclosure or modification, and could result in a disruption of service, increasing the bureau’s vulnerability to insider threats. ● The Federal Reserve had not effectively implemented information system controls to protect sensitive data and computing resources for the distributed-based systems and the supporting network environment relevant to Treasury auctions. Specifically, the Federal Reserve did not consistently (1) identify and authenticate users to prevent unauthorized access; (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (3) implement adequate boundary protections to limit connectivity to systems that process Bureau of the Public Debt (BPD) business; (4) apply strong encryption technologies to protect sensitive data in storage and on its networks; (5) log, audit, or monitor security-related events; and (6) maintain secure configurations on servers and workstations. As a result, auction information and computing resources for key distributed-based auction systems maintained and operated on behalf of BPD were at an increased risk of unauthorized and possibly undetected use, modification, destruction, and disclosure. Furthermore, other applications that share common network resources with the distributed-based systems may face similar risks. ● Although the Centers for Medicare & Medicaid Services had many information security controls in place that had been designed to safeguard the communication network, key information security controls were either missing or had not always been effectively implemented. For example, the network had control weaknesses in areas such as user identification and authentication, user authorization, system boundary protection, cryptography, and audit and monitoring of security-related events. Taken collectively, these weaknesses place financial and personally identifiable medical information transmitted on the network at increased risk of unauthorized disclosure and could result in a disruption in service. Despite having persistent information security weaknesses, federal agencies have continued to report steady progress in implementing certain information security requirements. For fiscal year 2006 reporting (see fig. 3), governmentwide percentages increased for employees and contractors receiving security awareness training and employees with significant security responsibilities receiving specialized training. Percentages also increased for systems that had been tested and evaluated at least annually, systems with tested contingency plans, and systems that had been certified and accredited. However, IGs at several agencies sometimes disagreed with the information reported by the agency and have identified weaknesses in the processes used to implement these and other security program activities. 
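The governmentwide percentages discussed here and in the sections that follow (see fig. 3) are aggregates of agency-reported counts, such as the number of systems tested and evaluated during the year divided by the total number of systems in agencies' inventories. The following hypothetical sketch uses invented agency figures rather than actual FISMA data; it illustrates that arithmetic and shows why an understated system inventory (the denominator) inflates the reported percentage, a point that matters in the later discussion of inventory completeness:

# Hypothetical illustration of how a governmentwide FISMA metric aggregates
# agency-reported counts; agency names and numbers are invented.
reported = {                      # what agencies report to OMB
    "Agency A": {"systems_total": 250, "systems_tested": 230},
    "Agency B": {"systems_total": 600, "systems_tested": 510},
    "Agency C": {"systems_total": 120, "systems_tested": 95},
}

tested = sum(a["systems_tested"] for a in reported.values())
total = sum(a["systems_total"] for a in reported.values())
print(f"Reported governmentwide testing rate: {100 * tested / total:.0f}%")   # about 86%

# If one agency's inventory actually omits 200 systems that were never tested,
# the complete denominator is larger and the true rate is lower:
complete_total = total + 200
print(f"Rate with a complete inventory:       {100 * tested / complete_total:.0f}%")  # about 71%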
The majority of agencies reported that more than 90 percent of their employees and contractors received IT security awareness training in fiscal year 2006. This is an increase from what we reported in 2006, when approximately 81 percent of employees governmentwide received IT security awareness training. There has been a slight increase in the number of employees who have security responsibilities and received specialized security training since our last report—almost 86 percent of the selected employees had received specialized training in fiscal year 2006, compared with about 82 percent in fiscal year 2005. Although agencies have reported improvements both in the number of employees receiving security awareness training and the number of employees who have significant security responsibilities and received specialized training, several agencies exhibit training weaknesses. For example, according to agency IGs, five major agencies reported challenges in ensuring that contractors had received security awareness training. In addition, reports from IGs at two major agencies indicated that security training across components was inconsistent. Five agencies also noted that weaknesses still exist in ensuring that all employees who have specialized responsibilities receive specialized training, as policies and procedures for this type of training are not always clear. Further, the majority of agency IGs disagreed with their agencies’ reporting of individuals who have received security awareness training. Figure 4 shows a comparison between agency and IG reporting of the percentage of employees receiving security awareness training. If not all agency employees and contractors receive security awareness training, agencies risk security breaches resulting from employees who are not fully aware of their security roles and responsibilities. In 2006, federal agencies reported testing and evaluating security controls for 88 percent of their systems, up from 73 percent in 2005, including increases in testing high-risk systems. However, shortcomings exist in agencies’ testing and evaluating of security controls. For example, IGs reported that not all systems had been tested and evaluated at least annually, including some high-impact systems, and that weaknesses existed in agencies’ monitoring of contractor systems or facilities. As a result, agencies may not have reasonable assurance that controls are implemented correctly, are operating as intended, and are producing the desired outcome with respect to meeting the security requirements of the agency. In addition, agencies may not be fully aware of the security control weaknesses in their systems, thereby leaving the agencies’ information and systems vulnerable to attack or compromise. The number of systems with tested contingency plans varied by the risk level of the system. Federal agencies reported that 77 percent of total systems had contingency plans that had been tested, up from 61 percent in 2005. However, on average, high-risk systems had the smallest percentage of tested contingency plans compared to other risk levels—only 64 percent of high-risk systems had tested contingency plans. Several agencies had specific weaknesses in developing and testing contingency plans. For example, the IG of a major agency noted that contingency planning had not been completed for certain critical systems.
Another major agency IG noted that the agency had weaknesses in three out of four tested contingency plans—the plans were inaccurate, incomplete, or outdated, did not meet department and federal requirements, and were not tested in accordance with department and federal government requirements. Without developing contingency plans and ensuring that they are tested, the agency increases its risk that it will not be able to effectively recover and continue operations when an emergency occurs. A complete and accurate inventory of major information systems is essential for managing information technology resources, including the security of those resources. The total number of agency systems is a key element in OMB’s performance measures, in that agency progress is indicated by the percentage of total systems that meet specific information security requirements such as testing systems annually, testing contingency plans, and certifying and accrediting systems. Thus, inaccurate or incomplete data on the total number of agency systems affects the percentage of systems shown as meeting the requirements. FISMA requires that agencies develop, maintain, and annually update an inventory of major information systems operated by the agency or under its control. The total number of systems in some agencies’ inventories varied widely from 2005 to 2006. In one case, an agency had a 300 percent increase in the number of systems, while another had approximately a 50 percent reduction in the number of its systems. IGs identified some problems with agencies’ inventories. For example, IGs at two large agencies reported that their agencies still did not have complete inventories, while another questioned the reliability of its agency’s inventory since that agency relied on its components to report the number of systems and did not validate the numbers. Without complete, accurate inventories, agencies cannot efficiently maintain and secure their systems. In addition, the performance measures used to assess agencies’ progress may not accurately reflect the extent to which these security practices have been implemented. Federal agencies continued to report increasing percentages of systems completing certification and accreditation relative to fiscal year 2005 reporting. For fiscal year 2006, 88 percent of agencies’ systems governmentwide were reported as certified and accredited as compared to 85 percent in 2005. In addition, 23 agencies reported certifying and accrediting more than 75 percent of their systems, an increase from 21 agencies in 2005. Although agencies reported increases in the overall percentage of systems certified and accredited, results of work by their IGs showed that agencies continue to experience weaknesses in the quality of this metric. For fiscal year 2006, ten IGs rated their agencies’ certification and accreditation process as poor or failing—an increase from last year. In at least three instances, agencies reported certification and accreditation percentages over 90 percent while their IGs reported that the process was poor. Moreover, IGs continue to identify specific weaknesses with key documents in the certification and accreditation process, such as risk assessments and security plans that were not completed per NIST guidance or that were missing from certification and accreditation packages.
IG reports highlighted weaknesses in security plans such as agencies not using NIST guidance, not identifying controls that were in place, not including minimum controls, and not updating plans to reflect current conditions. In other cases, systems were certified and accredited, but controls or contingency plans were not properly tested. Because of these discrepancies and weaknesses, reported certification and accreditation progress may not be providing an accurate reflection of the actual status of agencies’ implementation of this requirement. Furthermore, agencies may not have assurance that accredited systems have controls in place that properly protect those systems. Agencies had not always implemented security configuration policies. Twenty-three of the major federal agencies reported that they currently had an agencywide security configuration policy. Although 21 IGs agreed that their agency had such a policy, they did not agree that the implementation was always as high as agencies reported. To illustrate, one agency reported implementing configuration policy for a particular platform 96 to 100 percent of the time, while its IG reported that the agency implemented that policy only 0 to 50 percent of the time. Another IG noted that three of the agency’s components did not have overall configuration policies and that other components, which had the policies, did not take into account applicable platforms. If minimally acceptable configuration requirements policies are not properly implemented and applied to systems, agencies will not have assurance that products are configured adequately to protect those systems, which could increase their vulnerability and make them easier to compromise. Shortcomings exist in agencies’ security incident reporting procedures. According to the US-CERT annual report for fiscal year 2006, federal agencies reported a record number of incidents, with a notable increase in incidents reported in the second half of the year. However, the number of incidents reported is likely to be inaccurate because of inconsistencies in reporting at various levels. For example, one agency reported no incidents to US-CERT, although it reported more than 800 incidents internally and to law enforcement authorities. In addition, analysis of reports from three agencies indicated that procedures for reporting incidents locally were not followed—two where procedures for reporting incidents to law enforcement authorities were not followed and one where procedures for reporting incidents to US-CERT were not followed. Several IGs also noted specific weaknesses in incident procedures, such as components not reporting incidents reliably, information being omitted from incident reports, and reporting time requirements not being met. Without properly accounting for and analyzing security problems and incidents, agencies risk losing valuable information needed to prevent future exploits and understand the nature and cost of threats directed at the agency.

Remedial Actions to Address Deficiencies in Information Security Policies, Procedures, and Practices

IGs reported weaknesses in their agencies’ remediation processes. According to IG assessments, 16 of the 24 major agencies did not “almost always” incorporate information security weaknesses for all systems into their remediation plans. They found that vulnerabilities from reviews were not always being included in remedial actions.
They also highlighted other weaknesses that included one agency having an unreliable process for prioritizing weaknesses and another using inconsistent criteria for defining weaknesses to include in those plans. Without a sound remediation process, agencies cannot be assured that information security weaknesses are efficiently and effectively corrected. Periodic reporting of performance measures for FISMA requirements and related analysis provides valuable information on the status and progress of agency efforts to implement effective security management programs; however, opportunities exist to enhance reporting under FISMA and the independent evaluations completed by IGs. In previous reports, we have recommended that OMB improve FISMA reporting by clarifying reporting instructions and requesting that IGs report on the quality of additional performance metrics. OMB has taken steps to enhance its reporting instructions. For example, OMB added questions regarding incident detection and assessments of system inventory. However, the current metrics do not measure how effectively agencies are performing various activities. Current performance measures offer limited assurance of the quality of agency processes that implement key security policies, controls, and practices. For example, agencies are required to test and evaluate the effectiveness of the controls over their systems at least once a year and to report on the number of systems undergoing such tests. However, there is no measure of the quality of agencies’ test and evaluation processes. Similarly, OMB’s reporting instructions do not address the quality of other activities such as risk categorization, security awareness training, or incident reporting. OMB has recognized the need for assurance of quality for agency processes. For example, it specifically requested that the IGs evaluate the certification and accreditation process. This qualitative assessment allows each IG to rate its agency’s certification and accreditation process using the terms “excellent,” “good,” “satisfactory,” “poor,” or “failing.” Providing information on the quality of the processes used to implement key control activities would further enhance the usefulness of the annually reported data for management and oversight purposes. Currently, OMB reporting guidance and performance measures do not include complete reporting on certain key FISMA-related activities. For example, FISMA requires each agency to include policies and procedures in its security program that ensure compliance with minimally acceptable system configuration requirements, as determined by the agency. As we previously reported, maintaining up-to-date patches is key to complying with this requirement. As such, we recommended that OMB address patch management in its FISMA reporting instructions. Although OMB addressed patch management in its 2004 FISMA reporting instructions, it no longer requests this information. As a result, OMB and the Congress lack information that could identify governmentwide issues regarding patch management. This information could prove useful in demonstrating whether or not agencies are taking appropriate steps for protecting their systems. Although the IGs conducted annual evaluations, they did not have a common approach. We received copies of all 24 IG FISMA template submissions and 20 IG FISMA reports. Across these submissions and reports, the scope and methodology of IGs’ evaluations varied across agencies.
For example: ● According to their FISMA reports, certain IGs reported interviewing officials and reviewing agency documentation, while others indicated conducting tests of implementation plans (e.g., security plans). ● Multiple IGs indicated in the scope and methodology sections of their reports that their reviews were focused on selected components, whereas others did not make any reference to the breadth of their review. ● Several reports consisted solely of a summary of relevant information security audits conducted during the fiscal year, while others included additional evaluation that addressed specific FISMA-required elements, such as risk assessments and remedial actions. ● The percentage of systems reviewed varied; 22 of 24 IGs tested information security program effectiveness on a subset of systems; two IGs did not review any systems. ● One IG noted that the agency’s inventory was missing certain Web applications and concluded that the agency’s inventory was only 0-50 percent complete, although it also noted that, due to time constraints, it was unable to determine whether other items were missing. ● Two IGs indicated basing a portion of their template submission solely on information provided to them by the agency, without conducting further investigation. ● Some reviews were limited due to difficulties in verifying information provided to them by agencies. Specifically, certain IGs stated that they were unable to conduct evaluations of their respective agency’s inventory because the information provided to them by the agency at that time was insufficient (i.e., incomplete or unavailable). The lack of a common methodology, or framework, has resulted in disparities in audit scope, methodology, and content. As a result, the collective IG community may be performing its evaluations without optimal effectiveness and efficiency. A commonly used framework or methodology for the FISMA independent evaluations is a mechanism that could provide improved effectiveness, increased efficiency, and consistency of application. Such a framework may provide improved effectiveness of the annual evaluations by ensuring that compliance with FISMA and all related guidance, laws, and regulations is considered in the performance of the evaluation. IGs may be able to use the framework to be more efficient by focusing evaluative procedures on areas of higher risk and by following an integrated approach designed to gather evidence efficiently. Without a consistent framework, work completed by IGs may not provide information that is comparable for oversight entities to assess the governmentwide information security posture. In summary, as illustrated by recent incidents at federal agencies, significant weaknesses in information security controls threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of federal agencies. Almost all major agencies exhibit weaknesses in one or more areas of information security controls. Despite these persistent weaknesses, agencies have continued to report steady progress in implementing certain information security requirements. However, IGs sometimes disagreed with agencies’ reported information and identified weaknesses in the processes used to implement these and other security program activities. Further, opportunities exist to enhance reporting under FISMA and the independent evaluations completed by IGs. Mr. Chairman, this concludes my statement.
I am happy to answer any questions at this time. If you have any questions regarding this report, please contact me at (202) 512-6244 or wilshuseng@gao.gov. Other key contributors to this report include Jeffrey Knott (Assistant Director), Larry Crosland, Nancy Glover, Min Hyun, and Jayne Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

For many years, GAO has reported that weaknesses in information security are a widespread problem with potentially devastating consequences--such as intrusions by malicious users, compromised networks, and the theft of personally identifiable information--and has identified information security as a governmentwide high-risk issue. Concerned by reports of significant vulnerabilities in federal computer systems, Congress passed the Federal Information Security Management Act of 2002 (FISMA), which permanently authorized and strengthened the information security program, evaluation, and reporting requirements for federal agencies. In this testimony, GAO discusses security incidents reported at federal agencies, the continued weaknesses in information security controls at major federal agencies, agencies' progress in performing key control activities, and opportunities to enhance FISMA reporting and independent evaluations. To address these objectives, GAO analyzed issued and draft reports on information security from agencies, inspectors general (IG), and GAO. Federal agencies have recently reported a spate of security incidents that put sensitive data at risk. Personally identifiable information about millions of Americans has been lost, stolen, or improperly disclosed, thereby exposing those individuals to loss of privacy, identity theft, and financial crimes. The wide range of incidents involving data loss or theft, computer intrusions, and privacy breaches underscores the need for improved security practices. As illustrated by these security incidents, significant weaknesses in information security controls threaten the confidentiality, integrity, and availability of critical information and information systems used to support the operations, assets, and personnel of federal agencies. Almost all of the major federal agencies had weaknesses in one or more areas of information security controls. Most agencies did not implement controls to sufficiently prevent, limit, or detect access to computer networks, systems, or information. For example, agencies did not consistently identify and authenticate users to prevent unauthorized access, apply encryption to protect sensitive data on networks and portable devices, and restrict physical access to information assets. In addition, agencies did not always manage the configuration of network devices to prevent unauthorized access and ensure system integrity, such as patching key servers and workstations in a timely manner; assign incompatible duties to different individuals or groups so that one individual does not control all aspects of a process or transaction; and maintain or test continuity of operations plans for key information systems. An underlying cause for these weaknesses is that agencies have not fully or effectively implemented agencywide information security programs.
Nevertheless, federal agencies have continued to report steady progress in implementing certain information security requirements. However, IGs at several agencies sometimes disagreed with their agencies' reported information and identified weaknesses in the processes used to implement these and other security program activities. Further, opportunities exist to enhance reporting under FISMA and the independent evaluations completed by IGs.
FGM/C comprises all procedures that involve partial or total removal of the external female genitalia, or other injury to the female genital organs for non-medical reasons. The World Health Organization (WHO) classifies FGM/C into four major types: Type I (clitoridectomy) partially or totally removes the clitoris and/or the skin around it; Type II (excision) partially or totally removes the clitoris and the labia minora, with or without excision of the labia majora; Type III (infibulation) narrows the vaginal opening through the creation of a covering seal formed by cutting and repositioning the labia minora and/or labia majora, sometimes through stitching, with or without removal of the clitoris; and Type IV (other) includes all other harmful procedures, including pricking, piercing, incising, and scraping the genital area for non-medical purposes. According to the WHO, it is estimated that 90 percent of cases are Types I, II, or IV; 10 percent are Type III, the most extreme form of FGM/C. The type of FGM/C commonly practiced varies by country, according to survey data presented by UNICEF. For example, survey data show that more than 20 percent of girls who underwent FGM/C in Djibouti, Eritrea, Niger, Senegal, and Somalia experienced Type III (infibulation), whereas Type III represented 1 or 2 percent of cases in other countries, such as Egypt. Appendix II presents UNICEF data showing the percentage distribution of girls subjected to FGM/C by type of procedure in countries where data were available. The WHO notes that in every society where it is practiced, FGM/C is a manifestation of gender inequality that is deeply entrenched in traditional social, economic, and political structures. The practice is often considered a necessary part of raising a girl properly, and a way to prepare her for adulthood and marriage. FGM/C is often motivated by beliefs about what is considered proper sexual behavior and is linked to premarital virginity and marital fidelity. FGM/C has no health benefits and can have numerous short- and long-term adverse health consequences, according to the WHO. Short-term consequences can include severe pain, swelling, delayed or incomplete healing, and shock, as well as infections and excessive bleeding, which can lead to death. Long-term consequences may include chronic pain and infections, scar tissue, and menstrual and urinary tract problems. In addition, FGM/C can lead to sexual problems and obstetric complications, which increase the need for Caesarean sections and risks to the health of newborns. Transmission of HIV remains a longer-term risk because of increased risk of bleeding during intercourse as a result of FGM/C. Available data from nationally representative surveys show that FGM/C is concentrated in 30 countries and at least 200 million girls and women alive today have undergone some form of FGM/C, according to UNICEF (see fig. 1). Evidence suggests that FGM/C exists in some places in South America, such as Colombia, and elsewhere in the world, including in India, Malaysia, Oman, Saudi Arabia, and the United Arab Emirates; however, no nationally representative data on FGM/C were available for these countries, according to UNICEF. The practice is also found in Europe, Australia, and North America, which are destinations for migrants from countries where the practice still occurs. UNICEF also estimates that more than 3 million girls annually are at risk for FGM/C in Africa.
In some countries, including Djibouti, Guinea, and Somalia, the percentage of girls and women aged 15 to 49 who have undergone FGM/C is over 90 percent. In most of the countries with available data, the majority of girls are cut before the age of 5, according to UNICEF. However, in Somalia, Egypt, Chad, and the Central African Republic, at least 80 percent of girls who have undergone FGM/C were cut between the ages of 5 and 14. UNICEF data show that the practice is becoming less common in many high-prevalence countries. For example, in Kenya and Tanzania, women aged 45 to 49 are approximately three times more likely to have undergone FGM/C than girls aged 15 to 19. In most countries where FGM/C is practiced, the majority of girls and women think it should end, and the percentage of females who support FGM/C is substantially lower than the share of girls and women who have undergone the procedure, according to UNICEF. In addition, UNICEF reported in 2013 that 24 countries where FGM/C is prevalent have enacted legislation related to FGM/C (see app. III). These laws reportedly vary in their scope. UNICEF reports that some ban the practice only in medical facilities; others ban the practice wherever it is performed. In 1993, the World Conference on Human Rights in Vienna recognized violence against women as a human rights violation, and the UN General Assembly included FGM/C in the definition of violence against women, stating that it violates women’s right to be free from cruel, inhuman, or degrading treatment. FGM/C also deprives girls and women of the decision about a procedure that has a lasting effect on their bodies and infringes on their autonomy and control over their lives, according to the WHO. In December 2012, the UN General Assembly adopted a resolution urging member states to condemn and work to eliminate all harmful practices that affect women and girls, in particular FGM/C, and to take all necessary measures, including enacting and enforcing legislation to prohibit FGM/C. Two years later, the UN General Assembly adopted another resolution calling upon member states to develop, support, and implement comprehensive and integrated strategies for the prevention of FGM/C, including training of medical personnel, social workers, and community and religious leaders to ensure that they provide competent, supportive services and care to women and girls who are at risk of or who have undergone FGM/C. In September 2015, the UN General Assembly formally adopted the 2030 Agenda for Sustainable Development, along with a set of 17 Sustainable Development Goals and 169 associated targets. One of the 17 goals is “achieve gender equality and empower all women and girls,” and one of the targets for this goal is to “eliminate all harmful practices, such as child, early and forced marriage and female genital mutilation.” In 2008, UNFPA and UNICEF established the Joint Program on FGM/C, which represents the largest international effort to accelerate abandonment of this practice. The UNFPA-UNICEF Joint Program brings together both agencies’ expertise, often with grassroots community organizations, using a human rights-based approach to engage communities to act collectively to abandon the practice. The Joint Program also supports health and protective services for those who have undergone FGM/C. Donor countries make annual contributions directly to the Joint Program. During phase I of the Joint Program (2008-2013), 15 countries participated (see fig. 2).
According to the 2013 evaluation of the Joint Program, funding limitations reduced the number of countries involved during phase I. The overall budget for phase I was about $41 million over its 5-year span. Phase II (2014-2017) currently is under way in 17 countries—the original 15 countries, as well as Yemen and Nigeria. Phase II aims for a 40 percent decrease in prevalence among girls 14 and younger in at least 5 countries, with at least 1 country declaring total elimination of the practice, by the end of 2017. The Joint Program estimated its budget for phase II to be $54 million over 4 years. Under the U.S. foreign policy framework, FGM/C is identified as a form of gender-based violence. In March 2012, USAID released its Gender Equality and Female Empowerment Policy, which provides guidance on incorporating gender issues—including gender-based violence—into development programming. In addition, State and USAID jointly developed the U.S. Strategy to Prevent and Respond to Gender-Based Violence Globally, released in August 2012. These two documents identify FGM/C as a form of gender-based violence but do not provide any specific guidance on assistance related to FGM/C. In addition, the Secretary of State announced in March 2016 the release of the United States Global Strategy to Empower Adolescent Girls, which includes the goal of reducing girls’ vulnerability to gender-based violence. The strategy highlights FGM/C as a form of gender-based violence. State’s and USAID’s set of standard indicators, developed to assess foreign assistance, includes nine standard indicators related to gender issues. Three of the nine indicators cover gender-based violence, which includes FGM/C; however, the indicators do not specify the type of gender-based violence addressed. In 2000, USAID released guidance on FGM/C, incorporating this issue into its development agenda. USAID updated the guidance in February 2016, during the course of our review. The guidance recognizes FGM/C as a harmful traditional practice that reflects deep-rooted gender inequalities and constitutes an extreme form of discrimination against women. The guidance states that USAID will support the integration of efforts to combat FGM/C into all aspects of the USAID program cycle where feasible and appropriate. It also states that USAID will assist countries in implementing their laws prohibiting FGM/C and support community-based programming to raise awareness of the harmful effects of this practice to reduce demand. USAID officials stated that the agency also plans to develop a resource guide on FGM/C that provides information for USAID missions and staff on how best to incorporate efforts to address FGM/C into their programming. FGM/C has persisted into the 21st century despite UN resolutions condemning this practice and the passage of laws banning it in many countries where it is prevalent, as UNICEF has reported. Recent U.S. and UN studies of efforts to address FGM/C have identified several factors contributing to its prevalence, including its power as a social norm, the belief that FGM/C is a religious obligation, the medicalization of the practice, and challenges enforcing existing laws. FGM/C is a powerful social norm—what communities believe and how they act and expect other members of that community to act—making its abandonment difficult, according to PRB and UNICEF. FGM/C is embedded in the culture and beliefs of many communities and ensures membership in these communities, according to PRB.
If most families in a community practice FGM/C, it is difficult for an individual family to abandon the practice, according to UNICEF. UNICEF also reported that, among surveyed girls and women aged 15 to 49, the most commonly reported benefit of FGM/C is that it ensures social acceptance. Because some communities view FGM/C as a social norm, it is viewed as a caring act. Parents may believe that FGM/C is in the best interest of their daughter, despite the physical harm it causes, in order to avoid social exclusion, according to UNICEF. In addition, the practice is exacerbated by poverty and poor education. FGM/C may signal that a girl is ready for marriage, which can spare a family the girl’s school expenses. Parents also may rely on the money received for marriages, according to an evaluation of the UNFPA and UNICEF Joint Program on FGM/C. In addition, prevalence is highest among daughters of women with no education and declines as the mother’s education level rises, according to UNICEF. The common belief that FGM/C is a religious obligation is a misconception, but one that contributes to its continued use, according to UNICEF. UNICEF notes that FGM/C is not mandated in any religious texts and predates the birth of Islam and Christianity. Scholars and activists have concentrated on demonstrating the lack of support within scriptures. However, the religious motivation for FGM/C is often intertwined with social norms and tradition, according to UNICEF. In addition, some communities believe the practice is a religious requirement that makes a girl spiritually “pure,” according to UNICEF. Thus, many who continue practicing FGM/C often cite religion as their motivation. In 4 of 14 countries surveyed, more than 50 percent of girls and women aged 15 to 49 regard FGM/C as a religious obligation, according to UNICEF. These countries were Mali, Eritrea, Mauritania, and Guinea. Another challenge in encouraging abandonment of FGM/C is the medicalization of the practice, which contributes to its perceived legitimacy, according to PRB. Medicalization refers to the performance of FGM/C by health care providers rather than traditional practitioners. According to PRB, 18 percent of girls and women worldwide who have been cut had the procedure performed by medical professionals. UNICEF reports that this percentage can be much higher in certain countries; for example, 77 percent of girls in Egypt and 41 percent of girls in Kenya who underwent FGM/C were cut by medical professionals. UNICEF found that medicalization of FGM/C may have increased as a result of the assistance community’s earlier focus on the harmful health risks of the practice to encourage abandonment of FGM/C without also framing it as a human rights issue. Thus, early FGM/C prevention campaigns may have inadvertently contributed to the perception that FGM/C would be acceptable if performed by medical professionals, thus institutionalizing the practice within the medical community, according to UNICEF. Although many countries have passed laws addressing FGM/C, enforcing these laws is a challenge, according to the Joint Program evaluation. The evaluation found that in many countries, there is a lack of resources, difficulty reaching remote areas, and limitations on the capacities of law enforcement agents. In addition, the implementation of anti-FGM/C laws may be undermined by a lack of awareness by local officials and law enforcement, and a lack of community buy-in, according to PRB. 
Further, the threat of social exclusion from being uncut may be more influential than the threat of legal punishment, according to UNICEF. U.S. and UN studies have identified a variety of approaches to accelerating FGM/C abandonment. These include efforts to increase awareness and enforcement of laws against FGM/C; establish community education programs; and provide outreach to a variety of community members, including religious leaders, elders, men and boys, and medical practitioners. Studies also highlight the importance of incorporating FGM/C into broader gender equality and human rights programs, and encouraging community actions such as alternative rites of passage and public declarations for abandonment. The existence of a law can help abandonment efforts, but other interventions at the community level must also be undertaken for the law to be effective, according to PRB and UNICEF. PRB noted that resources are needed after the adoption of anti-FGM/C policies to ensure awareness and enforcement of the law. For example, in Burkina Faso, the Joint Program helped raise awareness of FGM/C laws for personnel in the justice sector, informing them about current policies and their implications for their work. In addition, in Uganda, the Joint Program supported six community policing sessions that provided communities with information on existing laws and helped ensure their implementation. The growing interest and understanding of the law within the communities led to the arrest of two cutters, according to the program’s focal point in Uganda. Education is an important way to raise awareness about the dangers of FGM/C and its impact as a social norm. Community education programs play an essential role in encouraging communities to reconsider the practice, according to UNICEF. A community education project could last for a number of years and include a wide range of participants such as government officials, media, health professionals, and at-risk girls. Communities are encouraged to reflect on the role of women and girls and how FGM/C affects their lives. Educational activities and community dialogues create a safe, non-threatening environment where people can evaluate their beliefs regarding FGM/C, according to the Joint Program evaluation. Events may focus on FGM/C specifically or may combine information on FGM/C with information on health, religion, or human rights. Efforts to abandon FGM/C are strengthened when a wide range of actors—including religious leaders, and boys and men—are included in community education, according to PRB. Because religion is often cited as a reason for continuing the practice, engaging religious leaders in public education can be effective in encouraging abandonment, according to a Joint Program evaluation. Religious leaders often already have the community’s respect and can be a powerful influence on dispelling the belief that there is a religious obligation. Religious leaders in one Ethiopian community participated in public discussions about abandonment, according to a UNICEF study. By the end of the sessions, six of seven villages pledged to abandon the practice, and religious leaders led a special prayer binding the decision. Working with religious leaders is a core strategy and a critical component of community engagement in the Joint Program as well. For example, the Joint Program reported that, through its efforts, 304 religious leaders were educated about FGM/C in Mauritania. 
Involving boys and men in outreach efforts is also essential to ending FGM/C. In about half of the countries where FGM/C is prevalent, men outnumber women in their opposition to FGM/C, according to a Joint Program report. For example, 42 percent of boys and men in Guinea think FGM/C should stop, compared to 19 percent of girls and women, according to UNICEF. The Joint Program increased its efforts to engage men and boys in 2014, resulting in their voices against FGM/C becoming more prominent on social media, according to a Joint Program report. For instance, male advocacy emerged in Somalia where men have posted their support of uncut women on Facebook, stating, “Don’t do it for us.” Often, women are misinformed about their husbands’ opinions on FGM/C. Men may not talk about FGM/C because it is considered a “women’s issue.” Open dialogue between the sexes could correct these misperceptions, according to UNICEF. Intergenerational dialogue is another approach to changing behaviors, according to PRB. This approach recognizes that the older generations’ full engagement is needed, given their role as decision-makers and gatekeepers, and because they are likely to feel threatened by changing traditions. For example, the Grandmother Project in Senegal increased participants’ appreciation of positive cultural traditions and helped change attitudes toward harmful ones, according to PRB. Because senior women are viewed as valuable cultural resources and influential members of their communities, communities are more comfortable with an approach that includes dialogue between elders and youth, according to PRB. Media campaigns can also educate the public on the harmful effects of the practice and can shape the public discourse around FGM/C, according to the Joint Program evaluation. They can also help spread information about declining support for FGM/C. Radio, in particular, enables the dissemination of information to remote villages and illiterate populations. Forums for discussion can include talk shows, documentaries, and educational TV. Social media is particularly effective with adolescents and can be instrumental in spreading information. In all 15 countries in the Joint Program’s first phase, programs used media to increase awareness of the practice’s harmful effects and encourage abandonment. Media campaigns, like other approaches, have the greatest impact when they are part of a larger effort, according to PRB. Figure 3 shows a road sign promoting the campaign against FGM/C in Uganda. Education and training for medical professionals can encourage them to help prevent FGM/C and also prepare them to provide appropriate treatment for those who have undergone FGM/C procedures. In response to medicalization, the Joint Program began working with the WHO to ensure medical professionals’ support for FGM/C abandonment. The Joint Program’s first phase prioritized integrating FGM/C prevention into antenatal and neonatal care and immunization services in countries where a large proportion of girls are cut between birth and age 5. Specifically, during phase I, a total of 5,571 health facilities integrated FGM/C prevention into their antenatal and postnatal care, and more than 100,000 doctors, midwives, and nurses participated in training on integrating FGM/C prevention, response, and care into their services. In all 15 Joint Program countries, medical staff were trained on the negative consequences of FGM/C and, in many cases, how to treat medical complications from the practice.
Such training has strengthened the medical community’s capacities for preventing and responding to FGM/C. In 2014, about 200,000 girls and women received prevention, protection, or care services relating to FGM/C through the Joint Program, according to the Joint Program report. The report also noted that, as a result of an initiative led by the Joint Program, Djibouti is the first African country where girls are physically examined for evidence of FGM/C during routine check-ups. FGM/C should be addressed as part of broader efforts to promote gender equality and female empowerment, according to PRB. In the Joint Program, FGM/C is approached as one of many forms of gender-based violence. In addition, the Joint Program highlights the intersection between FGM/C, women’s reproductive health, and girls’ education. By treating the practice as part of broader issues, interventions are able to address how existing practices negatively affect opportunities for women and girls. When FGM/C is incorporated into programming that challenges assumptions about gender relationships, it directly advances broader goals of reducing gender inequality and gender-based violence, according to UNICEF. Increasingly, discussions about FGM/C have been shaped within a human rights approach, which can lead to public declarations against FGM/C in thousands of communities, according to UNICEF. Human rights vocabulary needs to be adapted for use by program participants, and it should include relevant symbols, narratives, or religious language so that it resonates with the local community. The Joint Program incorporates issues of gender equality and human rights in the design and implementation of its efforts. It has simultaneously conceptualized FGM/C as an abuse of human rights and a form of gender-based violence while also seeking to be culturally sensitive to the value the practice holds in many communities. A Joint Program report highlighted alternative rites of passage as an effective means of abandoning FGM/C. In certain communities, rites of passage have for centuries marked the transition from child to adult, according to the Joint Program evaluation. For girls, that rite of passage is often combined with FGM/C. Some communities may be reluctant to abandon FGM/C because they do not want to give up this rite of passage ceremony. In Kenya, thousands of girls have participated since 2008 in alternative rites of passage to encourage abandonment while preserving this tradition, according to a Joint Program report. The effort typically involves sending the girls away for a week to an orientation program that includes teaching about the harmful effects of FGM/C. The Joint Program report noted that in Kenya in 2014, the Joint Program supported an alternative rites of passage program for more than 1,600 girls. This program involved final celebrations that included certificates of recognition for the commitment to stay uncut. Figure 4 shows an alternative rite of passage ceremony in Kenya. Expressing public commitment to stop the practice of FGM/C is a promising approach to abandonment, according to several studies. Village-level declarations are one way to measure a program’s impact on FGM/C, according to PRB. Public declarations encouraged by the Joint Program are typically preceded by community discussions and engagement with community leaders and members. Public declarations do not guarantee a change in behavior, but they do have an influence on social norms, according to the Joint Program evaluation.
A public commitment applies social pressure that makes it difficult to return to old behaviors. In Egypt and Senegal, public commitments to end FGM/C occurred only after human rights discourse was introduced into basic education curricula, according to UNICEF. A 2008 UNICEF evaluation of a public declaration program in Senegal found that prevalence dropped by more than half in villages that had taken public pledges to abandon the practice. Since 2008, when the Joint Program was established, nearly 10,000 communities in 15 countries, representing about 8 million people, have renounced the practice. State and USAID currently have limited international assistance efforts to address FGM/C. In 2014, State and USAID each had one active standalone project to address FGM/C. In addition, we identified projects with broader goals that included components to address FGM/C, but we were unable to determine the full extent of FGM/C-related efforts because State and USAID do not specifically track these efforts. USAID has competing development priorities, which leave little funding available for FGM/C-related efforts, according to USAID officials. The largest current international assistance effort to address FGM/C is the UNFPA/UNICEF Joint Program on FGM/C. State provides funding to UNFPA and UNICEF but, to date, has not contributed to this Joint Program. However, if the general restrictions for UNFPA funding are met, there are currently no specific legal restrictions that would prohibit U.S. funding provided to UNFPA from being available for the Joint Program on FGM/C. State’s one standalone FGM/C program is in Guinea, where 97 percent of girls and women aged 15 to 49 have undergone FGM/C. This program is funded by about $1.5 million in grants from the Full Participation Fund, an initiative created by State and funded through various appropriations accounts to support gender integration efforts. It began in the fall of 2014 and will run through April 2016, according to State officials. Through partnerships with the government of Guinea, UNICEF, and 26 local civic and human rights organizations, the U.S. Embassy in Conakry established nationwide educational and media campaigns that engage policymakers, health professionals, traditional excisors, religious leaders, and the general public in efforts to abandon FGM/C. Activities include establishing a National Strategic Plan to abandon FGM/C in line with existing legal frameworks, capacity building and specialized training of institutions and individuals combating FGM/C, and support of multimedia information and communication awareness campaigns. U.S. embassy staff are responsible for monitoring the project, which has 13 performance indicators. Examples of the performance indicators include the number of girls and women identified as abandoning FGM/C practices; the reduction in the number of group excision ceremonies held in targeted districts and villages; and the number of people trained by the U.S.-funded intervention providing gender-based violence services relating to FGM/C (e.g., law officers, judges, teachers, excision practitioners, health workers, religious leaders, policymakers, and potential victims). State reported in July 2015 that the campaign had led to approximately 265 villages in Guinea voluntarily and publicly denouncing this harmful practice since the start of 2015. State plans to conduct a separate impact evaluation in 2017, according to State officials.
State provides funding to international organizations and nongovernmental organizations to assist vulnerable populations in refugee settings overseas in meeting their basic needs, including programs providing water and sanitation, shelter, and healthcare, as well as programs to prevent and respond to gender-based violence. In fiscal year 2014, State’s Bureau of Population, Refugees, and Migration (PRM) awarded about $35.7 million through 93 cooperative agreements for projects focused on or including gender-based violence activities. State officials told us that some of these projects may include assistance related to FGM/C; however, State does not capture this level of programmatic detail for these projects. We contacted project implementers for nine of the largest gender-based violence projects in countries where FGM/C is prevalent and found that two of them provided assistance related to FGM/C. One of these projects, which received $800,000 from State in fiscal year 2014, provided gender-based violence assistance to Central African Republic refugees and Chadian returnees in southern Chad, including education and awareness-raising about FGM/C with project beneficiaries and local law enforcement authorities. The project also provided specialist referral services to individuals who have undergone FGM/C. The other project, which received $1,000,000 from State in fiscal year 2014, was focused on prevention and response to gender-based violence for refugees in Uganda. This project included focus group discussions and interviews with selected members of the Somali refugee community in Uganda to raise awareness about the negative effects of FGM/C. State provides annual funding to UNFPA and UNICEF but, to date, none of this funding supports the Joint Program on FGM/C, the largest international effort to address FGM/C. In fiscal year 2014, the U.S. government provided funding to UNFPA and UNICEF that included general contributions to be used at the UN agencies’ discretion in support of their overall missions, as well as contributions pledged to specific projects such as humanitarian relief efforts, according to State officials. Congress routinely places restrictions on U.S. funding in annual appropriations for UNFPA. However, if the general restrictions for UNFPA funding are met, there are currently no specific legal restrictions that would prohibit U.S. funding provided to UNFPA from being available for the Joint Program on FGM/C. State and UNFPA officials agree that the restrictions on UNFPA funding would not stop the U.S. government from funding the Joint Program if it chose to devote funds to it. State officials told us that the Joint Program, which is a long-term effort, may not have been considered for targeted contributions to UNFPA because those funds are generally provided in response to short-term humanitarian appeals. However, on March 15, 2016, the Secretary of State announced that State intends to contribute to the UNFPA-UNICEF Joint Program on FGM/C. State depends on its embassies to use diplomacy to encourage abandonment of FGM/C, according to State officials. State officials from the Bureau of African Affairs provided several examples in which U.S. embassies engaged diplomatically with local communities to raise awareness or provide training about FGM/C. For example, in Chad, Central African Republic, Ethiopia, and Niger, U.S. embassies hosted screenings of a film about FGM/C for student or women’s groups to encourage abandonment of this practice.
Some of these screenings were held to commemorate the International Day of Zero Tolerance for FGM/C, which occurs every year on February 6. In Eritrea, the embassy held a Zero Tolerance Day event, displaying posters and distributing brochures on FGM/C. Since 2012, State’s annual Country Reports on Human Rights Practices have included information on FGM/C, according to State. State is required to report on the status of internationally recognized human rights for all countries receiving assistance and all United Nations member states. Since 2012, State has expanded the reports’ coverage to include multiple forms of gender-based violence, including FGM/C and child, early, and forced marriage. Among other things, the 2014 human rights reports we reviewed identified countries’ prevalence rates of FGM/C, common types of FGM/C, legal restrictions on FGM/C, and educational efforts undertaken to raise awareness about the dangers of this practice. Tracking host government actions and policies related to FGM/C as part of human rights reporting helps State build the knowledge necessary to diplomatically encourage actions to end this practice, according to State officials. In addition, State officials noted that the Department of the Treasury relies on this information to advise the United States Executive Director of each international financial institution, such as the World Bank, regarding whether to support loans to countries where FGM/C is practiced. USAID has competing development priorities, which leave little funding available for FGM/C-related efforts, according to USAID officials. For example, all Global Health Programs account funds are programmed to achieve outcomes in three priority areas in the health sector—ending preventable child and maternal deaths, creating an AIDS-free generation, and protecting communities from other infectious diseases. In addressing these goals, USAID uses funds first for programs expected to have the greatest impact in achieving them, according to USAID officials. Congressional reports accompanying appropriations laws for USAID funding included specific funding for FGM/C in 2000 and 2005, but no such report language currently exists. In 2000, a conference report included language directing USAID to make $1.5 million available to develop educational programs aimed at eliminating FGM/C. In 2005, the Senate Committee on Appropriations recommended that USAID spend $5 million to expand community-based efforts to combat FGM/C in high-prevalence countries. Recent congressional committee reports have directed that USAID provide funding to address obstetric fistula, which often occurs among populations of girls also at risk of FGM/C. We identified one standalone FGM/C program that was active during calendar year 2014. USAID supported the start-up of the University of Nairobi’s Africa Coordinating Center for the Abandonment of FGM/C (ACCAF) to advocate, educate, and create a supportive environment for cultural change; support networking and knowledge exchange between researchers, health professionals, and community workers on the abandonment of FGM/C; identify knowledge gaps and support and stimulate research in the field of FGM/C; and improve health care for women and children who have undergone FGM/C. The program runs from October 1, 2013, through September 30, 2016, and has a total funding level of $429,000 from the Global Health Programs appropriations account, according to USAID officials.
It was funded as a subaward to the University of Nairobi from an existing implementing partner in Kenya. The Center carried out four community trainings in fiscal year 2014 involving 114 community leaders, community professionals, health care providers, FGM/C practitioners, FGM/C survivors, and youths, according to its 2014 annual report. These 2-day training sessions addressed the different dimensions of FGM/C and helped prepare community members to advocate for FGM/C abandonment to the broader community. The Center also supported advocacy of FGM/C abandonment in the media, as well as networking between researchers and health professionals, and initiated studies of various aspects of the issue. As required in the subaward agreement, the ACCAF developed a performance monitoring plan that included 20 indicators. Examples of these indicators include “number of advocacy teams created” and “number of community-based providers…trained or supported.” In addition, the award agreement requires the ACCAF to produce, among other things, an impact assessment within 90 days after the project end date, which includes a summary of lessons learned, success stories, and conclusions about areas in need of future assistance. We found several examples of USAID projects—active in calendar year 2014—addressing broader Global Health or Democracy and Governance objectives that had intervention elements related to FGM/C. USAID officials told us, however, that they could not separate the level of funding for FGM/C efforts from other project activities. In addition, USAID could not verify the extent to which these examples represented all FGM/C-related efforts undertaken by missions in high-prevalence countries. USAID’s systems for tracking funding and programming gender-based violence efforts do not capture subactivities as specific as FGM/C efforts, according to USAID officials. For example, standard indicators developed by USAID and State to track the performance of assistance efforts include three indicators on gender-based violence prevention and response but do not specify the type of gender-based violence, such as FGM/C. Table 1 shows countries where USAID missions identified projects with FGM/C-related efforts that were active in calendar year 2014. USAID created a publicly available e-learning course on FGM/C designed for those implementing interventions to address this practice, including the staff of U.S. government agencies and nongovernmental organizations. The 2-hour and 30-minute course provides an overview of FGM/C, including definitions, medical risks from undergoing the procedure, prevalence, promising interventions, and lessons learned from studies of intervention efforts to prevent and respond to FGM/C. The course was first published in October 2008 and was last updated in October 2015. We provided a draft of this report to State and USAID for their review. State and USAID did not provide formal comments but each provided technical comments that we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of the report earlier, we plan no further distribution until 30 days after the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of State, and the USAID Administrator. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This is the first of two reports examining U.S. agencies’ efforts to address Female Genital Mutilation/Cutting (FGM/C) at home and abroad. This report (1) summarizes findings from recent U.S. and United Nations (UN) studies about the factors contributing to FGM/C and approaches to addressing this practice internationally and (2) examines the Department of State’s (State) and the United States Agency for International Development’s (USAID) current efforts to address FGM/C abroad. A second report will review U.S. efforts to address FGM/C domestically. To identify factors contributing to FGM/C and current approaches to addressing this practice, we reviewed recent U.S. and UN studies of international efforts to accelerate the abandonment of FGM/C and respond to victims of the practice. We selected studies that examined assistance efforts to accelerate abandonment of FGM/C in countries where this practice is concentrated. The studies were published in 2010 or later. We examined a USAID-funded Population Reference Bureau study; an evaluation of the United Nations Population Fund (UNFPA)-United Nations Children’s Fund (UNICEF) Joint Program on FGM/C, released in 2013; a summary report of the first phase of the UNFPA-UNICEF Joint Program, released in 2014; a UNICEF statistical overview of FGM/C, released in 2013; and a UNICEF review of efforts to accelerate FGM/C abandonment in five African countries, released in 2010. To determine State’s and USAID’s current efforts to address FGM/C abroad, we analyzed applicable strategy and policy documents and interviewed State and USAID officials involved in issues related to FGM/C. These strategies and policies include: USAID’s Gender Equality and Female Empowerment Policy, March 2012; State’s and USAID’s United States Strategy to Prevent and Respond to Gender-Based Violence Globally, August 2012; USAID’s Child, Early, and Forced Marriage Resource Guide; the USAID Guidance on Female Genital Mutilation/Cutting, updated February 2016; and multiple agencies’ United States Global Strategy to Empower Adolescent Girls, March 2016. To identify State’s efforts, we interviewed State officials in the Office of Global Women’s Issues and key bureaus, including the Bureaus of Population, Refugees, and Migration; Democracy, Human Rights, and Labor; African Affairs; and Near Eastern Affairs. We reviewed documents related to a State-funded FGM/C prevention program in Guinea. We also reviewed a list of 54 projects addressing gender-based violence in refugee settings overseas that received State funding in fiscal year 2014. We identified the 10 largest of these projects (with State funding of $500,000 or more) in countries where FGM/C is prevalent. For 9 of these projects, we contacted project implementers via e-mail to determine if the projects had any FGM/C-related components. We were unable to contact the project implementers for 1 of the 10 projects. We also met with officials from UNFPA and UNICEF to discuss U.S. funding for these agencies and their Joint Program on FGM/C. In addition, we reviewed State’s Country Reports on Human Rights Practices to determine how they addressed FGM/C issues.
To identify USAID’s efforts, we interviewed USAID’s Senior Coordinator for Gender Equality and Women's Empowerment and officials in key USAID bureaus, including Global Health; Democracy, Conflict, and Humanitarian Assistance; Africa; and the Middle East. We also collected information on projects with FGM/C components from USAID’s overseas missions in countries where FGM/C is prevalent. To obtain this information, we worked with USAID staff in the Office of the Senior Gender Coordinator to ask relevant USAID missions via e-mail to identify any FGM/C-related programming that was active in calendar year 2014. While 12 missions reported having programs with FGM/C components in 2014, we presented information on only the 5 projects that we were able to independently confirm as having FGM/C-related components through searches on websites of USAID’s missions or implementing partners. At the time of our request, Indonesia had not been identified as a country where FGM/C was prevalent, and therefore, the USAID mission there was not included among those contacted. We conducted this performance audit from June 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The type of female genital mutilation/cutting (FGM/C) commonly practiced varies by country, according to data presented by the United Nations Children’s Fund, based on surveys of mothers about FGM/C performed on their daughters (see table 2). For example, more than 20 percent of girls who underwent FGM/C in Somalia, Eritrea, Niger, Djibouti, and Senegal experienced infibulation (Type III)—the most radical form of FGM/C. In other countries, infibulation was uncommon. For example, in Egypt, mothers reported that infibulation represented 2 percent of cases. The United Nations Children’s Fund (UNICEF) reported in 2013 that 24 countries where Female Genital Mutilation/Cutting (FGM/C) is prevalent have enacted legislation related to FGM/C. These laws reportedly vary in their scope. UNICEF reports that some ban the practice only in medical facilities; others ban the practice everywhere. In addition to the contact named above, Leslie Holen (Assistant Director), Ashley Alley, Lynn Cothern, Howard Cott, Jill Lacey, and Nancy Santucci made significant contributions to this report.

More than 200 million girls and women alive today have undergone FGM/C in the 30 countries where available data show this harmful practice is concentrated. More than 3 million girls are estimated to be at risk for FGM/C annually in Africa. FGM/C comprises all procedures that involve partial or total removal of the external female genitalia, or other injury to the female genital organs. It is rooted in the cultural traditions of many communities but has several adverse health consequences, and the UN identifies it as a violation of human rights. In 2015, the UN General Assembly adopted a set of 17 Sustainable Development Goals for 2030 that included the elimination of FGM/C among its targets. UNFPA and UNICEF implement the Joint Program on FGM/C in 17 countries—the largest current international assistance effort to address FGM/C.
State and USAID include FGM/C as part of their global strategy to respond to gender-based violence. GAO was asked to review State's and USAID's efforts to address FGM/C abroad. This report (1) summarizes findings from recent U.S. and UN studies about factors contributing to FGM/C and approaches to addressing this practice and (2) examines State's and USAID's current efforts to address FGM/C abroad. GAO reviewed recent UN and USAID studies on assistance efforts to address FGM/C, analyzed related strategies and policies, and interviewed State and USAID officials. GAO also analyzed information on FGM/C-related projects and activities from USAID's overseas missions, and State and USAID bureaus. GAO is making no recommendations in this report. U.S. and United Nations (UN) studies since 2010 have identified a variety of factors contributing to the persistence of female genital mutilation/cutting (FGM/C). In many communities where FGM/C is prevalent, FGM/C is an influential social norm that ensures social acceptance and is commonly perceived as a religious obligation. In addition, medicalization of the practice— when it is performed by health care providers rather than traditional practitioners—increases the perception of legitimacy in some countries. Although the United Nations Children's Fund (UNICEF) reports that many countries where FGM/C is prevalent have passed laws banning the practice, enforcement is a challenge. The studies also have identified key approaches to addressing FGM/C, including efforts to implement community education programs, outreach and training for medical professionals, and the inclusion of FGM/C in broader gender equality and human rights programs. U.S. assistance efforts to address FGM/C are limited. The Department of State (State) and the U.S. Agency for International Development (USAID) each had one active standalone project in 2014, and the agencies also undertook some FGM/C-related efforts as components of projects with broader assistance goals. In addition, the U.S. government provides funding to the United Nations Population Fund (UNFPA) and UNICEF but, to date, has not contributed funds to the UN agencies' Joint Program on FGM/C. If congressional restrictions for UNFPA funding (such as the requirement for UNFPA to maintain U.S. funds in a separate account) are met, there are currently no specific legal restrictions that would prohibit U.S. funding provided to UNFPA from being available for the Joint Program on FGM/C. Competing development priorities, such as HIV/AIDS, leave little funding specifically for FGM/C, according to USAID officials. |
U.S. laws authorize the imposition of AD/CV duties to remedy unfair trade practices of other countries and foreign companies that cause material injury (or threat thereof) to domestic industries, namely dumping (i.e., sales at less than fair market value), and countervailable foreign government subsidies. The AD/CV duty laws implement U.S. international obligations under the World Trade Organization (WTO) Agreement on Antidumping, and the Agreement on Subsidies and Countervailing Measures. Should the United States impose AD/CV duties on a product, the government of the exporting country may institute dispute resolution proceedings against the United States pursuant to the WTO Understanding on Dispute Settlement if it believes that the United States has violated its obligations under the WTO agreements. Antidumping duty. AD duty law provides relief to a domestic industry that is materially injured, threatened with material injury, or whose establishment is materially retarded by reason of imports sold in the United States at less than fair value. The law provides relief by authorizing the imposition of an additional import duty on the dumped imports. U.S. trade law permits the imposition of AD duties if (1) Commerce determines that the imported goods are or are likely to be sold in the United States at less than fair value; and if (2) ITC determines that a U.S. industry is materially injured or threatened with material injury, or that the establishment of an industry in the United States is materially retarded, by reason of imports of that merchandise. Countervailing duty. CV duty law provides a similar kind of relief to a domestic industry that is materially injured, threatened with material injury, or whose establishment is materially retarded by reason of imported goods that have received certain foreign government subsidies. The law provides relief by authorizing the imposition of an additional import duty on the subsidized imports. U.S. trade law provides that CV duties will be imposed if (1) Commerce determines that the foreign government or any public entity within the foreign country is providing, directly or indirectly, a countervailable subsidy with regard to the manufacture, production, or export of the subject merchandise that is imported or sold (or likely to be sold) for importation into the United States; and (2) if in the case of merchandise imported from a Subsidies Agreement country, ITC determines that a domestic industry is materially injured or threatened with material injury, or that the establishment of a domestic industry is materially retarded, by reason of imports or sales for imports of those goods. The process for obtaining the imposition of an AD/CV duty generally involves petitioners and interested parties who support and oppose the petition, trade law firms, Commerce, and ITC. Petitioners and interested parties in support of the petition may include domestic manufacturers, producers or wholesalers, and certain unions and trade associations. Parties in opposition to the petition for the imposition of duties may include foreign exporters and producers, U.S. importers of the articles under investigation, and governments of the exporting countries. Law firms that specialize in international trade frequently represent petitioners and the opposing parties before Commerce and/or ITC, the two agencies responsible for conducting AD/CV duty investigations. 
Commerce determines whether to initiate an AD or CV duty investigation after examining a petition filed on behalf of a domestic industry. Commerce conducts an investigation of dumping and/or subsidies while ITC simultaneously conducts a separate investigation of material injury to a domestic industry. Both Commerce and ITC make preliminary and final determinations before Commerce imposes an AD or CV duty. According to Commerce, the process for determining whether to impose an AD or CV duty consists of two key phases: (1) petition and (2) investigation. During the petition phase, a prospective petitioner gathers and presents information that might provide a reasonable basis for Commerce to believe that dumping or subsidization of a particular product might be occurring and causing or threatening material injury to a domestic industry, according to Commerce officials. Before deciding whether to petition for the imposition of AD/CV duties, the prospective petitioner considers the costs and benefits of doing so, including the time, administrative requirements, and legal costs associated with the process. U.S. law specifies that the petition allege the elements necessary for the imposition of the AD or CV duty. Commerce and ITC regulations require prospective petitioners seeking the imposition of AD/CV duties to provide detailed information, which is reasonably available to the petitioner, in their petition to Commerce and ITC. This information includes the composition of the domestic industry, identity of importers, volume and value of production of the domestic like product by the petitioner and each U.S. producer, and information concerning material injury. Prospective petitioners are also required to include information such as the proportion of total exports to the United States that the petitioners believe each producer is selling at less than fair value or benefiting from countervailable subsidies accounted for during the most recent 12-month period. For AD duty petitions, prospective petitioners should provide pricing and cost information relevant to calculating dumping margins. For CV duty petitions, prospective petitioners should provide factual information relevant to the alleged countervailable subsidy. In addition, prospective petitioners must demonstrate that they have sufficient support from the domestic industry. As prospective petitioners and their legal representatives contemplate whether or not to file a petition, they may request information or assistance from specialized offices within Commerce and ITC, described below: Commerce Import Administration’s Petition Counseling and Analysis Unit. Staff in this unit of Commerce’s International Trade Administration are available to help companies understand U.S. trade remedy laws dealing with dumping and countervailable foreign government subsidies and to provide technical assistance with preparing and filing a petition. According to Commerce officials, the department budgeted about $440,000 plus agency overhead for the Petition Counseling and Analysis Unit in fiscal year 2012. ITC’s Office of Investigations and Trade Remedy Assistance Office. Office of Investigations staff are available to counsel all companies seeking assistance in understanding the injury phases of AD/CV duty investigations, and regularly provide technical assistance and prepetition counseling to all companies, including SMEs. According to ITC officials, the agency dedicated about $59,500 plus agency overhead for prepetition counseling and assistance in 2012. 
The Trade Remedy Assistance Office was established to provide eligible small businesses, small trade and worker associations, and their representatives with additional information and support. According to ITC officials, the agency dedicated about $74,000 plus agency overhead for the Trade Remedy Assistance Office in fiscal year 2012. Once a petition is filed, Commerce has sole authority to initiate or not initiate an investigation based on its examination of the petition. In the 20 calendar days after a petition is filed, Commerce examines the proposed scope of the investigation, the domestic like product, industry support for the petition, the adequacy of the dumping or subsidy allegation(s), and the information provided to demonstrate injury, according to Commerce officials. If Commerce decides not to initiate an investigation at this point, the case is closed. During the investigation, Commerce sends a questionnaire to selected foreign producers and exporters (and the foreign government, in the case of a CV duty investigation), to collect information for its determination of whether imports are being dumped or subsidized. Commerce issues supplemental questionnaires as needed to clarify certain information or obtain additional information. To establish the adequacy and accuracy of information submitted in response to questionnaires and other requests for information, Commerce conducts an on-site examination of the records of the party that provided the information and interviews company personnel who prepared the questionnaire responses and are familiar with the sources of the data in the responses. Commerce uses the information obtained to determine the appropriate amount of duty. According to Commerce, the agency may hold a hearing, upon request, to provide parties with an opportunity to express positions and respond to agency questions about factual and legal issues in the case. During its simultaneous investigation, ITC sends out separate, detailed questionnaires to collect trade, pricing, and market data from all U.S. producers, U.S. importers, and foreign producers of the product under investigation to determine whether these imported goods are causing material injury to the domestic industry. These questionnaires ask responding U.S. producers to indicate whether they support, oppose, or take no position on the petition. According to ITC officials, all data submitted by firms in questionnaires are treated as business proprietary, and thus, individual producer responses with regard to support of the petition are confidential. During the investigation, ITC holds a hearing that allows petitioners, other domestic producers, and opposing parties, who are typically represented by legal counsel, to express their position on the case and respond to questions that ITC Commissioners may have about factual information or legal issues in the case. Some parties within an industry may not support a petition for an AD/CV duty, or at least may not publicly support it, for a variety of reasons. For example, the product under investigation may be an input used by a domestic manufacturer, or the distributor may represent domestic and foreign producers. Once a petition is filed, the length of time for completing an AD/CV duty investigation can range from 205 to 420 days, depending on whether it is an AD, CV, or joint AD/CV duty investigation, and the number of extensions applied as permitted under U.S. law.
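The stage-by-stage deadlines behind this range are described in the next paragraph. As a rough illustration only, the following minimal Python sketch adds up those deadlines for a hypothetical CV-only case with no extensions; ITC's 45-day preliminary injury determination runs concurrently with Commerce's review of the petition and so does not add to the total. The stage labels and the assumption that each statutory deadline is fully used are illustrative, not a description of any actual case.

# Back-of-the-envelope timeline for a CV-only investigation, using the
# stage deadlines cited in this report and assuming no extensions.
cv_stage_deadlines = [
    ("Commerce decides whether to initiate the investigation", 20),   # days after petition filed
    ("Commerce preliminary CV determination", 65),                    # days after initiation
    ("Commerce final CV determination", 75),                          # days after preliminary
    ("ITC final injury determination", 45),                           # days after Commerce final
]

day = 0
for stage, deadline in cv_stage_deadlines:
    day += deadline
    print(f"Day {day:>3}: {stage}")

# An affirmative ITC final decision is followed by a duty order within 7 days.
print(f"Day {day + 7:>3}: Commerce issues the CV duty order (if affirmative)")

Under these assumptions, the final determinations fall on day 205, consistent with the low end of the range cited above; AD and joint cases, with their longer preliminary deadlines, and any extensions permitted under U.S. law account for the longer timelines.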
Generally, ITC makes a preliminary decision regarding material injury within 45 days after a petition is filed. If the ITC preliminary decision is negative, the investigation is terminated. If the ITC determination is affirmative, Commerce generally makes a preliminary decision regarding whether imports are being dumped or subsidized within 140 days after initiation for an AD duty case and 65 days for a CV duty case. Even if Commerce makes a negative preliminary decision, the investigation continues until Commerce makes its final decision, generally within 75 days of its preliminary decision. If Commerce’s final decision is negative, then the investigation is terminated and no further investigative action is taken by either agency. If Commerce’s final decision is affirmative, ITC generally makes a final injury decision within 45 days. If ITC’s final decision is affirmative, Commerce issues an AD/CV duty order within 7 days. Figure 1 shows the petition and investigation phases of the AD/CV duty process and the associated time frames. Our analysis showed that some SMEs petitioned for the imposition of AD/CV duties to seek relief from unfair trade practices; over the past 6 years, about one-third of the petitions for AD/CV duties listed SME petitioners and about a quarter of the total number of petitioners were SMEs. As figure 2 (left) shows, from 2007 through 2012, a total of 56 petitions were filed for the imposition of AD/CV duties. Of these, 21 petitions (38 percent) listed at least one SME petitioner, with 8 petitions listing only SME petitioners, and 13 petitions listing both SME and non- SME petitioners. The majority of the petitions (36 petitions, or 63 percent of the 56 total) represented non-SME petitioners. As figure 2 (right) shows, SMEs represented about a quarter of the 147 petitioners named in the 56 petitions filed for the imposition of AD/CV duties from 2007 through 2012. Of these 147 petitioners, 38 (26 percent) were SMEs. The majority (21) of the 37 SME petitioners, for whom we had data on annual sales revenue, had annual sales revenue of at least $10 million. Two of the 21 SME petitioners had annual sales revenue of at least $50 million. In contrast, 3 SME petitioners had annual sales revenue of under $1 million. Close to half (17) of the 38 SME petitioners were in the iron and steel industry (see fig. 3). Similarly, close to half (54) of the 109 non-SME petitioners (i.e., those petitioners that did not meet the SBA criteria) were also in the iron and steel industry. SME petitioners in the iron and steel industry included producers of steel garment hangers, steel nails, drill pipes, and welded stainless steel pressure pipes. Other SME AD/CV duty petitioners included producers of wood products, aluminum products, and machinery, among other things. Table 1 lists the products included in the petitions filed by SME petitioners from 2007 through 2012. Producers of goods under investigation, including SMEs, can benefit from successful AD/CV duty petitions even if they are not petitioners. Producers may support the petition but choose not to join as petitioners to avoid bearing the cost, including petition preparation and representation throughout the proceeding. In some cases individual petitioners may cover all or most of the legal costs, thereby sparing co-petitioners from paying for any or much of the cost. Additionally, according to Commerce, it is possible that some non-petitioners may support the petition by assuming some of the cost. 
Trade remedy duties add to the price of foreign imports and can benefit domestic producers of the competing like products, regardless of whether or not they choose to be petitioners. When deciding whether to become a petitioner, a producer weighs the expected benefits against the cost, according to our literature review and interviews we conducted with experts. For a producer with a small share of the market, the expected benefits from a successful petition might be small, leading to a stronger incentive not to share the cost by becoming a petitioner. Petitioners also have to weigh the benefit against the cost of having additional producers join as petitioners. For example, more petitioners could help an industry obtain greater resources to use for advocacy and increase the likelihood of a successful petition, but having more petitioners would also require more effort and cost. It is difficult to measure the extent to which non-petitioners, including SMEs, have benefited from successful AD/CV duty petitions without knowing either how producers allocated the cost among themselves or the specific reasons why certain producers chose not to be named on a petition. Those reasons could include the fear of retaliation from internationally active producers and not wanting to share the cost, as discussed earlier. Public ITC reports do not disclose the position of the non-petitioning domestic producers, unless a non-petitioning producer has publicly disclosed that information. However, we can estimate the maximum number of producers who may have benefited from a duty associated with a successful petition without becoming a petitioner, by calculating the share of non-petitioners within the industry and assuming that all non-petitioners benefit from successful petitions without having to share the cost. As discussed earlier, ITC typically sends questionnaires to all producers (petitioners and non-petitioners) in the industry during its material injury investigation. Of the 93 SMEs surveyed by ITC for the petitions filed from 2007 through 2012, 38 were petitioners and 55 were non-petitioners. These 55 non-petitioners represent the maximum number of producers that may have benefited from the duty without having to sign on as petitioners and share the cost. Knowledgeable parties we interviewed—including Commerce and ITC officials, three academics, representatives from two industry associations, six trade lawyers, two Congressional Research Service trade experts, and two SME petitioners—identified key challenges to SMEs’ ability to pursue the imposition of AD/CV duties. The challenges most frequently cited were (1) high legal costs, (2) difficulty obtaining domestic and foreign pricing and production data, and (3) difficulty demonstrating industry support. Other challenges cited less frequently included fear of retaliation and lack of knowledge regarding AD/CV duties. Legal costs associated with pursuing an AD/CV duty case are a key challenge for SMEs. Agency officials and trade lawyers we spoke with stated that it is expensive for SMEs to pursue AD/CV duty cases because the process generally involves trade lawyers. Five of the six trade lawyers we interviewed roughly estimated that the average legal cost for pursuing an AD or CV duty case from petition through the investigation was between $1 million and $2 million, with approximately 70 to 75 percent of the cost incurred during the investigation phase. 
One trade lawyer stated that the average cost for pursuing a case could be considerably more than $2 million, depending on the complexity of the case or whether it involves multiple countries. According to agency officials, prospective petitioners are not required to hire a trade lawyer to file a petition, but legal representation is advantageous because trade lawyers can obtain confidential information from multiple domestic producers and foreign respondents and have the expertise to guide a case through the investigation phase. Trade lawyers or authorized representatives also have the ability to obtain confidential information collected by Commerce or ITC by entering that agency’s Administrative Protective Order (APO) system. A company would not be able to obtain this information because producers typically seek to protect their pricing and production data from disclosure to their competitors. In addition, trade lawyers or other representatives are able to provide advocacy and guidance through the often complicated proceedings before Commerce and ITC. For example, trade lawyers advocate for their clients and challenge those who oppose the case at public ITC hearings where ITC Commissioners ask witnesses numerous detailed questions to gather more information. Commerce officials were only aware of one instance where a petitioner did not hire a trade lawyer until the investigation phase. In this case, the officials said the petitioner was the sole producer in its industry. Legal costs may be incurred during each phase of an AD/CV duty investigation, as well as after the investigation is completed. Representatives from all six trade law firms we spoke with told us that the overall legal cost of pursuing an AD/CV duty varies according to the nature of the case. Factors affecting the overall cost include the number of respondent countries and companies, the complexity of the case, the number of products involved, and how much data are available. In addition to the costs associated with pursuing a case through the petition and investigation phases, additional costs may be incurred after the completion of a case during appeals and administrative reviews. For example, SMEs may hire a trade lawyer to represent them if the final Commerce or ITC determination is appealed to the U.S. Court of International Trade, or further appealed from that court to the U.S. Court of Appeals for the Federal Circuit. SMEs may also use trade lawyers to assist with any requested yearly administrative reviews of a case, which determine the final amounts of duties owed on past imports and set new duty deposit rates for future imports. Trade attorneys look at a range of trade remedy options to potentially address a trade issue, and may advise potential petitioners that seeking the imposition of an AD/CV duty is not in their best interest. Petitioners are generally responsible for paying their legal costs, and the amount each entity pays depends on the particular circumstances of the case. Two trade lawyers we spoke with stated that in some instances, petitioners may agree to allocate costs according to each petitioner’s share of production in the given industry. Another trade lawyer explained that a petitioner with higher revenue may assume most or all the costs. One SME petitioner we spoke with said that his company covered the majority of costs associated with its case because it was the largest producer in a small industry composed of relatively few other companies. 
In some instances, outside sources such as trade associations may cover some of the costs. For example, a trade association representative shared an example of a case where the association financed legal costs using Continued Dumping and Subsidy Offset Act distributions from a previous case. During the petition and investigation phases, it is often difficult for prospective petitioners to obtain domestic and foreign pricing and production data required by Commerce and ITC regulations and guidance. Both Commerce and ITC post guidance on their websites to help prospective petitioners understand the types of data required for a petition and the manner in which it should be presented and organized. Petitioners are required to provide general data such as their name, address, and some background information describing the extent of their involvement in the industry. In addition, they must state whether they have filed within the past 12 months, are now filing, or are planning to file for other forms of import relief involving the good in question. Several types of pricing, production, and injury data are also required for the submission of a petition, as follows: Pricing data: For a CV duty case, prospective petitioners must provide reasonably available information regarding the law, regulation, or decree under which the alleged countervailable subsidy is provided along with the value of the subsidy to the exporters or producers of the subject merchandise. For an AD duty case, prospective petitioners must provide reasonably available data relevant to the calculation of the U.S. price of the merchandise and the normal value of the foreign like product. Production data: Prospective petitioners must provide a detailed description of the imported merchandise, which should include the classification of the merchandise in the Harmonized Tariff Schedule of the United States. In addition, to the extent reasonably available to them, prospective petitioners should provide the names, addresses, and telephone numbers of the foreign producer(s) and exporter(s) believed to be selling the good at less than fair value or benefiting from a countervailable subsidy. These data must also include the volume and value of each firm’s exports of the merchandise to the United States. The same data are required of the firms believed to be importing the merchandise into the United States, to the extent that it is reasonably available. Prospective petitioners should also provide data on domestic production of the merchandise in question and information relating to the degree of industry support for the petition. Injury data: The petition should contain data to support the allegation that a domestic industry has been materially injured, or threatened with material injury, as a result of the alleged unfair imports. As a part of the injury data, each prospective petitioner should list all sales and revenues lost resulting from the alleged unfair imports during the 3 years preceding the filing of the petition. Collecting and reviewing detailed pricing and production data during the petition and investigation phases places an administrative burden on SMEs. SMEs have fewer employees than larger firms and generally lack the expertise needed to take on the additional tasks of data collection, according to agency officials. 
Representatives from all six law firms and agency officials we spoke with agreed that SMEs face challenges when collecting the pricing and production data for a petition because the data required are extensive and difficult to obtain. An SME petitioner explained that his company employed legal counsel because his company lacked the resources and expertise required to research and gather the data required to file a petition. The SME petitioner further explained that it is particularly difficult to collect and review domestic and foreign pricing data, which are composed of several inputs—such as electricity, water, and raw materials—whose price varies based on geographic location. One trade lawyer we spoke with hired Chinese nationals to assist with data collection for AD/CV duty cases involving China. Petitioners also face an administrative burden during AD/CV duty investigations. For example, they are required to respond to detailed ITC questionnaires that collect the trade, pricing, and financial data ITC uses in making its determination of whether a domestic industry is materially injured by reason of the imports under investigation. According to ITC officials, the questionnaire is comprehensive and takes approximately 50 hours to complete, which may place a strain on SMEs’ limited resources. Two trade lawyers noted that during the investigation phase they review data collected by Commerce from foreign respondents. For example, a trade lawyer may conduct research leading to a discovery that data reported by a foreign producer may not be accurate. In such an instance, the trade lawyer may ask Commerce to send a supplemental questionnaire to collect additional data from the foreign producer. According to Commerce officials, this information is important because Commerce uses data in the questionnaire to calculate whether there is dumping or countervailable subsidization and at what level. Trade lawyers also help petitioners determine the precise description of the imported goods, which according to Commerce, it uses to ascertain the scope of an investigation. For example, a 2010 AD order on seamless refined copper pipe and tube from China and Mexico defined the product very narrowly as “seamless circular refined copper pipes and tubes, including redraw hollows, greater than or equal to 6 inches (152.4mm) in length and measuring less than 12.130 inches (308.102 mm) (actual) in outside diameter...” The scope definition went on to define the product with even greater specificity. It can be difficult for prospective petitioners to garner sufficient support from other producers to demonstrate to Commerce that the petition will meet the statutory requirement of industry support. A petition meets this requirement if the domestic producers or workers who support the petition account for (1) at least 25 percent of the total production of the domestic like product, and (2) more than 50 percent of the production of the domestic like product produced by that portion of the industry expressing support for, or opposition to, the petition. According to four trade lawyers and agency officials, it can be difficult for SMEs in an industry with numerous producers to organize themselves in order to meet the statutory requirement for industry support. For example, SMEs in geographically dispersed industries with numerous producers—such as aquaculture and agriculture—may need to coordinate with hundreds of domestic producers to obtain the support required for their petition. 
SMEs may form an industry association to help them coordinate and establish support for a petition. For example, several hundred shrimp producers formed the Coalition of Gulf Shrimp Industries to file a petition on behalf of their industry. These producers would have had more difficulty undertaking the necessary steps to file a petition if they had remained an unorganized, geographically dispersed collection of individual companies, according to an SME petitioner from the coalition. Both Commerce and ITC have staff who respond to inquiries and provide information and assistance to SMEs to relieve some of the administrative challenges and costs of filing a petition. Commerce and ITC officials stated that much of the assistance they provide involves helping SMEs obtain the data needed to file a petition and reviewing draft petitions. If a petition does not establish that it has the support of domestic producers or workers accounting for more than 50 percent of total domestic production, Commerce staff will, after the petition is filed, send domestic producers a polling questionnaire tailored to the product and industry in question, or rely on other information, to determine whether the industry support criterion is met. According to Commerce officials, in cases where the industry is dispersed, they can provide assistance to petitioners in their efforts to form a coalition. For example, after discussing options with Commerce, the numerous shrimp fishing companies and processors formed an association that enabled them to file six AD petitions in 2004, according to Commerce officials and a representative for shrimp producers. Officials from both Commerce and ITC also stated that staff are on hand to help SMEs obtain publicly available data. For example, if prospective petitioners do preliminary work and data gathering with the assistance of Commerce staff in advance of hiring law firms, this may reduce legal costs, according to Commerce officials. In addition, officials from both agencies stated that they frequently review draft petitions and comment on how the petitions can be improved to ensure that they include all the required detailed data to support initiation of an AD/CV duty investigation. According to officials from both Commerce and ITC, the assistance they provide during pre-petition counseling can help reduce the amount of time that trade lawyers would otherwise bill to the client. U.S. law authorizes Commerce to initiate an AD/CV duty investigation without a petition, but according to Commerce officials, the department reserves the use of this authority for special circumstances consistent with international trade agreements. Commerce has used this authority only once, in 1991, under such special circumstances. In that year, when Canada unilaterally terminated a 1986 trade agreement with the United States, Commerce self-initiated a softwood lumber investigation. The United States and Canada had entered into an agreement in 1986 regarding the importation of softwood lumber that required the U.S. industry to withdraw its CV duty petition and Commerce to terminate its ongoing CV duty investigation. According to Commerce officials, because the initiation of the softwood lumber case followed a bilateral dispute between the two governments, it is an example of how Commerce applies special circumstances as criteria for using its self-initiation authority.
Because self-initiation opens an investigation without a petition, it could reduce some initial costs to SMEs but could also have adverse effects, including raising questions of whether the action was taken consistent with U.S. obligations under international trade agreements. Opening an investigation without a petition could reduce the costs that SMEs incur during the petition phase, but would likely have little impact on overall costs because most legal costs are incurred during the investigation phase. For example, one trade lawyer and an official representing a coalition of SMEs suggested that self-initiation could lead to decreased legal costs because less time would be billed and lawyers’ involvement could start at the investigation phase. However, according to Commerce officials, changing the department’s practice to permit increased use of its self-initiation authority could be vulnerable in U.S. courts. In addition, Commerce officials stated that the limited use of self-initiation is consistent with language in the World Trade Organization (WTO) Antidumping Agreement, the Subsidies and Countervailing Measures Agreement, and the General Agreement on Tariffs and Trade, which limits the ability to self-initiate investigations to instances in which there are “special circumstances.” According to Commerce, the department has limited resources to self-initiate investigations, and self-initiation without significant participation of the industry is unlikely to result in the imposition of duties. Finally, Commerce noted that when it initiates an investigation based on a petition or by self-initiation, its decision is based on information available to it that indicates that a formal investigation is warranted. Therefore, when Commerce initiates an investigation by petition or by self-initiation, it needs the cooperation of the affected industry to help gather information that is generally the same as that required in a petition. According to Commerce officials, the data needed to show that a domestic industry is experiencing injury as a result of dumping or subsidization are most readily available to that same industry. Therefore, Commerce would need significant cooperation and data from domestic producers to meet the requirements to initiate an investigation. Commerce officials also stated that the United States tries to serve as a role model for other WTO signatory countries, so any increased use of self-initiation could lead to additional adverse effects. For example, other countries might open investigations without the data supporting allegations of unfair trade practices, which are normally included in a petition. In addition, both Commerce and ITC officials expressed concerns that without support and direct participation from domestic producers affected by unfair trade practices, it would be difficult for ITC to obtain the detailed, company-specific information in the 45 days available to it to make a preliminary determination. According to ITC officials, the questionnaires they send to domestic producers to obtain the data that support allegations of material injury are based on product definitions usually included in the petitions. Therefore, if an investigation is initiated without a petition, ITC would lack key information it needs to develop its questionnaires. U.S. AD/CV duty laws implement U.S. international obligations under the World Trade Organization (WTO). 
If the United States imposes AD/CV duties on a product, a foreign government may institute dispute resolution proceedings against the United States pursuant to the WTO Understanding on Dispute Settlement if it believes that the United States has violated its obligations under the WTO agreements. Commerce and ITC require detailed information to generate sufficient evidence to substantiate a case. High legal costs, difficulty obtaining pricing and production data, and garnering industry support may prove too much of a challenge for many SMEs to overcome. However, these challenges are part of a process designed to ensure that the imposition of AD/CV duties on foreign exports is backed by sufficient evidence of unfair trade practices and is consistent with U.S. law and internationally agreed-upon standards. Whether or not a U.S. industry ultimately files an AD/CV duty petition is a complex decision made after considering the resources required for the petition and investigation process, whether there is sufficient industry support, and the probable outcome. While both Commerce and ITC provide some assistance to SMEs, in the absence of additional public resources to help SMEs address the challenges of high legal costs and difficulty obtaining pricing and production data, limited options exist to address challenges to pursuing the imposition of AD/CV duties. While increased use of Commerce’s authority to self-initiate AD/CV duty investigations could lower some initial costs, its impact would be limited and could strain resources and have other adverse effects, such as foreign governments initiating investigations without data to support allegations of unfair trade practices. We provided a draft of this report to the International Trade Commission (ITC) and the Department of Commerce (Commerce) and requested comments, but none were provided. ITC and Commerce both provided technical edits that were incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Department of Commerce, the International Trade Commission, the Office of the United States Trade Representative, and the United States Small Business Administration. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Lawrance Evans at (202) 512-4802 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Safeguard laws give domestic producers relief from surges of imported goods. The principal safeguard laws that the International Trade Commission (ITC) administers include the global safeguards law, the China safeguards law, and various safeguards laws implementing free trade agreements to which the United States is a party, according to ITC officials. Most ITC safeguard investigations are conducted on the basis of a petition filed by a representative of a domestic industry. However, ITC may be required to conduct investigations at the request of the President or U.S. Trade Representative, or upon resolution of the House Committee on Ways and Means or the Senate Committee on Finance. Safeguard laws require action by the President to put relief into effect. In contrast to the antidumping (AD) and countervailing (CV) duty laws, safeguard laws do not require the finding of an unfair trade practice. 
Instead, ITC must first find that increased imports are a substantial cause of serious injury (or threat thereof) to a domestic industry producing an article like or directly competitive with the imported article. If ITC makes an affirmative injury determination, it recommends a remedy to the President. The President makes the final decision on whether to apply a remedy measure, and if so, the type, amount, and duration of the remedy. The President may accept or modify ITC’s remedy recommendation, or may elect not to impose a safeguard. According to ITC and Department of Commerce (Commerce) officials, trade experts, and trade lawyers, requests for safeguard investigations have been far less frequent than for AD/CV duty investigations in recent years because of the political uncertainty of whether relief will be granted. Since 2001, safeguard measures have been imposed twice. In 2002, safeguard measures were imposed on imports of certain steel products under the global safeguards provisions following an affirmative ITC injury determination and remedy recommendations in response to a request for investigation made by the U.S. Trade Representative and a subsequent resolution by the Senate Committee on Finance. These measures were in effect from March 2002 until December 2003, when they were terminated by the President following an adverse report by the World Trade Organization Dispute Settlement Body. In 2009 the President imposed higher tariffs on imports of certain passenger vehicle and light truck tires from China under the China safeguard provision following an affirmative ITC injury determination and remedy recommendation in response to a petition from a labor union representing U.S. workers producing tires. The safeguard measure on tires from China was in effect between September 2009 and September 2012. China challenged the safeguard measure before the World Trade Organization Dispute Settlement Body, which upheld the measure. Table 2 below shows the final determinations of both ITC and the President for all safeguard cases investigated by ITC from 2000 through 2012. Our objectives were to examine (1) the extent to which small and medium-sized enterprises (SME) have petitioned for the imposition of antidumping (AD) and countervailing (CV) duties, (2) key challenges to SMEs’ ability to pursue the imposition of AD/CV duties, and (3) assistance provided by Commerce and ITC to help SMEs address these challenges. To examine the extent to which SMEs have petitioned for the imposition of AD/CV duties, we obtained data from ITC public reports on a total of 406 companies. The ITC data included certain characteristics of petitioners and non-petitioners, which ITC identified as part of the industry, for AD/CV duty petitions filed from 2007 through 2012. These data contained the names and locations of the companies, and the products in question in the petitions. To identify which companies were SMEs, we used a combination of both LexisNexis and Dun & Bradstreet databases to search for the number of employees and affiliation information. Our method for collecting and assessing the information on company employment size and their affiliation to determine whether a company was an SME was as follows: We first searched the LexisNexis database for the number of employees and affiliation information of each of the 406 companies. If LexisNexis did not contain the company or the number of employees, we then searched Dun & Bradstreet. 
We were able to find the number of employees for 386 of the 406 companies. Based on the information obtained from these two databases, we determined whether the company was an SME using the following criteria: (1) the Small Business Administration (SBA) Office of Advocacy’s definition of SME based on the number of employees (i.e. fewer than 500 employees), and (2) the company affiliation information. In other words, we designated companies that were not subsidiaries of a larger company and had fewer than 500 employees as SMEs. If we were unable to determine whether a company was a subsidiary, we did not designate it as an SME. This methodology reflects the overall conservative approach we developed to avoid over-counting the number of SMEs. We identified 46 SME petitioners at the end of this step. As a check on the reliability of the data, for the 46 SME petitioners we identified in the prior step, we conducted a second search in Dun & Bradstreet for the number of employees and affiliation information. For the companies where the number of employees differed, we used the larger employment number to make the final determination as to whether a company was an SME. After this check, we concluded that 38 petitioners were SMEs. Information on the number of employees for 7 of the 38 SME petitioners came from one database, which we determined to be adequate because there was a high level of correspondence between the two databases. For the 42 companies for which we had employment numbers from both data sources, the employment numbers were largely consistent. There were only two companies for which one source showed under 500 employees and the other showed 500 and over. To assess the characteristics of the SME petitioners, we analyzed the annual sales revenue and the industry distribution of the 38 SME petitioners. We collected sales revenue data from LexisNexis and Dun & Bradstreet on SME petitioners to determine whether they had annual revenues of $10 million or more and, whether they had annual revenues of $50 million or more, or less than $1 million. Our method for collecting and assessing the information on company sales revenue was as follows: We first used LexisNexis and Dun & Bradstreet to find the annual sales revenue for the 38 SME petitioners and were able to find it for 37 companies. Annual sales revenue data for 30 of the 37 companies were in both databases, for 7 in either LexisNexis or Dun & Bradstreet, for 33 in LexisNexis, and for 34 in Dun & Bradstreet. For the 30 companies for which we had revenues from both sources, if both the Lexis and Dun & Bradstreet values fell into the same category, we assigned the company to that category. For the category $10 million and above, 19 companies fell in the same category according to both databases. For the category $50 million and above, 2 companies fell in the same category according to both databases. For the category below $1 million, 1 company fell in the same category according to both databases. This way of counting the number of companies reflects the overall conservative approach we developed to avoid over-counting the number of SMEs—in this case, those with sales revenues of $10 million and above or $50 million and above. As a check on the reliability of the revenue data, we compared the categorization of whether the company had sales revenues of at least $10 million for the 30 companies for which we had data from both sources. 
Overall, the level of correspondence in this categorization was 25 out of 30 companies—i.e., the two sources showed 5 companies belonging to different categories and 25 belonging to the same category. For the 7 companies for which we had annual sales revenue data from only one database, we used the value obtained from that source. We determined this to be a valid decision based on the relatively high level of concurrence between the two data sources when assessing SME sales revenue. We found 2 companies with annual sales revenue of $10 million and above, 2 companies with annual sales revenue of less than $1 million, and no company with annual sales revenue of $50 million or above. We then summed up the number of companies in each category based on the counts we obtained in the two steps described above. We assessed the reliability of the ITC data on petitions filed from 2007 through 2012 by interviewing agency officials who were knowledgeable about the data. We assessed the reliability of the information obtained from LexisNexis and Dun & Bradstreet by reviewing ITC data to ensure that the company names and locations were consistent. We also reviewed existing information about the databases. When we found inconsistencies between the two databases, we applied a methodology as described above to ensure that we were conservative in our count of SMEs and their revenues. On the basis of these steps we determined that the data were sufficiently reliable for our purposes. To identify key challenges to SMEs’ ability to pursue the imposition of AD/CV duties, we interviewed Commerce officials in the Import Administration Office, Petition Counseling and Analysis Unit, and Office of General Counsel and ITC officials in the Trade Remedy Assistance Office, Office of the Inspector General, and Office of Investigations. In addition, we interviewed three academics, representatives from two industry associations, six trade lawyers, two Congressional Research Service trade experts, and two SME petitioners. We selected the academics, industry association representatives, and trade lawyers on the basis of recommendations from CRS trade experts and Commerce and ITC officials. We selected a sample of SMEs to interview based on a range of different products represented on petitions. We contacted 18 SMEs who filed petitions, but only 2 volunteered to participate in our interview. The trade lawyers we spoke with represented 38 percent of the 21 petitions with SMEs filed from 2007 through 2012. We administered a set of standard questions to all six trade lawyers we interviewed. To obtain information on legal costs, we asked representatives from each of the six trade law firms for a range of the approximate costs of pursuing AD/CV duties. One declined to respond and the remaining five offered estimates rather than examples of actual fees charged to clients. To identify the data requirements for filing a petition, we reviewed relevant requirements and guidance, including ITC’s 2008 Antidumping and Countervailing Duty Handbook and applicable statutes and federal code. We also reviewed a sample of petitions submitted by prospective petitioners and ITC reports. To examine assistance provided by Commerce and ITC to help SMEs address these challenges, we interviewed the same parties as for the prior objective. To obtain information on self-initiation, we reviewed applicable U.S. statutes and international agreements. We analyzed Commerce documents to determine the extent to which self-initiation had been used recently. 
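The screening and categorization rules described above amount to a short, mechanical classification procedure. The sketch below is a minimal illustration of that logic in Python, assuming simple record fields (employee counts, revenue values, and an independence flag drawn from the two databases); the function names are ours, and this is not code GAO used.

```python
# Minimal sketch of the conservative SME screening rules described above.
# Record fields and function names are assumptions for illustration only.
SME_EMPLOYEE_CAP = 500  # SBA Office of Advocacy definition: fewer than 500 employees

def is_sme(lexis_employees, dnb_employees, independent):
    """Conservative test: the company must be known to be independent (not a
    subsidiary), and the larger of the available employee counts must be
    under 500, to avoid over-counting SMEs."""
    counts = [c for c in (lexis_employees, dnb_employees) if c is not None]
    if not counts:               # no employment data found in either database
        return False
    if independent is not True:  # subsidiary, or subsidiary status unknown
        return False
    return max(counts) < SME_EMPLOYEE_CAP

def at_least_10_million(lexis_revenue, dnb_revenue):
    """Conservative revenue check: count a company in the $10 million-and-above
    group only if every source that reports revenue places it there."""
    values = [v for v in (lexis_revenue, dnb_revenue) if v is not None]
    if not values:
        return None              # no revenue data available
    return all(v >= 10_000_000 for v in values)

# Examples: employee counts that straddle the cap use the larger value,
# and disagreeing revenue values are not counted in the higher category.
print(is_sme(480, 510, independent=True))            # False
print(is_sme(120, None, independent=True))           # True
print(at_least_10_million(12_000_000, 9_500_000))    # False
```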
In addition, we gave Commerce a set of written questions regarding increased use of self-initiation and we examined the department's written responses. Afterwards, we discussed the issue further with trade lawyers, and with Commerce and ITC officials. We conducted this performance audit from June 2012 to June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Christine Broderick (Assistant Director), Tom Zingale, Ming Chen, and Heather Hampton made key contributions to this report. Vida Awumey, Debbie Chung, David Dornisch, Etana Finkler, Alfredo Gomez, Grace Lui, and Erin McLaughlin provided additional assistance.

The United States and many of its trading partners have enacted laws to remedy the unfair trade practices of other countries and foreign companies that cause or threaten to cause material injury to domestic producers and workers. U.S. laws authorize the imposition of AD duties on certain imports that were dumped (i.e., sold at less than fair market value) and CV duties on certain imports subsidized by foreign governments. Commerce and ITC conduct AD/CV duty investigations, most of which are initiated based on petitions filed on behalf of a domestic industry. According to the U.S. Census, in 2010, small and medium-sized enterprises accounted for 45 percent of employment in the manufacturing sector. GAO was asked to review SMEs' pursuit of trade remedies. This report examines (1) the extent to which SMEs have petitioned for the imposition of AD/CV duties, (2) key challenges to SMEs' ability to pursue the imposition of AD/CV duties, and (3) assistance provided by Commerce and ITC to help SMEs address these challenges. GAO examined petition data from ITC and interviewed petitioners, trade lawyers, trade association officials, academics, trade experts from the Congressional Research Service, and Commerce and ITC officials. In addition, GAO reviewed AD/CV duty petitions and reports. Some small and medium-sized enterprises (SMEs)--which are defined by the Small Business Administration's Office of Advocacy as independent businesses with fewer than 500 employees--have petitioned for the imposition of antidumping (AD) and countervailing (CV) duties to seek relief from unfair trade practices. Among the 56 petitions filed between 2007 and 2012, GAO found 21 that included at least 1 SME petitioner. In addition, the 56 petitions represented a total of 147 petitioners, of which 38 were SMEs. The majority of these SME petitioners had annual sales revenue of at least $10 million. Close to half of the total SME petitioners were in the iron and steel industry. Since participation in the petitions is not mandatory, producers, including SMEs, may benefit from a successful petition even if they choose not to join as a petitioner. SMEs face three key challenges when pursuing the imposition of AD/CV duties: (1) high legal costs, (2) difficulty obtaining domestic and foreign pricing and production data, and (3) difficulty demonstrating industry support.
Trade lawyers estimated that the cost of pursuing an AD or CV case during the petition and investigation phases can average between $1 million and $2 million and sometimes more, especially if the case involves multiple countries. It is often difficult for prospective petitioners to obtain domestic and foreign pricing and production data required by Department of Commerce (Commerce) and International Trade Commission (ITC) regulations and guidance. In addition, it can be difficult for prospective petitioners to demonstrate enough industry support to meet statutory requirements. Commerce and ITC both have offices that provide information and assistance to SMEs to help them meet some of the administrative requirements and reduce costs. Commerce has the authority to self-initiate an AD/CV duty investigation without a petition and has used this authority only once since 1991. According to Commerce officials, the Department uses this authority only when it has significant participation from the industry. Self-initiation would likely have little impact on SMEs' overall costs since SMEs incur most costs during the investigation phase. Also, self-initiation could have adverse effects, including raising questions of whether the action was taken consistent with U.S. obligations under international trade agreements. GAO is not making any recommendations.
Freight shipped by rail travels over an extensive network that consists of 140,000 route-miles across the United States. Freight railroads are generally privately owned and rely on their revenues to invest in maintenance and operations that support safe and efficient transportation services. During the last 40 years, the freight railroad industry has consolidated. Currently, the U.S. railroad industry includes seven major Class I freight railroads. Railroads carry a variety of commodities and may compete with each other and other shipping modes such as trucks and barges for business. The Interstate Commerce Commission (ICC) was established in 1887, originally to regulate almost all of the rates that railroads charged shippers to ensure that the rates were reasonable and just. The Railroad Revitalization and Regulatory Reform Act of 1976 and the Staggers Rail Act of 1980 were enacted in response to the economic slowdown of the 1970s, when rising costs, losses of traffic and revenue to motor carriers, and bankruptcies were affecting the railroad industry. These laws substantially reduced federal regulation and encouraged greater reliance on competition to set rates. In particular, the Staggers Rail Act gave railroads increased freedom to price their services according to market conditions, including the freedom to use differential pricing, a practice in which railroads can charge higher rates to shippers of commodities such as coal and chemicals, which are more dependent on the rail network. The ICC Termination Act of 1995 transferred the ICC's regulatory functions to the Surface Transportation Board (STB), an independent adjudicatory body. The STB now serves as the industry's economic regulator, resolving rate and service disputes between shippers and railroads for regulated goods shipped under tariff. These acts also allowed railroads and shippers to enter into confidential contracts to set terms and rates. STB has no authority to review the terms or rates for freight shipped by contract. To protect shippers served by only one railroad with no competitive shipping alternatives—known as "captive" shippers—from unreasonably high rates, STB established a process in which shippers that transport their freight by tariff could potentially challenge the reasonableness of a rail rate and seek financial relief from the railroads, a process that we refer to as a "rate-relief" process. Some commodities were later exempted from STB's jurisdiction, primarily commodities that could be shipped by boxcar or intermodal containers, in part because these goods can also be transported by other competitive alternatives such as barge or truck and are therefore unlikely to be captive. According to a Department of Agriculture report, these exemptions, in addition to contracts, effectively freed about 75 to 85 percent of freight traffic from economic regulation by STB. Currently, the most frequently transported commodities that could be subject to STB rate regulation, by tons shipped, are agricultural products, specifically grain, soybeans, and sunflower seeds; assorted food items; coal; chemicals; and nonmetallic minerals. Railroads are required, upon request, to offer tariff terms and rates to shippers, but railroads are not required to offer a contract. Contracts are confidential, mutually agreed upon, and may contain rates for specific routes and other service terms for a specific shipper.
According to the selected railroads and shippers we spoke to, contracts generally contain multiple, shipper-specific O-D routes—agreements on terms and rates for specific shipments over specific shipping routes—because a shipper may have multiple routes. For example, according to a Class I railroad representative, a chemical production facility may receive its raw materials from multiple origins, with each route having agreed upon terms and rates, as shown in figure 1. STB has the authority to review the reasonableness of rates and service terms for regulated commodities if shipped by tariff. Tariffs are a pricing document issued by the railroads showing rates that are usually not customer-specific. Tariffs also spell out the standard terms of the railroad. These standard terms differ by commodity, covering everything from loading specifications to billing procedures. Under STB’s rate-relief process, in which it resolves disputes regarding the terms or rates in a tariff, in order for STB to review a rate as potentially unreasonable, the rate must not be under contract and the commodity must not be exempt from STB rate regulation. STB may then consider the reasonableness of a rate only if it also finds that the railroad has market dominance over the shipment at issue—that is, if (1) the rate is equal to or exceeds 180 percent of the railroad’s variable costs for providing specific services to the shipper and (2) the railroad does not face effective competition from other railroads or other modes of transportation. If STB decides the rate is unreasonable, it can order the railroad to pay reparations to the shipper for past shipments and decide the maximum rate the railroad is permitted to charge for future shipments. Under its authority, STB considers the reasonableness of a challenged rate using one of its three tests, as chosen by the shipper. There is a financial limit imposed on the relief available depending on the test chosen: Stand-Alone Cost (SAC): The most commonly used test is the Stand- Alone Cost test, which requires the shipper to design a hypothetical railroad, tailored to serve the specific route(s), to simulate the competitive rate that would exist in a perfectly efficient network. STB then compares the challenged rate to the hypothetical rate. During the rate relief process, both the railroad and the shipper have the opportunity to present their views to STB. Simplified SAC: The simplified SAC seeks to create a cost-effective alternative to the SAC test. The simplified test eliminates or restricts the evidence parties can submit to the actual operations and services provided by the railroad. Three Benchmark: The Three Benchmark test is faster and less rigorous, but limits the potential return for a successful rate challenge. STB determines the reasonableness of a challenged rate by examining three benchmarks, or tests, that assess rate markups. Between 1996 and 2016, STB reviewed 50 rate reasonableness cases. Of these cases, 36 used the SAC rate case process, 5 used the simplified SAC, 5 used the three benchmark process, and 4 used a different methodology. To date, most of the 50 STB rate cases have been for coal (32) or chemical (16) shippers. Among the 50 cases brought before STB since 1996, about half (26) were settled without an STB decision, while those that were decided by STB were split fairly evenly in favor of the shippers (11) or the railroads (10). 
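The 180 percent figure above is a simple revenue-to-variable-cost (R/VC) ratio and is only the quantitative precondition for a market dominance finding; the qualitative competition analysis and the three rate reasonableness tests are separate steps. As a rough illustration, the sketch below shows that arithmetic; the function names and example numbers are our own assumptions, not STB methodology.

```python
# Illustrative arithmetic for the quantitative market dominance screen
# described above: a challenged rate can be considered only if revenue is at
# least 180 percent of the railroad's variable cost for the movement.
RVC_THRESHOLD = 1.80  # 180 percent revenue-to-variable-cost ratio

def revenue_to_variable_cost(rate_revenue, variable_cost):
    """Return the R/VC ratio for a movement."""
    if variable_cost <= 0:
        raise ValueError("variable cost must be positive")
    return rate_revenue / variable_cost

def meets_quantitative_screen(rate_revenue, variable_cost):
    """True if the rate meets or exceeds the 180 percent R/VC threshold."""
    return revenue_to_variable_cost(rate_revenue, variable_cost) >= RVC_THRESHOLD

# Example with hypothetical numbers: a $2,700 rate against $1,400 of variable
# cost yields an R/VC ratio of about 1.93, so the quantitative screen is met.
print(round(revenue_to_variable_cost(2700, 1400), 2))  # 1.93
print(meets_quantitative_screen(2700, 1400))           # True
```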
According to representatives of the four Class I railroads we spoke to, while contracts’ and tariffs’ terms and rates are developed using similar methodologies, they may contain key differences. For example, selected stakeholders said that contracts are often customized to a specific shipper and may reflect the railroad’s and shipper’s preferences for a given route or shipment. Figure 2 describes the similarities and differences between contracts and tariffs, based on our interviews with representatives from the four largest Class I railroads and selected shippers, pertinent laws, as well as sample contract language and tariffs provided by the railroads. Railroad representatives said they rely on publicly available standard terms to govern shipments over their network, regardless of whether the shipments are moving under contract or tariff. These terms are unique to each railroad and potentially negotiable for shipments under contract—as are all aspects of a contract. A railroad’s standard terms cover all aspects of a shipment, from loading specifications to billing procedures, and may be contained in multiple documents. As a result, according to one railroad representative, contracts typically incorporate or reference all of the railroad’s applicable standard terms unless certain terms are specifically negotiated by the shipper. Sample contract language provided by three Class I railroads show contracts may also include negotiable terms not typically found in tariffs, such as volume commitments, discounts, and service standards. However, according to two shipping associations representing coal shippers, in recent years, contracts have become more standardized and often reference other governing documents that spell out additional rules and conditions. More specifically, a representative from one of these shipper associations said that some contract features have become more difficult to negotiate, in part, because of the increased standardization. For example, they told us most contract negotiations now are generally limited to rates, fuel surcharges, train size, loading and unloading timeframes, and volume commitments. According to three of the selected Class I railroads and a representative from a shipper association, to start a contract negotiation, typically a shipper may issue a request for proposal outlining its needs or informally contact the railroad. Railroad representatives said the railroad will typically respond with a standard contract. One railroad representative told us the railroad may administer thousands of contracts at any given time, so maintaining standard terms across their contracts allows the railroad to manage its business portfolio and operations more efficiently. For example, a representative from another Class I railroad said its standard terms allow it to efficiently handle different shippers’ cars on the same train. According to representatives from the four Class I railroads we interviewed, they individually determine contract and tariff rates based on a number of similar market factors associated with the supply and demand for the commodity the shipper plans to transport. Because railroads must offer a tariff rate upon request, railroad representatives said they consider more general market factors in determining tariff rates. 
Selected shippers told us that tariff rates are usually, but not always, higher than contract rates, and sometimes are the same, in part, because contract rates, according to a railroad representative, can be negotiated and reflect market factors specific to the shipper as well as discounts for volume commitments. In our discussions with the four Class I railroads, they said they look at the extent of competition when determining contract and tariff rates. According to one railroad representative, this includes considering customer feedback, past experience, and market research to determine the level of three types of competition: Competition from other forms of transportation: Generally, a railroad can charge more if it does not have to compete with another railroad or other transportation forms, such as barge, pipeline, or truck. As a result, a shipper served by more than one railroad or competitive alternative can create more leverage during contract negotiations. For example, representatives of the selected railroads told us if they are competing with the trucking industry for business, they may offer lower rates to obtain the shipper’s business. Competition from other freight being shipped to the same market: A chemical shipper, as part of its comments to STB’s 2011 competition proceeding, stated that rail rates may impact a shipper’s ability to compete nationally and globally. According to railroad representatives we spoke to, they examine commodity markets to ensure they are pricing rates to allow shippers to be competitive. For example, a railroad representative told us it examines the rates it charges domestic crude oil shippers to ensure it remains competitive with imported crude oil. Representatives from the railroad industry also said understanding the extent of geographic competition within a market ensures the railroad maintains volume over their network. If rates are priced too high, a shipper may lose business and transport less freight. However, according to coal and grain shippers we interviewed, tariff rates may not always reflect current commodity market conditions. Competition through substitution: Some shippers may be able to obtain a commodity it needs from a different location, or it may use an alternative commodity. For example, a railroad told us a power plant may choose to obtain its coal from another mine or switch from coal to natural gas if prices make doing so advantageous and the plant is able and configured to do so. Contract and tariff rates also reflect the characteristics of a given shipment. More specifically, representatives from the four Class I railroads we interviewed said the rates reflect the characteristics of the commodity being shipped and its particular origin and destination. For example, a shipper association representing chemical shippers and a railroad representative said that commodities such as toxic inhalation hazards are more expensive to ship under contract and tariff because of their hazardous nature and liability concerns. In addition, railroad representatives also told us rates may vary by distance, and longer routes may be more expensive. Railroads rely on selling freight transportation services at a particular rate to recover their operating and infrastructure costs. However, while representatives from all four Class I railroads we spoke to told us they do not develop their rates to provide transportation based on these costs, they also said they will not typically set rates below the cost of providing rail transportation. 
According to a Department of Agriculture report, the costs associated with providing rail transportation for each shipment, such as maintenance and rail crew costs, serve only as a floor below which rates should not go and bear little relationship to individual rail rates, which are closer to what the shippers are willing and able to pay. According to a representative from one Class I railroad, competitive factors may sometimes require the railroad to charge less than its costs to ship. However, railroads can also charge higher rates where their networks are highly valuable to their shippers. Selected shippers and railroad representatives also said contracts generally provide key advantages by allowing for increased financial and logistical certainty for both parties. According to selected shippers we spoke to, they prefer to maintain the minimum number of contracts needed to meet their transportation needs. A shipper association said shippers can more efficiently manage multiple routes under one contract because of the stability in rates over the duration of the contract. As a result, according to the railroad representatives, shippers often request contracts covering multiple routes—potentially up to thousands of routes, according to one railroad representative—because managing each route under a separate contract, or by tariff, would be too complex to administer. For example, one chemical shipper told us it may move one or two carloads per day across 1,000 separate routes over a year. In addition, contracts allow shippers and railroads to customize terms to better fit their shipping needs, such as offering guaranteed pickup and delivery times or specific services. For example, according to one railroad representative, in a contract with multiple routes, a railroad may offer a lower rate on some routes, while increasing rates on others, to allow a shipper to break into a new market. Another railroad representative said the railroad can also provide additional logistics services at a lower cost within a contract that contains multiple routes because of the efficiency gains and economies of scale that a large contract offers. Furthermore, according to representatives from three Class I railroads, a contract's overall volume commitment and certainty help the railroads better plan future investments. Specifically, one railroad representative said a contract allows the railroad to allocate locomotives and rail crews more efficiently and ensure a consistent source of revenue. As a result, contracts generally include agreements on the amount of volume a shipper is willing to commit. Railroad representatives and selected shippers told us that the guaranteed volume from a contract may also create additional leverage for the shipper during contract negotiations. Railroads and shippers said that, to gain additional volume guarantees, railroads typically offer discounted rates to shippers in exchange for volume commitments. Railroad representatives also said these discounts may be for one, multiple, or all of the routes in a contract, and they said the more volume a shipper is willing to commit, the better deal it can expect to receive. However, if shippers fail to meet their expected volume commitment, they may be subject to financial penalties. Moreover, according to one railroad representative, higher-volume shippers are also more likely to request custom terms. However, some selected shippers said tariffs may provide advantages in certain situations.
For example, they said contract negotiations may be too long and costly for shippers with infrequent or small volume shipments. Furthermore, according to coal shippers, in some instances, the tariff rate may be the same as the contract rate, and shippers incur potential penalties associated with failing to meet contractual volume commitments. Contract usage across the most frequently transported commodities regulated by STB has generally increased or stayed relatively the same in recent years. Specifically, from 2005 through 2014, the ton-miles shipped under contract for these commodities have increased by 6 percent, from about 705,000 ton-miles to about 800,000 ton-miles, a measurement that combines weight and mileage. In 2014, about 76 percent of regulated freight was shipped by contract. However, according to railroad representatives, each railroad’s business differs. Figure 3 shows the percent of selected commodities with regulated rates by ton-miles shipped under contract from 2005 through 2014. From 2005 to 2014, the percentage of coal and chemicals shipped under contract has increased when measured on a ton-mile basis. Specifically, in 2005, 55 percent of all chemical shipments, measured in ton-miles, were shipped by contract; by 2014, that percentage had increased to 85 percent. Similarly, over the same time period, the ton-miles of coal shipped increased from 86 percent to 94 percent. A representative of a shipper association said chemical shipments under contract increased, in part, because the potential for improved negotiated terms may provide additional certainty and lower rates. Both coal and chemicals are also heavily dependent on the railroad’s network. We were unable to obtain contract duration data from selected railroads and shippers, but selected coal and grain shippers as well as a railroad representative told us that the duration of contracts has generally decreased during the last 10 years. According to another rail representative, contract duration depends on the commodity and the market; for example, this official said shippers in markets that change frequently may prefer shorter contracts. Additionally, representatives from two selected shipper associations that represent various commodities told us the duration of contracts has decreased over the years, with longer contracts becoming increasingly uncommon, though they prefer longer contracts because of their rate stability. A coal-shipper association representative also told us current coal contracts are typically for about 3 to 5 years, in part because a shorter contract would result in more frequent contract renegotiation. Another coal shipper association representative said the duration of contracts to transport coal has gotten shorter, mostly because of the increased uncertainty in the energy market. In the past, we have reported that the duration of contracts has declined, in part, because of the railroads’ desire to quickly react to shifting market demand. One railroad representative we spoke to said as markets become more dynamic, shippers can continue to expect to receive shorter contracts. In addition, selected shippers told us the railroads may also be shifting certain costs to shippers. More specifically, according to a grain shipper, railroad service commitments that used to be common are no longer included in contracts. Further, one coal shipper said the railroad previously supplied its own personnel to load the coal, but now, with no reduction in rates, the shipper is required to do so. 
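Ton-miles, the measure used for the contract-share figures earlier in this section, simply combine shipment weight with distance moved. The sketch below shows how a contract share of ton-miles could be computed from shipment-level records; the record format is an assumption for illustration, not the layout of STB's data.

```python
# Illustrative computation of ton-miles and the share moved under contract,
# as the measure is used in this section. The shipment record format is an
# assumption for illustration only.
def ton_miles(tons, miles):
    """One ton-mile is one ton of freight moved one mile."""
    return tons * miles

def contract_share(shipments):
    """shipments: iterable of (tons, miles, under_contract) tuples.
    Returns the fraction of total ton-miles that moved under contract."""
    total = sum(ton_miles(t, m) for t, m, _ in shipments)
    contracted = sum(ton_miles(t, m) for t, m, under_contract in shipments if under_contract)
    return contracted / total if total else 0.0

# Example: 50,000 contracted ton-miles out of 68,000 total, a share of about 0.74.
example = [(100, 500, True), (60, 300, False)]
print(round(contract_share(example), 2))  # 0.74
```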
Despite the volume discounts and other advantages to contracts with multiple O-D routes, some shippers said that contracts effectively constrain them into paying higher rates on some routes, because railroads will propose contracts with multiple O-D routes where some of the routes are priced unreasonably high, in the shippers’ view. This is particularly an issue for shippers that are “captive”—that is, shippers served by a single railroad without an economically viable transportation alternative because a trucking or barge route either does not exist or would be too costly. Figure 4 below illustrates the ways in which a shipper can be captive on all routes or just some routes. Eight of the nine shippers we interviewed stated that captive shippers have no other options than the one railroad that serves them, which can result in higher freight rail rates. Further, according to our analysis of a 2011 STB proceeding on rail competition, 6 of the 11 shippers that provided comments to the proceeding about being a captive shipper said that captive shippers have no other options than the railroad that serves them, which can result in higher rates. Additionally, a 2010 study commissioned by the STB concluded that captive shippers tend to pay higher shipping rates than otherwise similar shippers with access to additional railroad or water competition. As previously discussed, the selected railroads we interviewed said they develop prices based on market forces and the railroad’s supply of available equipment and rail lines. This is the basis for differential pricing, a pricing strategy where railroads charge shippers with few or no other options more than shippers with more options for their freight. Differential pricing is permitted and encouraged in the rail freight market by design. The Staggers Rail Act of 1980, which reduced the economic regulation of railroads, provided that rail rates should be set by the competitive market forces to the maximum extent possible. According to railroad officials, railroads make up for lower revenues from highly competitive routes by charging higher rates in less competitive routes where they have market dominance. All four railroads and the Association of American Railroads also said that railroads use differential pricing to charge the highest rate shippers are willing to pay so railroads can cover infrastructure costs. Although shippers have the option of challenging a tariff rate before the STB, they do not have this option for challenging rates they view as unfair if agreed to in a contract, since the STB has authority to review tariff rates but not contract rates. Although contracts are negotiated between railroads and shippers, some shippers told us that because contracts often contain rates for multiple routes, they may be pressured to accept higher rates they view as unfair as part of the package of rates they agree to, particularly if some of the shipper’s routes are captive. Specifically, three selected shippers said this situation can arise when faced with a contract containing multiple O-D routes where they view some routes as priced too high by the railroads. In this situation, they have two choices: 1) accept the contract and pay higher rates for some routes, or 2) reject the contract and opt instead to move their freight by tariff, which could result in higher prices for all the routes since, as previously discussed, shippers and railroads said that tariff rates are generally higher than contract rates. 
According to two of the shippers we interviewed, combining captive and competitive routes together in one large contract can create high rates on captive routes. In contrast, officials from one railroad said that the STB tariff rate-relief process prevents railroads from forcing shippers to pay unreasonable rates, even in contracts. The railroad representatives said this occurs because railroads do not want to have a rail rate case before the STB and a shipper that thought a contract rate was unreasonable could always ask for the tariff in place of the contract to be able to file a rate case. When an STB rate case is an option, six of the interviewed shippers said that STB tariff rate-relief cases are complicated, time consuming, and expensive, in part, due to the challenges in determining reasonable and unreasonable rates. Consequently, these shippers said they are deterred from pursuing cases or from requesting tariff rates from railroads in order to pursue a rate case. An STB staff member said that when a rate case involves hundreds of O-D pairs, resolving the case can take substantial resources and time for STB, the railroads, and the shippers involved in the litigation. This STB staff member also stated that cases can take up to 3 years and be so costly that some shippers may think it is not worth bringing a case. The STB rate-relief process was designed, in part, to maintain reasonable rates for captive shippers. However, determining reasonable versus unreasonable rates can be challenging given the market forces involved. Once a rate case is filed, STB must determine whether the rate is reasonable because it allows a railroad to earn adequate revenue for its fixed costs, or whether the rate is unreasonable because it allows the railroad to earn more than adequate revenue from its market dominance. According to economic literature, differential pricing allows railroads to collect adequate revenue to cover all costs and earn a reasonable return on their investments. Railroads not earning adequate revenue to remain in business and to adapt their network to meet future shipper demands would be problematic for both railroads and the shippers that rely on them. However, some research shows that railroads may be recovering from the rising costs, losses, and bankruptcies in the 1970s. For example, according to two economic studies of railroad economics, the Class I railroads may now be earning adequate returns on investment, and perhaps sometimes in excess of adequate returns, so measures to reduce the amount contributed by captive shippers to railroad returns may be appropriate. More recently, in September 2016, STB determined that four Class I railroads were revenue-adequate for the year 2015, specifically that these railroads achieved a rate of return equal to or greater than STB’s calculation of the average cost of capital to the freight rail industry. STB is currently reviewing its rate relief process as required in the Surface Transportation Board Reauthorization Act of 2015. In June 2016, STB released an Advance Notice of Proposed Rulemaking outlining measures to expedite its handling of SAC rate cases. Comments were due in August 2016. STB staff said that measures such as standardizing evidence submissions would expedite SAC cases for purposes of fairness to litigants and improving overall agency efficiency. They also said that since the current proceeding on expediting SAC rate cases had just begun, it was too soon to know how changes might affect the SAC process. 
We provided a draft of this product to STB for comment prior to finalizing this report. We received technical comments from STB, which we incorporated as appropriate. We will send copies of this report to appropriate congressional committees. In addition, we will make copies available to others upon request, and the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or Flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the individual named above, other key contributors to this report were Andrew Huddleston, Assistant Director; Sarah R. Jones, Analyst-in-Charge; Namita Bhatia Sabharwal; Herbert J. Bowsher; Stephen M. Brown; Russell C. Burnett; Ross A. Gauthier; Richard A. Jorgenson; Grant M. Mallie; Cheryl M. Peterson; Malika Rice; Kelly L. Rubin; Amelia (Michelle) Weathers; Alwynne Wilbur; and William T. Woods.

The nation’s freight rail network is vital to the economy, moving about 40 percent of U.S. freight and generating over $73 billion in revenue in 2013. Railroads charge various rates for moving freight from a particular origin to a particular destination. A rate may be set by the railroad in a public pricing document—known as a tariff—or negotiated through a private contract with a shipper. While most freight ships under contract, some shippers have raised concerns about how railroads negotiate contracts that contain multiple origin-to-destination routes. Though shippers that use rail to transport freight under tariff may seek relief from STB for rates they view as unreasonable, STB has authority to review tariff rates, but not contract rates. The Surface Transportation Board Reauthorization Act of 2015 included a provision for GAO to review rail transportation contract proposals containing multiple origin-to-destination routes. This report addresses (1) similarities and differences in shipping freight under a tariff versus a contract, and the potential benefits to using each, and (2) views of selected stakeholders on the implications of shipping freight under a tariff versus a contract. GAO analyzed STB data from 2005 to 2014, reviewed documents provided by and interviewed representatives of the four largest freight railroads, the Association of American Railroads, STB officials, and representatives of nine shippers selected to represent a mix of commodities transported by rail. While rail contracts and tariffs are similar, contracts offer the flexibility to customize rates and terms to a specific shipper, according to selected stakeholders GAO interviewed. Both contract and tariff rates are based on market factors, such as competition, according to representatives from the four largest U.S. freight railroads. However, they noted that in developing contract rates, a railroad will also examine factors specific to each shipper and may negotiate discounts in exchange for the shipper committing to provide a specified volume over the contract's duration. According to railroad representatives, the volume commitments negotiated in a contract allow the railroad to more efficiently allocate its resources and ensure consistent revenues.
Also, selected shippers told GAO that they can more efficiently manage multiple shipping routes under one contract because of the stability in rates over the duration of the contract. In contrast, tariffs may be preferred for smaller shipments. Despite the volume discounts contracts can offer, some selected shippers said that contracts that include rates for multiple origin-to-destination routes can contain high rates on some routes. This is particularly an issue for shippers that are “captive”—that is, shippers served by a single railroad without an economically viable transportation alternative. Representatives of the four largest freight railroads said they charge what shippers are willing to pay to cover infrastructure costs for the entire rail network. However, according to selected shippers GAO interviewed, combining captive and non-captive routes in one contract can compel shippers to accept some unreasonable rates. Shippers subject to contract rates they view as unreasonable cannot challenge those rates at the Surface Transportation Board (STB) because contracts are not subject to STB oversight. A railroad official said that a shipper may ask the railroad to switch rates the shipper views as unreasonable to a tariff. However, selected shippers said that tariff rates are generally higher than contract rates, so they are reluctant to forgo a contract with a mix of rates in favor of using a tariff. While the STB process for reviewing tariff rates was designed, in part, to protect captive shippers from unreasonably high rates, selected shippers said the process is complicated, time-consuming, and expensive. In 2016, STB began to reform the tariff review process with the goal of improving its efficiency.
Historically, a principal concern in noncompetitive contracting situations has been how to ensure that the prices proposed by contractors are fair and reasonable. Recognizing this risk, the Congress enacted the Truth in Negotiations Act in 1962. The act represents the government’s key safeguard against inflated contract prices on noncompetitive contracts. The act requires contractors and subcontractors to provide the government with cost or pricing data supporting their proposed prices and to certify that the data are accurate, complete, and current. If the government later discovers that the contractor submitted data that were not accurate, complete, and current, the act allows the government to pursue remedies, such as a reduction in the contract price. Interest and penalties can also be assessed under certain conditions. These provisions are designed to give the government the information it needs to ensure fair and reasonable contract prices. The negotiation process with certified cost or pricing data can be lengthy, and the documentation requirements for both sides can be extensive. The process starts when the contractor provides estimated costs for subcontracts and materials along with a detailed breakdown of the work to be performed, including estimated manufacturing labor costs, engineering costs, tooling costs, and other direct costs for each segment of the work. As figure 1 shows, DOD contracting officers then review these data, along with price analysts from the Defense Contract Management Agency and auditors from the Defense Contract Audit Agency. The government and the contractor then negotiate cost elements to settle on a price. Once this is done, the contractor certifies the data as accurate, complete, and current. DOD may conduct an audit after the contract’s award. When enacted in 1962, the Truth in Negotiations Act did not include an explanation of what constituted an “exceptional case,” and it has never been amended to define that term. Up until 1995, the Federal Acquisition Regulation (the implementing regulation) largely mirrored the Truth in Negotiations Act. The waiver provision in the Federal Acquisition Regulation was amended in 1995 to allow contracting officers to waive data when sufficient information was available to determine a fair and reasonable price. However, the regulation still provided little guidance on the circumstances that would warrant a waiver in a particular case. The first sentence of the current provision states that the “head of the contracting activity . . . may, without power of delegation, waive the requirement for submission of cost or pricing data in exceptional cases.” The waiver provision also states that the head of the contracting activity “may consider waiving the requirement if the price can be determined to be fair and reasonable without submission of cost or pricing data.” Aside from stating that a waiver may be considered in this situation, the regulation provides no further guidance on the circumstances that would warrant a waiver. Finally, the regulation includes no other guidance to help agency officials weigh the potential risks and benefits of granting a waiver in a particular case, as opposed to obtaining certified data. Congressional conferees have stated that the term “exceptional circumstances” requires more than the belief that it may be possible to determine the contract price to be fair and reasonable without the submission of certified cost or pricing data.
For example, a waiver may be appropriate in circumstances where it is possible to determine price reasonableness without cost or pricing data and the contracting officer determines that it would not be possible to enter into a contract with a particular contractor in the absence of a waiver. In response to these concerns, DOD was directed in 1998 to work with appropriate executive branch officials to clarify situations in which an exceptional case waiver may be granted. According to DOD, no actions have been taken to clarify when waivers should be granted. Using DOD’s contract database, we identified 20 waivers valued at more than $5 million each in fiscal year 2000. The total value of these waivers was about $4.4 billion. As table 1 shows, six buying organizations approved these waivers. Five of the contracts included waivers that covered multiple-year purchases. Six waivers that we identified involved large, complicated acquisitions, which combined represented about 94 percent of the dollar value of the waivers we reviewed. (See table 2.) We could not assess the extent to which waivers are being used at DOD because DOD’s contract database is unreliable. However, for the contract actions we examined, we were able to verify data by reviewing the actual contracts and supporting documents. Contract pricing or waiver documents for all of the cases we reviewed stated that sufficient information was available to determine the price to be fair and reasonable without the submission of cost or pricing data and did not cite other circumstances to justify the waivers. This justification complies with the Federal Acquisition Regulation. In three cases, our review found that other factors strongly influenced the decision to waive certified cost or pricing data. These involved purchases for crashworthy fuel systems and combat vehicle track as well as a foreign military sale of F-16 fighter aircraft to Greece. In the crashworthy fuel system purchase, the company’s business model requires it to sell its products at catalog prices rather than use a traditional government approach based on certified cost or pricing data, which the company never provides. This unique supplier also developed all of its products and maintains a production base exclusively at the company’s expense. In the case of the purchase of combat vehicle track, the company’s commercial accounting system did not segregate unallowable costs from its overhead accounts, and the company did not want to run the risk of government claims and possible damage to its reputation because of the inadvertent failure to exclude such costs from government proposals. As a result, the company would not provide certified data. The Army and the company agreed to reduce general and administrative costs allocated to this buy by 25 percent to compensate for possible unallowable costs. Finally, in the F-16 sale, two approaches were considered. The first called for accepting the price offered by the contractor during a competition between different aircraft types. The second called for traditional negotiations based on the certification of cost or pricing data. The contractor objected to providing certified data, arguing that adequate price competition had occurred. As a compromise, the Air Force waived the certification requirement but obtained and analyzed pricing data from the contractor.
Contracting officers responsible for the 20 waivers we reviewed used a variety of techniques and approaches—sometimes a combination of several—to determine whether prices were fair and reasonable. Many of the contracting officers conducted a price analysis. Under a price analysis, the contracting officer reviews the proposed price for the contract without a breakdown of supporting costs. In 11 cases, the contracting officers compared contractors’ proposed prices with prices that had been negotiated previously for the same systems with certified data. In some cases, if a significant amount of time had elapsed since the previous price had been established, the contracting officers adjusted the price to account for inflation and quantity changes. In four cases, contracting officers conducted more thorough analyses using the contractors’ cost data, but the contractors were not required to certify the data as accurate, complete, or current. Under a cost analysis, the contracting officer reviews a breakdown of supporting costs in terms of materials, labor, and various overhead accounts. Such a breakdown, for example, could list various prices for materials as well as anticipated hours and rates for labor. In five cases, a variety of other pricing techniques were employed, including the use of regression analyses, learning curves, and parametric estimates. Table 3 summarizes primary techniques employed on each of the 20 waivers we reviewed. The government was at a higher risk of inflated pricing in situations where there was substantial uncertainty about the data used to support analyses and a lower risk in situations where there was less uncertainty. Factors that increased uncertainty included changes in the design of the weapon system since a previous purchase, changes in the processes or equipment used to produce the system, or even changes in the amount being ordered by the government. More indirect factors contributing to uncertainty included mergers and acquisitions, cost-cutting measures, and changes in relationships with subcontractors. All of these factors can significantly affect the costs of a product. The practice of relying on previously certified data that are relatively old also increased risk—principally because it increased uncertainty. In several cases we reviewed, the data relied on were 2 to 3 years old. At times, contracting officers took action to make up for the uncertainties associated with the time elapsed, such as adjusting the price to account for inflation. However, the contracting officers still could not be assured that all other conditions—such as production processes, business processes, and subcontractor relationships—affecting the purchase remained the same. One case we identified, the Navy’s purchase of spare parts for Orion radar systems, was particularly risky—not only because the contracting officer relied on 7-year-old data, but also because the data had never been certified. We also identified factors and practices that helped to minimize risk. Of course, relying on data that were certified fairly recently for systems where conditions had not changed lowered the risk to the government. This occurred in several cases that we reviewed. In other cases, contracting officers employed pricing experts from the Defense Contract Management Agency and the Defense Contract Audit Agency to help them analyze costs, prices, or both.
Such officials lent substantial expertise and experience to the negotiation process by performing audits and reviews of the contractor’s purchasing systems, estimating systems, overhead rates, and operations in general. In some cases, government and contractor personnel worked collaboratively and effectively within integrated product teams to analyze costs and prices. In doing so, they shared and used the same data to come to a consensus on issues affecting contract price. This arrangement also served to minimize the development of adversarial relationships between the contractor and the government. Another factor that could lower risk is the contractor’s having sound estimating and purchasing systems—ones approved by government organizations. Such systems are integral to producing credible proposals. Nearly all of the contractors in the cases that we reviewed had such systems, and in a few cases, allowed government representatives direct access to the data within the systems. Specific examples highlighting risk factors are provided in the figure below. DOD’s guidance on the waiver process is not adequate. First, DOD does not have guidance that would help clarify for buying organizations what an “exceptional” case might actually entail. The Truth in Negotiations Act does not define exceptional cases, and the regulatory guidance is limited. The current guidance states that the head of the contracting activity may consider waiving the requirement if the price can be determined to be fair and reasonable without the submission of cost or pricing data. But the guidance cites only one example of a situation where a waiver may be granted: “if cost or pricing data were furnished on previous production buys and the contracting officer determines such data are sufficient, when combined with updated information.” The trade-offs and complexities involved in making the decision to grant a waiver require more guidance. On the one hand, the certification process greatly lowers the risk of inflated pricing and provides the government with recourse in the event that items are found to be defectively priced. In fact, in fiscal year 2000, Defense Contract Audit Agency audits related to the Truth in Negotiations Act identified potential cost savings of $4.9 billion. On the other hand, the certification process can be costly to both the contractor and the government in terms of time, effort, and money. And there may be times—such as when there is an urgent need for the item or when the same item was purchased very recently using certified data—when the government may be willing to take a greater risk. By developing more detailed guidance, DOD could help buying organizations weigh these trade-offs and avoid using the waiver process as merely a shortcut to getting an item, even an expensive weapon system, more quickly and easily. Second, DOD does not have guidance that would help buying organizations determine what types of data and analyses are acceptable and what kinds of outside assistance, such as DOD contracting and pricing experts, should be obtained. Our analysis showed that there was a wide spectrum in the quality of the data and analyses being used. On one end, there were situations where the analysis focused only on the bottom-line price and not the supporting costs and where the data being relied on were exceptionally old. On the other end were situations where the negotiations were based on data that were very recently certified with little change in quantity.
In addition, in some situations, other risk-mitigating techniques were employed, such as involving contract and pricing experts. Clearly, it is in DOD’s interest to encourage contracting officers to reduce the risk of inflated pricing as much as possible by conducting more rigorous analyses and taking advantage of DOD’s pricing and contracting expertise. Third, we identified several issues, not covered within existing guidance, where there was some confusion about what the law and regulations allowed. For example, contracting officers’ views differed on whether the government can obtain a waiver that covers only a portion of costs associated with a procurement. In purchasing Apache helicopters, for example, the government did, in fact, obtain a partial waiver covering subcontractor costs and recurring labor costs, estimated at $462.6 million of the total $2.3 billion contract. In contrast, in another case, the contracting officer told us that the regulations do not provide for partial waivers. Another question that could be clarified is whether waivers can be applied to planned, but unpriced, contract options in later years. Specifically, under contracts that have options that are not priced or under which the price can be redetermined, it is not clear whether a waiver obtained in the first year of the contract should apply to price negotiations that occur in subsequent years of the contract. This question came up with the Army’s purchase of combat vehicle track from Goodyear Tire and Rubber. In a related situation involving the Army’s purchase of Black Hawk helicopter engines from General Electric, the waiver ultimately covered planned purchases over 5 years under two separate contracting actions. For the majority of its sole-source purchases, DOD minimizes the risk of inflated pricing by requiring its contractors, under the Truth in Negotiations Act, to provide detailed cost or pricing data to support their proposed prices and certify that the data are accurate, complete, and current. But for several billion dollars in contracts, DOD is at a greater risk of inflated pricing because it is waiving the requirement. In some cases, contracting officers still make a considerable effort to reduce risks, such as performing detailed price or cost analyses, involving pricing and contracting experts, and relying on data that were recently certified. By developing guidance to encourage all contracting officers to take such steps and to help buying organizations weigh the decision to grant waivers, DOD could reduce its risk of inflated pricing even further. We recommend that the secretary of defense work with the Office of Federal Procurement Policy to develop guidance to be included in the Federal Acquisition Regulation to minimize the risk of inflated pricing when waivers for certified cost or pricing data are granted to its contractors and subcontractors. This guidance should (1) clarify situations in which an exceptional case waiver may be granted, (2) identify what types of data and analyses are recommended for arriving at a price when waivers are granted, and (3) identify what kinds of outside assistance should be obtained. We also recommend that the secretary develop guidance that clarifies whether the government can obtain a partial waiver and what should be done with contracts that have options that are not priced. We further recommend that the secretary survey buying organizations to assess whether additional specific issues not covered within existing guidance need to be clarified.
In providing written comments on a draft of this report, DOD generally agreed with our findings and recommendations. Its only disagreement was with our recommendation to work with the Office of Federal Procurement Policy to incorporate new guidance in the Federal Acquisition Regulation. DOD specifically acknowledged that the age and usefulness of data and analysis should be a concern for contracting officers. In response to our recommendations, DOD intends to develop additional guidance to the contracting community regarding (1) the approval of a waiver of the requirement for cost or pricing data, (2) the types of analyses that should be conducted when waivers are granted, and (3) outside expertise that should be engaged in conducting these analyses. DOD plans to include the guidance in a memorandum to the military departments and defense agencies and incorporate it into the next update of its Contract Pricing Reference Guides. DOD also agreed with the need to address partial waivers and waivers on unpriced options. In addition, DOD agreed to survey buying organizations to assess whether specific issues not covered within existing guidance need to be clarified. DOD disagreed with our recommendation to place the revised guidance in the Federal Acquisition Regulation because it believed that such a listing would detract from the application of the best professional judgment by contracting officers. We believe that DOD is taking constructive measures to reduce risks that come with the waiver process. In addition, we appreciate that providing additional guidance outside the Federal Acquisition Regulation will provide a more immediate benefit than amending the regulation. However, it is still appropriate for DOD to work with OFPP and the FAR Council to incorporate its guidance into the Federal Acquisition Regulation since the guidance would help clarify the regulation and since the regulation is the definitive source for contract management. We are sending copies of this report to the secretary of defense; the secretaries of the army, navy, and air force; the director, Office of Management and Budget; the administrator, Office of Federal Procurement Policy; and interested congressional committees. We will also make copies available to others on request. If you have any questions about this report or need additional information, please call me at (202) 512-4841. Key contributors to this report are listed in appendix IV. To meet our objectives, we reviewed 20 waivers valued at more than $5 million each in fiscal year 2000 at six buying organizations. In total, the waiver value of these 20 contracts amounted to about $4.4 billion. These 20 waivers involved an array of buying commands, weapon systems, major contractors, and purchasing circumstances. The DOD contract database was used as the basis to identify sole-source, fixed-price weapon system contracts with more than $5 million in expenditures (or contract actions) in fiscal year 2000. The DOD database includes a variety of contracting actions, such as a basic award of a contract as well as modification of a contract. Modifications could include an exercise of an option to a basic contract or funding of the contract for a specific year on a contract funded on an incremental basis. As a result, in some cases with multiyear buys, the pricing of the contract or modification selected for review occurred before fiscal year 2000.
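As a rough illustration of the screening step just described, the sketch below applies the same criteria to a small, made-up table of contract actions; the column names and records are hypothetical and do not reflect the actual structure or contents of DOD's contract database.

```python
# Hypothetical sketch of the screening criteria described above; the schema and
# records are invented for illustration, not the actual DOD contract database.
import pandas as pd

actions = pd.DataFrame([
    {"contract_id": "A001", "fiscal_year": 2000, "competition": "sole source",
     "pricing": "fixed-price", "category": "weapon system", "dollars_millions": 42.0},
    {"contract_id": "A002", "fiscal_year": 2000, "competition": "competitive",
     "pricing": "fixed-price", "category": "weapon system", "dollars_millions": 12.5},
    {"contract_id": "A003", "fiscal_year": 2000, "competition": "sole source",
     "pricing": "cost-reimbursement", "category": "weapon system", "dollars_millions": 30.0},
])

# Screen: sole-source, fixed-price weapon system actions with more than
# $5 million in fiscal year 2000 expenditures.
screened = actions[
    (actions["fiscal_year"] == 2000)
    & (actions["competition"] == "sole source")
    & (actions["pricing"] == "fixed-price")
    & (actions["category"] == "weapon system")
    & (actions["dollars_millions"] > 5.0)
]
print(screened["contract_id"].tolist())  # ['A001']
```

Because the underlying database was judged unreliable, any screen of this kind would, as described above, need to be supplemented by asking the commands to independently review their own records.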
We selected six commands to visit during this review because these commands, based on DOD’s contract database, were the only locations that had individual waivers with more than $5 million in expenditures in fiscal year 2000. These six are the (1) Naval Air Systems Command, (2) Naval Sea Systems Command, (3) Naval Inventory Control Point, (4) Army Tank-Automotive and Armaments Command, (5) Army Aviation and Missile Command, and (6) Aeronautical Systems Center of the Air Force Materiel Command. Because of concerns regarding the reliability of computer-generated data, we also asked the commands to independently review their records to identify any additional waivers meeting these criteria. In total, through the use of the database and independent review process, we identified the 20 contracts with waivers amounting to about $4.4 billion. These six are large buying organizations, and visiting them, in our view, gave us visibility into the use of waivers for large contracts nationally in fiscal year 2000. We reviewed the techniques used to price the contracts, including the data used by contracting officers to determine whether the prices were fair and reasonable. To do so, we reviewed contract files and held discussions with contracting officers at the DOD buying organizations. In addition, we held discussions with representatives of most of the contractors, as well as with DOD officials located at contractor plants, to obtain information on the orders. We conducted our review between March 2001 and April 2002 in accordance with generally accepted government auditing standards. Below is the waiver provision, which is at section 15.403-1(c)(4) of the Federal Acquisition Regulation. The head of the contracting activity (HCA) may, without power of delegation, waive the requirement for submission of cost or pricing data in exceptional cases. The authorization for the waiver and the supporting rationale shall be in writing. The HCA may consider waiving the requirement if the price can be determined to be fair and reasonable without submission of cost or pricing data. For example, if cost or pricing data were furnished on previous production buys and the contracting officer determines such data are sufficient, when combined with updated information, a waiver may be granted. If the HCA has waived the requirement for submission of cost or pricing data, the contractor or higher-tier subcontractor to whom the waiver relates shall be considered as having been required to provide cost or pricing data. Consequently, award of any lower-tier subcontract expected to exceed the cost or pricing data threshold requires the submission of cost or pricing data unless—
1. An exception otherwise applies to the subcontract; or
2. The waiver specifically includes the subcontract and the rationale supporting the waiver for that subcontract.
In addition to those named above, Erin Baker, Cristina Chaplain, Ken Graffam, Martin Lobo, Ralph Roffo, John Van Schaik, and Paul Williams made key contributions to this report.

Although most federal contracts are awarded through competition, the government also buys unique products and services, including sophisticated weapons systems, for which it cannot always rely on competition to get the best prices and values. Instead, it uses a single source for its procurements.
In these cases, contractors and subcontractors provide the government with cost or pricing data supporting their proposed prices and certify that the data submitted are accurate, complete, and current, as required by the Truth in Negotiations Act. This ensures that the government has the data it needs to effectively negotiate with the contractor and avoid paying inflated prices. The government can waive the requirement for certified data in exceptional cases. In these instances, contracting officers use other techniques to arrive at fair and reasonable prices. Using the Department of Defense's (DOD) contract database, GAO found 20 waivers, each valued at more than $5 million, in fiscal year 2000. The total value of these waivers was $4.4 billion. In each case, the contract pricing or waiver documents stated that sufficient information was available to determine the price to be fair and reasonable without the submission of cost or pricing data. There was wide variation in the quality of the data and analyses being used, from very old to very recent data. Despite the range of techniques employed to arrive at a price, DOD does not have guidance that would help buying organizations determine acceptable data and analyses and what kinds of outside assistance, such as contracting and pricing experts, should be obtained.
The Coast Guard’s 2006 budget request continues a trend of increasing budgets that began in fiscal year 2002, as figure 1 shows. If the Coast Guard’s full budget request is granted, its funding will have increased by 45 percent in nominal terms in this 5-year period. A major portion of this growth will have occurred in the acquisition, construction, and improvements account, which grew 81 percent in nominal dollars between actual funding for fiscal year 2002 and requested funding for fiscal year 2006—a $568 million increase. Much of this increase can be attributed to two major acquisition projects—Deepwater and Rescue 21. Deepwater is the Coast Guard’s largest-ever acquisition program. It replaces or modernizes cutters, aircraft, and communications equipment for missions that require mobility, extended presence on scene, and the capability of being deployed overseas. Rescue 21, the Coast Guard’s second-largest procurement in fiscal year 2006, will replace the Coast Guard’s current antiquated coastal communication system. The fiscal year 2006 budget request shows a $570 million increase to $8.1 billion, which is an increase of about 11 percent in its discretionary funding over the enacted budget for fiscal year 2005. The majority of the total is for operating expenditures: $5.5 billion. Capital acquisition accounts for another approximately $1.3 billion, and the remainder is primarily for retired pay. (See app. III for more detail on the Coast Guard’s fiscal year 2006 budget accounts.) Much of the additional $570 million over and above the 2005 budget covers such things as mandatory pay increases for current employees and operating expenses for existing programs—many of which relate to homeland security functions. In addition, more than $50 million of the increase would fund new or enhanced initiatives, all of which relate to homeland security. For example, a portion of this funding would be dedicated to increasing maritime patrol aircraft operations, increasing the Coast Guard’s presence in ports, and providing enhanced security for liquefied natural gas transports. Of the nearly $1.3 billion requested for capital projects, $966 million, or 76 percent, would be dedicated to the Deepwater acquisition, while $101 million would be dedicated to Rescue 21. By comparison with the pattern of budget increases, performance results—indicators that track a program’s progress from year to year—have been more mixed in terms of the number of performance targets met each year. (See app. IV for a detailed discussion of the Coast Guard’s performance measures and results.) The Coast Guard has a key performance target—the goal it aims to achieve each year—for 10 of its 11 programs. For search and rescue, for example, its target is to save the lives of at least 85 percent of mariners in distress. For the 8 programs with performance results through fiscal year 2004, the Coast Guard met or exceeded its targets in 4—a decline from the 2003 results, when the Coast Guard met 6 of these targets (see fig. 2). Such changes can involve relatively small shifts in results. For example, in fiscal year 2004, 96.3 percent of domestic fishermen were found to be in compliance with regulations, compared with 97.1 percent the year before—but the percentage for fiscal year 2004 was below the Coast Guard’s target of 97.0 percent, while the percentage for fiscal year 2003 was above it.
As we have reported in the past, it is difficult to link spending and resource allocations to performance and results, because many other factors also are at work. For example, one of the Coast Guard’s measures—the number of incursions into U.S. fishing grounds by foreign fishing vessels—is affected by oceanic and climatic shifts that can cause fluctuations in the migrating patterns of fish. The number of foreign vessels drawn to U.S. waters could be affected by these fluctuations. In addition, the Coast Guard is still developing its performance measures and targets for its primary homeland security program, so this major reason for funding increases is not yet reflected in the results. These complicating factors suggest caution in attempting to read too much into the fiscal year 2004 drop. Nevertheless, attention to these trends over the long term is important, as a way to help ensure that taxpayer dollars are spent wisely. One of the Coast Guard’s fiscal year 2006 priorities involves implementing a maritime strategy for homeland security. Major portions of this endeavor are heavily influenced by the requirements of the Maritime Transportation Security Act (MTSA) of 2002. We have reviewed the Coast Guard’s response to a number of these requirements, and our findings have implications for several aspects of the budget request. MTSA seeks to establish a comprehensive security regime for the nation’s ports—including planning, personnel security, and careful monitoring of vessels and cargo—and charges the Coast Guard with lead responsibility for implementing this regime. Since MTSA was enacted, the Coast Guard has worked to address vulnerabilities by spurring the development of meaningful security plans for thousands of facilities and vessels in the nation’s ports. The Coast Guard has taken many other actions as well, including establishing area maritime security committees to improve information sharing, increasing port presence through increased security patrols, enhancing intelligence capabilities by establishing field intelligence teams in ports, and beginning to implement an electronic identification system for vessels in the nation’s ports. As we have reported, the Coast Guard deserves credit for taking fast action on so many MTSA security provisions at once, especially with regard to MTSA’s aggressive requirement that regulated facilities and vessels have security plans in place by July 2004. However, the combination of so many reforms and an aggressive schedule posed a daunting challenge, and our review of Coast Guard efforts to meet these requirements showed some areas for improvement where we have made recommendations—most notably the following three from reports issued in 2004. Automatic Identification System (AIS) has potential for cost savings. National development of this system, which identifies vessels traveling to or through U.S. waters, is an important step in the overall effort to increase port safety and security. The Coast Guard faced several key decisions to determine AIS’s technical requirements, waterway coverage, and vessels to be equipped with identification equipment. Estimates to establish such a system, however, were well above funding levels. We thought the goals of the system might be achieved more quickly and the costs to the federal government reduced by pursuing cost-sharing options. Consequently, we recommended that the Coast Guard seek and take advantage of partnerships with organizations willing to develop AIS systems at their own expense. 
Port security assessments could be more useful. The port security assessment program is intended to assess port vulnerabilities and security measures in the nation’s 55 most economically and militarily strategic ports. Our review showed that while some improvements were made, the Coast Guard risked producing a system that was not as useful as it could have been because its approach lacked a defined management strategy, specific cost estimates, and a clear implementation schedule. A major component of the program—a computer-based geographic information system that would provide information to personnel in charge of port security—was developed in such a way that gaps in port security postures could be overlooked. We recommended that the Coast Guard define and document the functional requirements for this computer system and develop a long-term project plan for the system and for the port security assessment program as a whole. The Coast Guard’s strategy for conducting oversight and compliance inspections of facilities and vessels could be improved. Because the program was new, we recommended that the Coast Guard undertake a formal evaluation after the first round of inspections and use the results to improve the program. The evaluation was to include the adequacy of security inspection staffing, training, and guidance. To improve the program strategy, we also recommended that the Coast Guard clearly define the minimum qualifications for inspectors and link these qualifications to a certification process, as well as consider unscheduled and unannounced inspections, and covert testing as a way to ensure that the security environment at the nation’s seaports met the nation’s expectations. The Coast Guard agreed with many of our recommendations and has made progress in implementing some of them, but the remaining issues have implications for the availability of funds or the effectiveness with which available funds are spent. AIS. Coast Guard officials have taken a number of steps to encourage stakeholder participation, although they have not formally sought AIS partners to date. For example, the Coast Guard has a contract with PETROCOMM (a provider of communications services in the Gulf of Mexico) to provide locations, maintenance, and data services for several AIS base stations on offshore platforms in the Gulf of Mexico. The Coast Guard believes that it is too early to consider partnerships beyond these initial efforts, because the Coast Guard is still developing operational requirements for AIS systems and vetting these requirements with stakeholders and Coast Guard field units. However, Coast Guard officials also reported that in their discussions with private parties, these parties have shown little interest in shouldering any of the financial burden associated with achieving AIS capability. The Coast Guard estimates that the installation of AIS nationwide could cost nearly $200 million. The fiscal year 2006 budget requests $29.1 million for this project, in addition to the $48 million previously enacted ($24 million per year in fiscal years 2004 and 2005)—leaving a substantial sum, roughly $120 million, still to be financed. Port security assessments. Coast Guard officials said they are working with the Department of Homeland Security to determine the focus and scope of the fiscal year 2006 port assessments and are taking into consideration the progress being made by ports to identify shortcomings and improve security.
However, the Coast Guard continues to move forward with the overall program, as well as the geographic information system, without a plan that clearly indicates how the program and its information component will be managed, what they are expected to cost, or when the various work steps should be completed. The lack of a plan, in our view, increases the risk that the program will be unsuccessful. In response to our recommendation, the Coast Guard has indicated that it will develop a long-term plan for the port security assessment program, but it did not indicate when this effort will begin or when it expects a plan to be completed. Strategy for ensuring facility and vessel compliance. The Coast Guard has taken a number of actions but has not focused its resources on doing unscheduled or unannounced spot checks to verify whether domestic vessels are complying with requirements. We continue to believe that without unscheduled inspections, vessel owners and operators can mask security problems by preparing for the annually announced inspections in ways that do not represent the normal course of business. Unannounced inspections are a way of ensuring that planning requirements translate into security-conscious behavior. A second Coast Guard priority is to enhance mission performance. Many Coast Guard personnel and assets are involved in performing multiple missions. For example, Coast Guard cutters and crews may be involved with fisheries patrols, distress calls, oil spills, stopping and boarding vessels of interest, and many other tasks. In fiscal years 2005 and 2006, the Coast Guard plans to continue developing several initiatives that agency officials believe will yield increased performance across multiple Coast Guard missions over time. Three initiatives, in particular, deserve mention. These are a new coastal communication system, called Rescue 21; a new field command structure, called Sectors; and efforts to improve readiness at multimission stations that conduct search and rescue as well as other missions. All three efforts carry some risk and will merit close attention. Rescue 21. The Coast Guard has resolved some initial development problems that delayed the implementation of this new coastal command and control communication system and is now poised to move forward again, with a fiscal year 2006 budget request of $101 million. According to Coast Guard officials, Rescue 21 can improve coastal command and control communications and interoperability with other agencies, helping to improve not only search and rescue efforts but also other missions such as illegal drug and migrant interdiction. The program is composed of very-high-frequency FM radios, communication towers, and communication centers. Rescue 21 was originally scheduled to be ready for operational testing by September 2003, but this was delayed because of problems in developing system software. Operational testing of this software has been completed. The program is now set—once additional Coast Guard and DHS approvals are obtained—to move into its next phase of production, and the Coast Guard anticipates that the program will be operational by the end of 2007. According to the Coast Guard, one risk that remains in moving ahead with Rescue 21 involves locating sites for about 330 towers that must be built.
The Coast Guard must locate these towers in accordance with the requirements of the National Environmental Policy Act of 1969 (NEPA), as amended, which requires federal agencies to prepare an environmental impact statement for major federal actions that may significantly affect the quality of the human environment. Towers can have environmental effects; for example, when they are built in migratory bird locations, birds can fly into the towers or their supporting wires. Additionally, for effective communications, each tower must be placed so that its coverage meets the next tower’s coverage without interference. Thus, if one tower must be moved for environmental reasons, neighboring towers may also have to be moved—leading to a potential for schedule slippage, if additional sites must be identified and developed. The NEPA process represents the Rescue 21 program’s greatest risk, according to a program official. Sectors. This is a new field command structure that will unify previously disparate Coast Guard units such as air stations, groups, and marine safety offices into integrated commands. This effort is budget neutral in the fiscal year 2006 request, but it bears attention for operational effectiveness reasons. The Coast Guard is making this change to improve mission performance through better coordination of Coast Guard command authority and resources such as boats and aircraft. Under the previous field structure, for example, a marine safety officer who had the authority to inspect a vessel at sea or needed an aerial view of an oil spill as part of an investigation would often have to coordinate a request for a boat or aircraft through a district office, which would obtain the resource from a group or air station. Under the Sector realignment, these operational resources will be available under the same commanding officer. To date, 8 sectors have been established, with approximately 28 to be established by the end of 2006. According to Coast Guard personnel, the realignment is particularly important for meeting new homeland security responsibilities, and will facilitate the Coast Guard’s ability to manage incidents in close coordination with other federal, state, and local agencies. While the establishment of Sectors appears to be an important step that could positively affect the Coast Guard’s mission performance, the Coast Guard is likely to face a number of implementation challenges that it will need to overcome to help ensure success. First, Sectors must bridge a long-standing cultural divide within the agency. This divide has separated those personnel who typically operate aircraft and boats from those personnel who typically enforce marine safety, security, and environmental protection laws. Second, it has implications for alignment above the field operations level as well. Realignment is likely to be needed at the district office and headquarters levels to help ensure that management misalignments among these levels do not pull the field reorganization off track. Third, it will likely require training, such as taking steps to ensure that senior commanders are aware of key issues critical for decision making across the various Coast Guard mission areas. Coast Guard officials acknowledge these challenges but believe that the culture challenge will be overcome in time as a result of increased familiarity and training.
They also acknowledged that further realignments at the district and headquarters levels are likely to be needed over time and that efforts are under way to implement training changes. Multimission stations. Another area where the Coast Guard has an opportunity to improve mission performance involves its 188 multimission stations. These stations, located along the nation’s coastlines and interior waterways, have been the mainstay of one of the Coast Guard’s oldest missions—finding and rescuing mariners in danger. In 2001, after a series of search and rescue mishaps, the Coast Guard began efforts to improve station readiness, which had been declining for more than 20 years. This included reconfiguring operations and bolstering resources in four areas—staffing, training, boats, and personal protection equipment used by personnel during operations, such as life vests and survival suits. This effort was complicated by the new and increased homeland security responsibilities that stations assumed after the terrorist attacks of September 11. Today, 4 years after efforts began to improve station readiness, there have been operational improvements in staffing, training, boats, and personal protection equipment, as well as increases in resource levels at stations. However, even though readiness concerns have been mitigated to some extent, the stations have still been unable to meet standards and goals relating to staffing, boats, and equipment, which indicates that the stations are still significantly short of desired readiness levels in some areas. For example, even though station staffing has increased 25 percent since 2001, station personnel continue to work significantly longer hours than are allowed under the Coast Guard’s work standards. To address continued readiness concerns, actions are needed in two areas, and the Coast Guard says that it has such efforts underway. First, the Coast Guard does not currently have an adequate plan in place for achieving and assessing readiness in its new post-September 11 operating environment. The Boat Forces Strategic Plan—the Coast Guard’s strategy for maintaining and improving essential multimission station capabilities over the next 10 years—is the agency’s main tool for measuring progress in meeting station readiness requirements, but it has not been updated to reflect increased homeland security responsibilities. However, Coast Guard officials recently reported that they will update the plan to reflect its homeland security mission and identify actions taken and results achieved. Second, the Coast Guard is operating under interim homeland security guidelines, which establish recommended security activities for field units according to each maritime security threat level. Coast Guard officials said they would incorporate measurable station readiness goals into the plan. The Coast Guard plans to complete these efforts in the next 6 to 9 months. The third Coast Guard priority involves the single largest and most complex acquisition program in the agency’s history—a project designed to improve the mission performance of the range of cutters and aircraft that currently conduct the agency’s offshore missions. We have previously reported on the risky approach for this acquisition, and although progress has been made to address our past recommendations, the risks still remain substantial.
As it undergoes a transformation to these new or upgraded assets, the Coast Guard is also faced with sustaining its legacy assets to ensure that they can continue to perform the Coast Guard’s missions until new or upgraded assets are in place. Revisions to the Coast Guard’s mission requirements for Deepwater, slippages in the acquisition schedule, and limited information about the condition of and likely costs for maintaining the legacy assets all highlight the need for continued attention to this area. In 1996, the Coast Guard initiated a major recapitalization effort—known as the Integrated Deepwater System—to replace or modernize the agency’s deteriorating aircraft and cutters. These legacy assets are used for missions that require mobility, extended presence on scene, and the capability of overseas deployment. Examples of such missions include interdicting illegal drug shipments or attempted landings by illegal aliens, rescuing mariners in difficulty at sea, protecting important fishing grounds, and responding to marine pollution. The Deepwater fleet consists of 187 fixed-wing aircraft and helicopters, and 88 cutters of varying lengths. As currently designed, the Deepwater program replaces some assets (such as deteriorating cutters) with new ones while upgrading other assets (such as some types of helicopters) so that all of the assets can meet new performance requirements. In an effort to maintain its existing assets until the Deepwater assets are in place, the Coast Guard is conducting extensive maintenance work. Notwithstanding extensive overhauls and other upgrades, a number of the cutters are nearing the end of their estimated service lives. Similarly, while a number of the deepwater legacy aircraft have received upgrades in engines, operating systems, and radar and sensor equipment since they were originally built, they too have limitations in their operating capabilities. For example, the surface search radar system on the HC-130 long-range surveillance aircraft is subject to frequent failures and is quickly becoming unsupportable. Flight crews use this radar to search for vessels in trouble and to monitor ships for illegal activity, such as transporting illicit drugs or illegal immigrants. When the radar fails, flight crews are reduced to looking out the window for targets, greatly reducing mission efficiency and effectiveness. A flight crew in Kodiak, Alaska, described this situation as being “like trying to locate a boat looking through a straw.” We have been reviewing the condition of Coast Guard Deepwater assets for a number of years, and our work has shown that a need exists for substantial replacement or upgrading. We have additional work underway this year regarding the status of Deepwater assets, and will be testifying on this work next month. While we agree that the case for replacing and upgrading the Coast Guard’s legacy assets is compelling, the contracting strategy the agency is using to conduct this acquisition carries a number of inherent risks. This strategy relies on a contractor—called the system integrator—to identify and deliver the assets needed to meet a set of mission requirements the Coast Guard has specified, using tiers of subcontractors to design and build the actual assets. 
The resulting program is designed to provide an improved, integrated system of aircraft, cutters, and unmanned aerial vehicles to be linked effectively through systems that provide command, control, communications, computers, intelligence, surveillance, and reconnaissance, as well as supporting logistics. However, from the outset, we have expressed concern about the risks involved with this approach because of its heavy reliance on a steady funding stream over several decades and the potential lack of competition to keep contracting costs in line. These risks have had tangible effects, including rising costs and slipped schedules. Early on in our reviews of the program, we expressed concern that the Coast Guard risked schedule slippages and cost escalation if project funding fell short of planned funding levels. These concerns materialized in the first 2 years of the program, when appropriated funding was $125 million less than planned. And, although funding in the fourth year of the program (fiscal year 2005) exceeded the Coast Guard’s request by about $46 million, the early shortfalls, according to the Coast Guard, resulted in schedule slippage and led to increases in the total projected costs for the program. As of spring 2004, it was estimated that an additional $2.2 billion (in nominal dollars) would be needed to return the program to its original implementation schedule. In addition, there is clear evidence that the asset delivery schedule has also slipped. For example, under Deepwater’s original schedule, the first major cutter, the National Security Cutter, was due to be delivered in 2006; the current schedule indicates that it will now not be delivered until 2007. Similarly, the first nine Maritime Patrol aircraft were due to be delivered in 2005; now only two will be delivered in 2007. When we reviewed the Deepwater program again last year, we found that, on many fronts, the Coast Guard was not doing enough to mitigate these risks. For example, we found that well into the contract’s second year, key components needed to manage the program and oversee the system integrator’s performance had not been effectively implemented. We also reported that the degree to which the program was on track could not be determined, because the Coast Guard was not updating its schedule. We detailed needed improvements in a number of areas, shown in table 1. The Coast Guard agreed with nearly all of our recommendations and has since made progress in implementing some of them. In most cases, however, while actions are under way to address these concerns, management challenges remain that may take some time to fully address. Here are some examples. Strengthening integrated product teams. These teams, the Coast Guard’s primary tool for managing the program and overseeing the contractor, consist of members from subcontractors and the Coast Guard. In 2004, we found these teams often lacked training and in several cases lacked charters defining clearly what they were to do. Most now have charters setting forth the team’s purpose, authority, and performance goals, among other things, and more training is now being provided. However, roles and responsibilities in some teams continue to be unclear, and about one-third of team members have yet to receive entry-level training. Holding the systems integrator accountable for competition. The Coast Guard has taken a number of steps to improve cost control through competition.
For example, to improve competition among second-tier suppliers, Coast Guard officials said they will incorporate an assessment of the steps the system integrator is taking to foster competition at the major subcontractor level as one of the factors they take into account in deciding whether to award the first contract option. Besides the risks noted in table 1, the program also bears careful watching because it is still being affected in midcourse by the Coast Guard’s additional homeland security responsibilities. Planning for the Deepwater program had been set in motion before the terrorist attacks of September 11, and while the initial program included consideration of homeland security responsibilities, these responsibilities have grown considerably in the interim. In March 2004, the Coast Guard developed a revised mission needs statement (MNS) that indicated that current specifications for Deepwater assets lacked some functional capabilities needed to meet mission requirements. The MNS was approved by the Department of Homeland Security (DHS) in January 2005. According to the Coast Guard, some of the functional capabilities now deemed to be required include the following: Rotary wing airborne use of force and vertical insertion/vertical delivery capabilities; Greater speed, a larger flight deck, and automated defensive and weapons systems for the National Security Cutter and Offshore Patrol Cutter classes; A common operating picture (COP) for the entire Coast Guard (and the maritime portion of a unified DHS COP), an interoperable network to improve performance in all mission areas, and a Secure Compartmentalized Information Facility for improved intelligence capabilities; and Chemical, biological, and radiological defense and decontamination capability. While we have not conducted an analysis of the likely cost and schedule impact of the revised MNS requirements, they will undoubtedly affect both. The Coast Guard’s own estimates identified in the March 2004 MNS show an increased acquisition cost for the original 20-year acquisition of about $1 billion. According to the Coast Guard, the revised MNS requirements and associated cost and schedule information have been forwarded to DHS and the Office of Management and Budget for approval. As of this time, the implementation plan has not been approved. These issues point to the need for continued and careful monitoring of the Deepwater acquisition program both internally and externally. One positive development in this regard involves the Coast Guard’s efforts to update the Deepwater acquisition schedule—an action that we suggested in our June 2004 report. The original 2002 schedule had milestone dates showing when work on an asset would begin and when delivery would be expected, as well as the integrated schedules of critical linkages between assets, but we found that the Coast Guard was not maintaining an updated and integrated version of the schedule. As a result, the Coast Guard could not demonstrate whether individual components and assets were being integrated and delivered on schedule and in critical sequence. While as late as October 2004 Deepwater performance monitors likewise expressed concern that the Coast Guard lacked adequate visibility into the project’s status, the Coast Guard has since taken steps to bring the schedule up to date and has indicated that it plans to continue to update the schedule—monthly for internal management purposes, and semi-annually to support its budget planning efforts.
We think this is an important step toward improving the Coast Guard’s management of the program because it provides a more tangible picture of progress, as well as a baseline for holding contractors accountable. As we have said on numerous occasions, we will continue to work closely with the Coast Guard to monitor how risks are mitigated. Although the Coast Guard expects to upgrade a number of its legacy assets for use in the Deepwater program, a substantial portion of its legacy assets—particularly cutters—are scheduled to be replaced. Until their replacements are available, however, many of the cutters will need to be kept in service so that the Coast Guard can continue to perform its missions. Our visits to field locations and conversations with Coast Guard operations and maintenance personnel clearly indicated that the maintenance of these assets is already taking increasing amounts of time and effort. For example, air station maintenance personnel indicated that aircraft are being subjected to additional corrosion-related problems. To address these problems, air station maintenance personnel at the locations we visited said they have instituted additional measures, such as washing and applying fluid film to the aircraft prior to each deployment. Personnel working on cutters gave similar accounts. For example, officers of the 270-foot cutter Northland told us that because of dated equipment and the deteriorating condition of the vessel’s piping and other subsystems, crewmembers have to spend increasing amounts of time and resources while in port to prepare the cutter for its next deployment. While we could not verify these increases in time and resources because of limitations in the Coast Guard’s data, the need for increasing amounts of maintenance was a message we consistently heard from operations and maintenance personnel. The Coast Guard is aware that keeping these legacy assets mission capable will likely require an additional infusion of funds for some assets that are scheduled to be replaced. Since 2002, the Coast Guard has annually created a compendium that consolidates information about projects needed to maintain and sustain legacy assets. The Coast Guard uses this compendium as a tool for setting priorities and planning budgets. The most recent compendium (for fiscal year 2006) lists more than $1 billion worth of upgrades to the Deepwater legacy assets. Of these, the planned upgrades that have been approved and have received initial funding account for an estimated $856 million that the Coast Guard anticipates needing to complete those projects. In addition, the compendium lists another estimated $409 million in sustainment projects for the other legacy assets for which funding has not been requested. If the condition of these assets continues to deteriorate or replacement assets are further delayed, this additional funding will likely be needed. We are not questioning the Coast Guard’s decisions about which projects within the compendium should receive priority. We believe it is important, however, for the Coast Guard to make Congress aware of the magnitude of the potential funding needs for sustaining the assets that are eventually scheduled for replacement. Given the schedule slippages we have seen and the continued possibility that Deepwater requirements may yet change, this information will be important for developing a thoughtful and accurate estimate of future maintenance budget needs.
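A rough arithmetic check of the compendium figures above (our illustration, which assumes the approved and not-yet-requested categories together make up the compendium total):

$856 million (approved and initially funded) + $409 million (funding not yet requested) ≈ $1.27 billion

This sum is consistent with the "more than $1 billion" in legacy asset upgrades that the fiscal year 2006 compendium identifies.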
One planning effort under way within the Coast Guard illustrates the kinds of considerations that may be needed with regard to these assets. This effort is being undertaken by the Coast Guard’s Pacific Area Command, which, to accomplish its missions, relies on 378-foot cutters—the first asset scheduled to be replaced under the Deepwater program. Under the original Deepwater proposal, the final 378-foot cutter was to be decommissioned in 2013, but by 2005, that date had slipped to 2016. To help keep these cutters running through 2016, Pacific Area Command officials are considering such strategies as designating some of the 378-foot cutters as capable of performing only certain missions, rather than attempting to keep them all fully capable of performing all missions. Even so, the Pacific Area Commander told us that in order for the 378-foot cutters to be properly maintained until their replacements become operational, the Coast Guard will have to provide more focused funding. So far, the Coast Guard’s budget plans and requests do not address this potential need. Over the past several years, the Coast Guard has been in the vortex of the nation’s response to homeland security concerns. It has been charged with many new responsibilities related to ports and to marine security in general, and from the outset, we have often used the word “daunting” to describe the resulting tasks. In addition, expectations continue that the Coast Guard will be able to rescue those in distress, protect the nation’s fisheries, keep vital marine highways operating efficiently, and respond effectively to marine accidents and natural disasters. Congress has acknowledged that these added responsibilities carry a price tag and has, through the appropriations process, provided substantially more money for the job. As these efforts begin to move into a more mature phase, allowing lessons already learned to better inform judgments about the future, it is increasingly important to explore ways to enhance mission effectiveness while stretching taxpayer dollars as far as possible. This is particularly true in the current budget climate. While we have found the Coast Guard to be a willing participant in such efforts, the agency’s focus on achieving all of its missions can make it difficult to carry through with the many intermediate steps that may be needed to keep management problems to a minimum. We think the issues we have highlighted are potential areas for ongoing congressional attention, and we will continue to work with the Coast Guard on them. Madam Chair and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For information about this testimony, please contact Margaret Wrightson, Director, Homeland Security and Justice Issues, at (415) 904-2200, or wrightsonm@gao.gov. Other individuals making key contributions to this testimony include Joel Aldape, Jonathan Bachman, Steve Calvo, Christopher Conrad, Adam Couvillion, Michele Fejfar, Barbara Guffy, Geoffrey Hamilton, Christopher Hatscher, Samuel Hinojosa, Dawn Hoff, Julie Leetch, Dawn Locke, Michele Mackin, Sara Margraf, Stan Stenersen, and Randall Williamson.
To provide a strategic overview of the President’s fiscal year 2006 budget request for the Coast Guard, focusing on several areas of particular congressional interest, we reviewed the Coast Guard’s Congressional-stage budget and other financial documents provided by the Coast Guard. We also interviewed Coast Guard headquarters officials familiar with the Coast Guard’s budget and acquisition processes. To determine the status of the Coast Guard’s performance measures and results, we reviewed Coast Guard performance data and performance documentation. We also obtained confirmation from knowledgeable Coast Guard officials that the performance data sources and the systems that produced them have not changed since our 2003 data reliability analysis. We determined that Coast Guard performance measures are sufficiently reliable for the purposes of this testimony. To determine the status of key outstanding Coast Guard recommendations, we reviewed past GAO reports and testimonies related to the Coast Guard and identified the GAO recommendations contained in those reports. In addition, we consulted with GAO staff who performed the work that resulted in the recommendations and interviewed Coast Guard headquarters officials regarding the status of the recommendations—including any progress made to implement them. We also obtained and reviewed relevant documents from the Coast Guard. To assess the Coast Guard’s recapitalization efforts, we analyzed data and condition measures used by the Coast Guard for determining deepwater legacy assets’ condition, reviewed Coast Guard actions to maintain and upgrade the legacy assets, and assessed the improvements the Coast Guard is making in its management of the Deepwater acquisition. We will be following up this testimony with a written report that will contain detailed information related to the condition of deepwater legacy assets, and the actions the Coast Guard is taking to maintain and upgrade them. As part of the follow-on report we will also provide more detailed information on the Coast Guard’s management of the Deepwater program. This testimony is based on published GAO reports and briefings, as well as additional audit work that was conducted in accordance with generally accepted government auditing standards. We conducted our work for this testimony between February and March 2005. Coast Guard: Station Readiness Improving, but Resource Challenges and Management Concerns Remain (GAO-05-161, January 31, 2005). Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program (GAO-04-1062, September 30, 2004). Maritime Security: Partnering Could Reduce Federal Costs and Facilitate Implementation of Automatic Vessel Identification System (GAO-04-868, July 23, 2004). Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security (GAO-04-838, June 30, 2004). Coast Guard: Deepwater Program Acquisition Schedule Update Needed (GAO-04-695, June 14, 2004). Coast Guard: Station Spending Requirements Met, but Better Processes Needed to Track Designated Funds (GAO-04-704, May 28, 2004). Coast Guard: Key Management and Budget Challenges for Fiscal Year 2005 and Beyond (GAO-04-636T, April 7, 2004). Coast Guard: Relationship between Resources Used and Results Achieved Needs to Be Clearer (GAO-04-432, March 22, 2004). Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight (GAO-04- 380, March 9, 2004). 
Coast Guard: New Communication System to Support Search and Rescue Faces Challenges (GAO-03-1111, September 30, 2003). Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain (GAO-03-1155T, September 9, 2003). Coast Guard: Actions Needed to Mitigate Deepwater Project Risks (GAO-01-659T, May 3, 2001). Coast Guard: Progress Being Made on Deepwater Project, but Risks Remain (GAO-01-564, May 2, 2001). Coast Guard’s Acquisition Management: Deepwater Project’s Justification and Affordability Need to Be Addressed More Thoroughly (GAO/RCED-99-6, October 26, 1998). In addition to operating expenses and acquisition, construction, and improvements, the remaining Coast Guard budget accounts include areas such as environmental compliance and restoration, reserve training, and oil spill recovery. (See table 2 below.) Table 3 shows a detailed list of performance results for the eight programs for which the Coast Guard has fiscal year 2001 through 2004 data. Shaded entries in the table indicate those years in which the Coast Guard reported meeting its target; unshaded entries indicate those years in which the Coast Guard reported not meeting its target. The table also shows that there are three programs for which performance results are pending and data are not available across the four-year period. Each program is discussed in more detail below. Foreign fish enforcement. The performance results for foreign fish enforcement, which indicate the number of foreign vessel incursions into the United States Exclusive Economic Zone (EEZ), have fluctuated between 152 and 250 incursions over the last 4 years. Such fluctuations can be due to oceanic and climatic shifts that affect the migratory patterns of important fish stocks, as well as to limited Coast Guard assets, which the Coast Guard believes are unable to cover the entire 3.4 million square mile EEZ. We reported previously that performance measures for foreign fish may not reflect agency efforts. Because EEZ encroachments can be affected by oceanic and climatic shifts that can cause significant fluctuations in the migratory patterns of fish, they could increase (or decrease) as fishermen follow their intended catch across EEZ boundaries. According to Coast Guard officials, this type of migratory factor can influence the number of encroachments in a given year. Consequently, the Coast Guard has added two additional measures for foreign fish enforcement that focus on interception and interdiction. These two submeasures are not reflected in the Coast Guard’s foreign fish performance goal. However, the Coast Guard believes that they help it to better distinguish between those incursions that it is able to identify (for example, with a C-130 it can identify a foreign fishing vessel incursion) and those incursions that it can actually respond to (for example, a 378-foot cutter can interdict a stray foreign fishing vessel). Living marine resources. The performance measure for living marine resources—defined as the percentage of fishermen complying with federal regulations—varied from 96.3 percent to 98.6 percent between fiscal years 2001 and 2004. According to Coast Guard performance documents, the agency missed the fiscal year 2004 target because of poor economic conditions in the U.S. shrimp fisheries, which appear to have made U.S. fishermen in the Southeast region more willing to violate regulations in order to maintain operations.
However, the Coast Guard reported that while the number of fishermen in compliance decreased slightly, its total number of fishery boardings (4,560) was the highest number of boardings since 2001. Ice operations. To meet this performance target, the Coast Guard’s ice operations program must keep winter waterway closures under 8 days per year for severe winters and under 2 days per year for average winters. In fiscal year 2004, the Coast Guard reports missing its target for an average winter with 4 days of waterway closures instead of 2 or less. The Coast Guard reports that it extended the ice-breaking season for an additional 10 days and because of worsened winter conditions within that period, its icebreaking assets were challenged to provide services in nine critical waterways of the Great Lakes. In fiscal year 2006, the Coast Guard plans to complete the construction of the Great Lakes Icebreaker, which will significantly improve icebreaking on the Great Lakes. Defense readiness. Defense readiness, as measured by the percentage of time units that meet combat readiness status at a C-2 level, improved from 67 percent to 78 percent during fiscal years 2001 to 2003 but decreased to 76 percent in fiscal year 2004 due to a personnel shortage according to the Coast Guard. The Coast Guard identified its need to supply personnel for the war in Iraq as the main reason for failing to meet this performance target. To support fiscal year 2004 efforts in Iraq, the Coast Guard provided personnel for six patrol boats, one patrol boat support unit, one port security unit, four law enforcement detachments, as well as two ships and cutters. Undocumented migrant interdiction. The Coast Guard reported that it achieved its fiscal year 2004 performance goal of interdicting or deterring 87 percent of undocumented aliens attempting to enter the United States. The undocumented migrant interdiction performance measure assesses the percentage of migrants interdicted or deterred on maritime routes. In 2004, the Coast Guard identified 4,761 successful arrivals out of an estimated threat of 37,000 migrants. In fiscal year 2003, the Coast Guard missed this target, interdicting or deterring 85.3 percent of migrants. Since 2001, the greatest percentage of migrants deterred or interdicted—88.3 percent—was achieved in fiscal year 2002. Search and rescue. The Coast Guard’s performance in this area, as measured by the percentage of mariners’ lives saved from imminent danger, was 86.8 percent, above the goal of 85 percent for fiscal year 2004. The Coast Guard identified continuing improvements in response resources and improvements made in commercial vessel and recreational boating safety as the main reasons for meeting the target. Marine environmental protection. The Coast Guard measures the marine environmental protection target as the 5-year average of oil and chemical spills greater than 100 gallons per 100 million tons shipped. Since fiscal year 2001, the reported average number of oil and chemical spills has dropped from 40.3 to 22.1 in fiscal year 2004. The Coast Guard identified its prevention, preparedness, and response programs—including industry partnerships and incentive programs—as reasons for the drop. Aids to navigation. The aids to navigation program performance measure—which assesses the total number of collisions, allisions, and groundings—improved to 1,876 in fiscal year 2004, more than a 6 percent improvement over fiscal year 2003’s total of 2,000, and below the target of 1,923. 
(Since the aim is to prevent these accidents, a lower number than the target represents attaining the goal.) The number has varied from year to year, but has remained below or at the target in each of the 4 years. The Coast Guard attributes this success to a multifaceted system of prevention activities, including radio aids to navigation, communications, vessel traffic services, dredging, charting, regulations, and licensing. Marine safety. The marine safety measure, a 5-year average of passenger and maritime deaths and injuries, decreased from 1,651 in fiscal year 2001 to 1,307 in fiscal year 2003. The Coast Guard is currently waiting for the states to supply recreational boating numbers in order to release its total performance result for calendar year 2004. Coast Guard officials identified ongoing inspection, investigation, prevention, and response programs, as well as work with industry, states, and volunteers to promote safe boating operations, as factors in reducing the number of deaths. Illegal drug interdiction. The illegal drug interdiction performance measure—the rate at which the Coast Guard seizes cocaine—is currently being modified; the Coast Guard expects the revised performance results will be available in April 2005. Ports, waterways, and coastal security. The Coast Guard is currently developing a performance measure for ports, waterways, and coastal security. The Coast Guard's budget has steadily increased in recent years, reflecting the agency's need to address heightened homeland security responsibilities while also addressing traditional programs such as rescuing mariners in distress and protecting important fishing grounds. The fiscal year 2006 budget request, which totals $8.1 billion, reflects an increase of $570 million over the previous year. GAO has conducted reviews of many of the Coast Guard's programs in recent years, and this testimony synthesizes the results of these reviews as they pertain to three priority areas in the Coast Guard's budget: (1) implementing a maritime strategy for homeland security, (2) enhancing performance across missions, and (3) recapitalizing the Coast Guard, especially the Deepwater program—an acquisition that involves replacing or upgrading cutters and aircraft that are capable of performing missions far out at sea. GAO's observations are aimed at highlighting potential areas for ongoing congressional attention. The Maritime Transportation Security Act of 2002 charged the Coast Guard with many maritime homeland security responsibilities, such as assessing port vulnerabilities and ensuring that vessels and port facilities have adequate security plans, and the Coast Guard has worked hard to meet these requirements. GAO's reviews of these efforts have disclosed some areas for attention as well, such as developing ways to ensure that security plans are carried out with vigilance. The Coast Guard has taken steps to deal with some of these areas, but opportunities for improvement remain. The Coast Guard has three efforts under way that hold promise for enhancing mission performance but also merit ongoing attention.
One is a new coastal communication system. The fiscal year 2006 budget request includes $101 million to move the system forward. A successful system would help almost all Coast Guard missions, but to develop it the Coast Guard must build more than 300 towers along the nation's coasts, some of them in environmentally sensitive areas. The second effort involves restructuring the Coast Guard's field units—tying resources and command authority closer together. This effort represents a major organizational change, and as such, it may be challenging to implement successfully. The third effort, enhancing readiness at the Coast Guard's stations for search and rescue and other missions, remains a work in progress. The Deepwater program, which would receive $966 million under the budget request, appears to merit the most ongoing attention. GAO reviews of this program have shown that the Coast Guard clearly needs new or upgraded assets, but the Coast Guard's contracting approach carries a number of inherent risks that, left unaddressed, could lead to spiraling costs and slipped schedules. The Coast Guard is taking some action in this regard, but GAO continues to regard this approach as carrying substantial risk. Some cost growth and schedule slippage have already occurred.
Civil aviation in the United States can be generally divided into two broad categories—general aviation and commercial aviation. General aviation comprises all aviation activities other than military and commercial airlines. All civilian students are trained in the general aviation sector until they are hired as airline pilots. Commercial aviation generally refers to businesses that carry passengers or cargo for hire or compensation. To operate as a commercial airline, a business must have an airline operating certificate issued by FAA, based on federal aviation regulations, which is determined by the type of commercial service being provided. Airlines that provide scheduled commercial service are often grouped into two categories. Mainline airlines are the traditional large airlines that provide domestic and international passenger service on larger aircraft such as American Airlines or Delta Airlines. Regional airlines, such as Mesa and Piedmont Airlines, also provide domestic and international passenger service, generally using aircraft with fewer than 90 seats and serving smaller airports. The international service that regional airlines provide is confined to border markets in Canada, Mexico, and the Caribbean. More than 13,000 regional airline flights operate daily, which represents more than half of the number of U.S. domestic flights. As the federal agency responsible for regulating the safety of civil aviation in the United States, FAA is responsible for the administration of pilot certification (licensing) and conducting safety oversight of pilot training. Regulations for pilot certification and training are found in three different parts of the Federal Aviation Regulations—Parts 61, 141, and 142. All pilots are subject to a series of certification requirements established by FAA, but the requirements vary depending on the type of training environment. Part 61 recognizes six basic types of pilot certification: student, sport, recreational, private, commercial, and airline transport. Part 61 also establishes the core training requirements for each pilot certification, which describes the eligibility requirements, aeronautical knowledge and flight proficiency standards, and the required flight hours (see table 1). Pilot training can be provided to students by flight instructors under Part 61. Part 141 outlines the specified personnel, aircraft, facilities, curriculum, and other operating requirements that approved pilot training organizations (schools) must meet in order to hold an operating certificate from FAA. Part 142 outlines specific requirements for training centers that primarily relate to the advanced training provided to pilots by employers, such as airlines. Our report focuses on the initial pilot training that students are provided until they are hired as airline pilots; thus, advanced training for pilots is not within the scope of our study. To obtain a private, commercial, or airline transport pilot certificate from FAA to perform various aviation activities, individuals typically have to successfully complete pilot training and pass the following two FAA tests for each pilot certificate and rating obtained: A knowledge test assesses applicants’ understanding of the aeronautical knowledge areas required for a specific certificate or rating and can be administered in written form or by a computer. A practical test consists of a flight test and an oral examination. 
The flight test assesses applicants’ knowledge of the areas of operations of an aircraft and the ability to demonstrate the maneuvers in an aircraft while in flight. The oral examination is conducted by having an applicant respond to random questions related to aviation knowledge and aircraft operations before, during, and after the flight test, and typically lasts between 1 and 2 hours. To become a certified commercial pilot, which is currently the minimum requirement for being hired by an airline as a first officer, individuals also must undergo several steps of pilot training and certification in accordance with FAA regulations. Once cleared by the medical examination, students obtain a medical certificate and a student pilot certificate from FAA. Figure 1 shows the typical progression of training and certifications required to become an airline pilot. Figure 2 shows examples of the progression from single-engine trainer, to multi-engine (i.e., jet) trainer—used by some pilot schools to provide students with the multi-engine rating—to the much larger, faster jet used by regional airlines. Once commercial pilots complete the process of initial training, they are qualified to apply for a first officer pilot position at an airline. Entry-level positions are typically at regional, and not mainline, airlines; mainline airlines typically draw from regional airlines for their pilots. If hired, pilots must complete the airline’s new hire training, which consists of indoctrination, ground and aircraft systems, simulator training, and the initial operating experience, wherein the pilot applies what they learn in the previous training phases. The airline submits these training programs for approval by the FAA to ensure they meet Part 121 requirements. As part of its oversight responsibility, FAA has safeguards in place to ensure that pilot applicants are provided the necessary training and undergo complete and thorough pilot certification examinations. The National Program Guidelines (NPG), initiated in 1985, are oversight policy guidelines developed annually by FAA for its eight regional offices and their associated local district offices for oversight of pilot schools, pilot examiners, and flight instructors. The NPG identifies required inspections and optional inspections. As part of this oversight process, FAA uses the Program Tracking and Reporting Subsystem (PTRS) for scheduling and recording inspection records and findings for NPG inspections of pilot schools, flight instructors, and pilot examiners. Additionally, FAA uses the Enforcement Information System (EIS) for tracking and reporting information about any enforcement actions the agency takes for statutory or regulatory violations. As a member of the International Civil Aviation Organization (ICAO), the United States conforms to international standards and recommended practices for pilot training and certification. ICAO is the international body that, among other things, promulgates international standards and recommended practices in an effort to harmonize global aviation standards. These standards and recommended practices are developed to ensure that civil aviation throughout the world is safe and secure. ICAO has no enforcement powers and only establishes recommended standards and guidelines, e.g., licensing requirements for flight crew personnel, including pilots. Therefore, ICAO members (known as contracting states) decide whether to incorporate the standards and recommended practices into national laws or aviation regulations. 
The roughly 3,400 U.S. pilot schools can be divided into three categories: (1) non-collegiate flight instructor-based schools, (2) non-collegiate vocational pilot schools, and (3) collegiate aviation schools. The training provided by the school varies in the minimum requirements for the flight and ground school hours required for each certification level, level of oversight provided by FAA, and level of educational instruction being provided by type of school. However, all student pilots have to successfully complete ground and flight training and pass the same knowledge and practical tests prior to receiving a pilot certificate from FAA. Nevertheless, there is no consensus and little empirical evidence on how the different pilot training schools compare in preparing professional pilots for the commercial airline industry. In addition, modern aircraft used by regional airlines have evolved and the operational demands have increased on pilots in high-altitude and complex airline operations; yet, U.S. pilot training requirements for certification of commercial pilots were last revised in 1997. For example, it is possible for an individual to obtain all levels of pilot certifications (i.e., private, commercial, and airline transport) in a general aviation flying environment in a single-engine aircraft. However, in order to qualify to be hired by an airline, a commercial pilot would also need to obtain instrument and multi-engine ratings. Some stakeholders, including representatives of regional airlines, we interviewed said the current training regulations for commercial pilots should be revised to incorporate additional training that would improve the competency of entry-level first officer applicants. FAA has initiated several efforts to address issues related to pilot training and certification testing. Recent legislation requires that FAA develop regulations increasing pilot certification requirements for all airline pilots. Approximately 3,400 pilot schools exist in the United States and the most basic difference among the types of schools is the training environment provided to the students. For the reporting purposes of our study, we divided them into three categories: (1) non-collegiate flight instructor- based schools, (2) non-collegiate vocational pilot schools, and (3) collegiate aviation schools. Non-collegiate flight instructor-based schools (Part 61). Pilot training conducted under Part 61 regulations is often provided by an individual, for-hire flight instructor who can operate independently as a single-instructor school at a local airport with a single aircraft on which to train students. Other flight instructor-based schools operate as a more traditional training school with a small, physical facility located at a particular airport. These schools are the most common type (see fig. 3). The majority of students that complete training in non-collegiate, flight instructor-based schools are generally interested in recreational flying, although most commercial pilots in the United States also undertake this type of training as the initial path toward becoming an airline pilot. Flight instructor-based schools offer flexible training environments to meet specific students’ needs as long as they pass the final tests. These schools are not subject to direct FAA oversight beyond the initial certification and subsequent renewal of the flight instructor’s certificate. 
However, flight instructors may be inspected by FAA when a triggering event related to the training being provided occurs, such as involvement in an aircraft accident. Non-collegiate vocational pilot schools (Part 141). Vocational schools elect to apply for an operating certificate from FAA to provide pilot training under Part 141 regulations. Part 141 regulations require these schools to meet prescribed standards with respect to training equipment, facilities, student records, personnel, and curriculums. Vocational schools must have structured and formalized programs and have their detailed training course outlines or curriculums approved by FAA. Curriculums can vary in content, but FAA provides fundamental core training guidelines that must be followed within the curriculum for the school to receive a certificate. These schools do not allow the flexibility of flight instructor-based schools because the training sequence outlined in the curriculum cannot be altered. FAA requires annual inspections of these schools, unlike flight instructor-based schools. Collegiate aviation schools (Part 61 or Part 141). Pilot training is also provided through 2- and 4-year colleges and universities, which typically offer an undergraduate aviation-based degree along with the pilot certificates and ratings necessary to become a commercial pilot. In general, most of the collegiate aviation schools provide pilot training under a Part 141 certificate, although they can provide training under Part 61. Collegiate schools that provide training under Part 61 regulations generally offer structured, curriculum-based training similar to that of collegiate schools with a Part 141 certificate. Figure 3 displays how each of the three types of pilot schools is dispersed across the United States. For the most part, pilot schools must provide training that includes both classroom instruction and flight training. Classroom training, or ground school, provides students with the required aeronautical knowledge and cognitive skills necessary to perform the tasks required to become a pilot. Flight training focuses on learning how to manipulate the controls of an airplane and make it perform certain maneuvers. Regardless of the type of school, flight instructors must teach students by demonstrating and explaining, on the ground and in the air, the basic principles of flight (e.g., airspace, aerodynamics, weather factors, and Federal Aviation Regulations). The number of training flight hours required for pilot certification varies by the aviation regulations being used to provide pilot training. Because training under Part 141 regulations requires a school to use an FAA-approved curriculum, fewer hours of actual flight training are required than under Part 61. Figure 4 shows the differences in general characteristics of the types of U.S. pilot schools. FAA regulations do not prescribe the entry requirements, selection criteria, or screening procedures for students seeking entry into U.S. pilot schools, and as a result, these can vary considerably among schools. In general, pilot schools admit those students who can pay for the training; however, FAA sets a minimum age requirement for each pilot certification and requires that every student pilot hold a current FAA medical certificate and be able to read, speak, write, and understand the English language.
FAA’s long-standing requirement for English proficiency complies with the 2008 ICAO standard that all private, commercial, or airline transport pilots who operate internationally have a pilot certificate that documents their level of English language proficiency. If a person is determined to be proficient during the FAA practical test for pilot certification, FAA issues pilot certificates with an “English Proficient” endorsement to attest that the pilot meets the ICAO standard. One of the distinctive characteristics of collegiate schools is that they are generally accredited academic programs, which recognizes a level of program quality. However, a recently created organization, the Flight School Association of North America, implemented an accreditation program for non-collegiate pilot schools in August 2011 that is intended to establish an educational quality standard. According to association officials, accrediting non-collegiate pilot schools will help to level the playing field with the collegiate schools and assist consumers in comparing pilot schools. Regardless of the type of pilot school that students attend, once training has been completed, pilot candidates must pass the same knowledge and practical tests to obtain a pilot certificate. FAA uses a multiple-choice knowledge test to measure the extent to which applicants for FAA pilot certificates have mastered the required aeronautical knowledge areas provided in ground school. To pass, applicants must achieve an overall score of 70 percent or higher. However, some aviation stakeholders have voiced concerns about whether the current knowledge test actually requires students to learn the material, as opposed to simply studying sample test questions from publicly available sources. Literature related to pilot certification and training issues and some aviation stakeholders have pointed out that FAA testing is generally based on rote memorization. They stated that this encourages instructors and students to focus on memorizing test questions to pass the required FAA knowledge test, rather than developing a true understanding of the material. In 2004, the National Aeronautics and Space Administration (NASA) published a study on FAA’s pilot knowledge tests. NASA found that many applicants completed the test in far less time than would be required for the average human to even read the questions and answers on the test—indicating that students had memorized the questions and answers—which raises concerns about the extent to which students actually mastered the material. Little empirical research has been conducted comparing the extent to which different types of pilot schools prepare pilots for the commercial airline industry. We reviewed a pilot source study, published in 2010 and authored by professors from several accredited collegiate aviation schools, that researched the impact of collegiate aviation training on preparing students to be regional airline first officers. The researchers analyzed data on how 2,156 new-hire pilots performed in the training programs of six regional airlines from 2005 through 2009. The study found that the new-hire first officer pilots with the highest rate of success in airline training—that is, those who needed the least extra training to complete training tasks and failed to complete tasks the fewest times—(1) were graduates of accredited college flight degree programs, (2) had experience as flight instructors, and (3) had accrued between 500 and 1,000 flight hours.
We reviewed the study and determined that, while statistically significant, the results of this research showed small differences in the correlations that supported these three conclusions. Phase II of the study, completed in early 2011, expanded the research to include testing of multiple variables with the same dataset, but the researchers did not report the results of those tests. Phase III is currently underway and is expanding the current dataset to include more than six regional airlines. Phase IV will include more detailed background data on the newly hired pilots to determine relationship factors. Other than the 2010 pilot source study, our literature review found that little other academic research exists and there is no consensus about how well the different types of pilot schools prepare commercial pilots for airline operations. We received a variety of perspectives from industry representatives and some anecdotal information suggesting that one major benefit of completing a structured training program is that it provides better aeronautical knowledge (ground school) instruction than an unstructured learning environment. Officials from two regional airlines, two collegiate aviation schools, and four industry associations with whom we spoke generally agreed with the results of the initial phase of the pilot source study. Officials from the Regional Airline Association (RAA) told us that the broad-based curriculum used by vocational and collegiate aviation schools is the better method for preparing a person for a professional airline career. Furthermore, officials from 6 of the 12 industry associations and one mainline airline we interviewed considered the quality of education at many of the collegiate aviation schools to be the highest level of civil aviation pilot training because collegiate schools are designed to produce professional pilots for airlines, rather than for other aviation jobs. Collegiate curriculums also cover a broad range of areas above FAA minimum training requirements. In addition, representatives from all but one of the regional airlines we interviewed generally told us they strongly prefer, but do not require, first officer candidates trained in collegiate aviation schools because they perform better in their airline’s training program when hired. Due to limited screening, training structure, and variability of educational content, according to some of the regional airline officials, flight instructor-based schools are less likely to produce first officers who are prepared, immediately upon completing training, to enter the workforce and succeed in an airline environment. On the other hand, stakeholders from 4 of the 12 industry associations pointed out that a large number of pilots matriculate through flight instructor-based schools and that many are hired by regional airlines without any performance issues. Representatives from two of the regional airlines indicated that the professional pilot experience gained through commercial aviation positions after completing pilot training is more important than the type of pilot school attended. Several industry stakeholders have stated that current training requirements for commercial pilots are not aligned with today’s commercial airline environment. FAA requires the same initial training for a pilot hired as a first officer of a regional airline carrying passengers as it does for a pilot hired to fly for a commercial non-airline operation, for example, crop dusting.
The Air Line Pilots Association (ALPA) has suggested that FAA revise the regulations to make a clear distinction between training and certification requirements for airline operations and those for other types of commercial operations. ALPA contends the regulations were developed in an era in which commercial pilots were hired by airlines in small, slow, propeller-driven aircraft or as flight engineers on jet-powered aircraft. It would traditionally take several years and thousands of flight hours before these pilots were given an opportunity as a first officer of jet transports. However, according to ALPA, it is not uncommon today for newly hired pilots to be hired directly into airline training programs to become first officers of high-altitude, complex aircraft carrying 50 or more passengers, the type of aircraft that warrants pilots with more knowledge and greater skills than the new-hire airline pilots of the past. Officials from two industry associations and eight regional airlines advocated for a separate pilot certification track with additional training requirements specific to being an airline pilot. Because airline pilots are responsible for the safety of the flying public, according to ALPA, it is reasonable that they should be held to a higher standard of competency, knowledge, and training than pilots in other flight operations. Additionally, requirements for a commercial pilot certificate do not emphasize training in some areas—like decision-making and using modern technologies—that are directly related to the airline pilot profession. According to FAA and other stakeholders, the regulations regarding ground school and flight training, as well as the test standards for a commercial pilot certificate, generally emphasize the mastery of maneuvers and individual tasks to determine competence. The emphasis is on development of motor skills to satisfactorily accomplish individual maneuvers—whereas only limited emphasis is placed on decision-making—unlike in scenario-based training that emphasizes improving operational experience. In addition to traditional skills of flying, navigating, and communicating, pilots in today’s newer aircraft have to manage automation, information displays, and other new technologies. According to the FAA Industry Training Standards’ guidance material for the commercial pilot certificate, a growing number of pilots are being hired by regional airlines as first officers to operate aircraft with these advanced avionics and systems. While these pilots may gain flying experience and spend years building flight time in commercial non-airline jobs or as flight instructors, this experience may be accumulated in smaller, slower, and less advanced aircraft. Modern aircraft offer advanced avionics and performance capabilities and many of these new aircraft travel faster and further than older generation commercial aircraft. While generally considered enhancements, these modern technologies require increased technical knowledge of newer systems and avionics and new skills for managing automation and computerized flight and navigation systems. According to literature, as airspace complexity and air traffic density increase, airline pilots must have increased situational awareness, understand risk assessment, and have a complete understanding of managing the automation of the aircraft. The current training requirements and testing for a commercial pilot certificate do not emphasize the development of these skills. 
Representatives from 10 regional airlines, 4 pilot schools, and 2 industry associations we interviewed said the current training regulations for commercial pilots should be revised to incorporate additional training requirements that would improve the performance capabilities of the first officer applicants who seek employment at airlines, such as exposure to advanced jet concepts and simulation, aircraft unusual attitudes (i.e., upset and stall recovery), flight crew coordination and environment, and scenario-based training. However, when pilots are hired by airlines, these types of training are provided by the airline to ensure that pilots are adequately competent in these and other advanced training areas—some required by FAA for airline operations. For example, FAA regulations for airline operations require that all pilots be provided crew resource management training as part of the airline’s new hire and recurrent training programs. According to ALPA, the lack of specific training requirements to be a commercial airline pilot results in a wide range of initial training experiences, not all of which are well suited for the commercial airline industry. To compensate, some regional airlines, such as SkyWest Airlines, use various flight training devices to screen pilots during the hiring process to gauge their piloting skills (see fig. 5). However, if FAA requires additional training for pilots prior to their being hired by an airline, students would likely be responsible for the extra costs involved, which would add to the total cost of pilot training they bear. (For more information on the costs associated with pilot schools, see app. II.) The industry concerns about current training regulations for commercial pilots and incorporating additional initial training requirements to improve first officer applicants’ performance capabilities could be addressed for all airline pilots by the Airline Safety and Federal Aviation Administration Extension Act of 2010. Currently, while a captain for a commercial airline is required to hold an airline transport pilot certificate—the highest level of pilot certification, requiring the most total flight hours—a first officer is required to hold only a commercial pilot certificate, which requires a minimum of 250 flight hours. However, the recent law will require that each pilot (captain and first officer) have an airline transport pilot certificate, which currently requires a minimum of 1,500 total flight hours. Individuals interested in becoming a first officer for a regional airline generally complete pilot school training with a commercial pilot certificate and about 300 to 500 flight hours. The 2010 law directs FAA to conduct a rulemaking and effect the changes no later than August 2013. According to FAA, it will issue a notice of proposed rulemaking regarding the increased requirements in the fall of 2011. Representatives of the regional airlines we interviewed were concerned this legislation will reduce airlines’ hiring flexibility and make it harder to find qualified first officers who possess an airline transport pilot certificate. As another potentially relevant factor, the law stated that the FAA Administrator may allow specific academic training courses—beyond the additional courses required by the legislation to raise the minimum requirements for the airline transport pilot certificate—to be credited in lieu of flight hours needed to obtain an airline transport pilot certificate.
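To illustrate the size of the gap the new requirement could create, a rough arithmetic sketch based on the flight-hour figures cited above (the hours any individual pilot would actually need will depend on FAA's final rule and any academic credit allowed):

1,500 hours (airline transport pilot certificate minimum) − 300 to 500 hours (typical newly certified commercial pilot) ≈ 1,000 to 1,200 additional flight hours

that a first officer candidate would need to accumulate, or partly offset through any academic credit FAA allows, before becoming eligible for hire under the act.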
According to FAA’s First Officer Qualifications Aviation Rulemaking Committee Report, well-structured training programs that feature integrated academic content and flight experience optimize the pilot learning process, and the committee supported new, higher-level minimum certification requirement for first officers. To support the concept that academic training courses should be credited for some of the additional total flight hours, the report outlined a system for crediting academic training courses based on the sources of pilot training, e.g., vocational pilot schools, collegiate aviation schools, or military. The Coalition of Airline Pilots Associations (CAPA) and National Air Disaster Alliance-Foundation presented dissenting opinions to this approach in the report and suggested that academic courses, while necessary, should not replace an increase in total flight hours required in the law. Many of the collegiate aviation schools provide specialized training in a flight simulation training device using realistic scenarios, including some coursework and advanced flight training in jet aircraft systems and airline operational procedures. For example, Embry-Riddle Aeronautical University’s Aeronautical Science degree program is designed to prepare graduates for a career as a professional pilot in multi-crewmember, jet aircraft. Courses include communication theory and skills, aircraft turbine engines, crew resource management, aviation weather, jet transport systems, and optional upset recovery training. Officials who represented 10 of the 24 regional airlines we interviewed listed some of these types of courses as examples that FAA could require as part of pilot schools’ training curriculums that would improve the skill level and competency of applicants seeking to be hired as first officers. Some collegiate aviation schools—and some large flight instructor-based and vocational pilot schools—have developed relationships with the training departments of some regional airlines, referred to as bridge programs, in order to qualify their students with advanced training procedures involving regional jet simulators. Students enrolled in a bridge program will train on flight simulators and other flight training devices, become familiar with regional jets, and often learn airline-specific operational procedures in a multi-crew environment. The airlines, in turn, will offer interviews to students from those programs that successfully complete the school’s curriculum and earn all of the pilot certification credentials. A bridge program is designed to bridge the gap between the general aviation training experience in small single- and multiengine aircraft and a professional airline career. For example, Arizona State University’s aviation degree program has established a bridge agreement with a regional airline, Mesa Airlines, which allows the students to train in full-motion simulators of the regional jets that Mesa operates (see fig. 7). FAA has initiated several efforts to address issues related to pilot certification testing and training. In 2010, FAA began revising the repository set of questions that it uses to create its knowledge tests for pilot certification. FAA found that a significant percentage of applicants tested on the new questions failed the test compared to those that took the test with the previous questions. FAA plans to cooperate with industry representatives on future changes to the knowledge test questions and would likely implement any further changes over the next 2 years. 
Additionally, according to FAA officials, FAA has developed other plans related to improving pilot training to be implemented during fiscal year 2012. FAA plans to establish a government and industry working group during fiscal year 2012 to address issues related to pilot certification testing standards and training. The group will make recommendations to FAA on a variety of issues, including knowledge content, technical information related to pilot knowledge and practical tests, computer testing supplements, knowledge test guides, and practical test standards. FAA is currently updating its national guidance and associated handbook for FAA inspectors on the recurrent training for flight instructors, which the aviation industry conducts by providing refresher training courses. As stated earlier, part of the oversight for flight instructor-based schools is the subsequent renewal of the flight instructor's certificate every 24 months. The refresher training is designed to keep flight instructors informed of changes to flight training and is one of several methods by which a flight instructor may renew a flight instructor certificate. FAA is also updating its guidance on the review process for the 24-month certification renewal for pilots and flight instructors. According to FAA, the current guidance is outdated and the revised version will provide more detailed guidance for the renewal review process, updated terminology and references, and reorganized review content. FAA plans to make changes to the practical test standards (i.e., guidance used for conducting the flight test portion of a practical test) to incorporate required testing for runway incursions. FAA's goal with the revised standards is to reduce runway incursions by 2 percent annually from the current level. Currently, the part of the practical test standards for evaluating a student's knowledge of runway incursions is not required or specifically outlined. However, the planned changes to the practical test standards include labeling runway incursions as a required testing task that includes specific procedures to be conducted during the test. FAA has also recently initiated efforts to partner with the aviation academic community through a 5-year plan initiative, working through AABI and the University Aviation Association (UAA), to leverage academic expertise and develop best practices for improving all pilot training. The goal is to identify specific non-regulatory measures that can be used to improve training and reduce accidents. Other efforts to improve pilot training have generally focused on advanced training for pilots already employed at airlines—not on initial pilot training for students at pilot schools. Over time, U.S. pilot schools have become the primary source for producing pilots for the airline industry. In May 2011, in response to comments on a January 2009 notice of proposed rulemaking and requirements in the Airline Safety and Federal Aviation Administration Extension Act of 2010, FAA issued a supplemental notice of proposed rulemaking that would require existing airline pilots to train as a complete flight crew and coordinate their actions through crew resource management and scenario-based training, among other things.
The rule would require airlines to provide ground and flight training to all existing airline pilots in the recognition and avoidance of stalls, recovery from stalls, recognition and avoidance of aircraft upset, and the proper techniques to recover from upset—all related factors evidenced in the Colgan Air crash in 2009—as required in the act. Additionally in May 2011, NTSB issued a series of recommendations to FAA related to first officer leadership and communications training. One recommendation to FAA, to which FAA has not yet formally responded, was that role-playing or simulator-based exercises that teach first officers to assertively voice concerns and that teach captains to develop a leadership style that supports first officer assertiveness be included as part of the already required crew resource management training for airline operations. These proposals are designed to enhance the training programs of airlines rather than the training requirements for pilot students; however, some of these ideas could be applied to the initial training for students seeking commercial airline careers. In addition, other aviation industry efforts are being developed that focus on improving all pilot training. For instance, one aviation organization has proposed the development of a global professional pilot certification to bridge the competency gap between pilot certification and employment. The professional pilot certification would be based on a set of standards to ensure a newly-certified pilot has the knowledge required by the standards to enter the pilot profession. A not-for-profit association of pilot training professionals has proposed the development of an independent, international clearinghouse for pilot training best practices. The International Air Transport Association (IATA) launched the Training and Qualification Initiative in 2007 to update and modernize the training of existing and future pilots. The initiative's goal is to make pilot training more accurately reflect the needs of flight deck operational procedures. Aviation stakeholders, such as Boeing and Flight Safety International, are offering a first officer's course to bridge the skill and competency gap between the training received at pilot schools and employment as a commercial airline pilot. Similar to FAA, EASA provides the regulatory framework for oversight of European countries' national aviation authorities, which carry out the requirements for pilot licensing and training. The creation of EASA is the centerpiece of the European Union's strategy for developing one level of aviation safety across Europe. EASA establishes common safety and environmental regulations and standards, and monitors the implementation of standards through inspections in its member states under authorization of the European Union. The United States and Europe provide a good comparison for aviation licensing as they offer equivalent pilot certificates, but they provide training for these certificates in different ways, as we describe below. Both FAA and EASA meet the ICAO standards for pilot certification and offer the private pilot, commercial pilot, and airline transport pilot certifications (see table 2). An FAA pilot certificate is the equivalent of an EASA pilot license (we will refer only to a certificate—not a license—for simplicity of reporting).
In particular, for commercial pilots who may be hired as airline pilots, FAA has less exhaustive ground school requirements and practical testing requirements for pilot certification than EASA. Pilot certification in the United States emphasizes piloting skills and, thus, concentrates more on flight training. While FAA pilot certification regulations require some ground school instruction, the regulations do not emphasize the need for formalized training. As previously stated, FAA's pilot training regulations require that a person applying for a private or commercial certificate must receive and log ground training from a flight instructor or complete a home-study course on the applicable ground school areas. Part 61 regulations do not specify a required number of training hours for ground school for any pilot certificate—although they require that minimum knowledge subject areas be covered during training—but do specify a minimum number of actual flight hours. On the other hand, the European system emphasizes and requires ground school training hours along with actual flight training hours for pilot certification. Similar to FAA, EASA has no minimum number of ground school training hours for a private certificate, but it requires a minimum of 350 hours of instruction for the commercial certificate and 750 hours for the airline transport certificate. The United States and Europe also have differing approaches to the pilot certification knowledge and practical testing. While FAA uses one multiple-choice knowledge test and requires an overall score of 70 percent or higher to pass, EASA uses multiple tests for each certificate, with both multiple-choice and open-ended questions. Pilot test applicants must score at least 75 percent on the questions related to each ground school area being tested. European officials told us that the quantity and variety of tests given ensure students have a true understanding and application of the aeronautical knowledge provided in ground school necessary for being a skilled pilot. Also, FAA's pilot certification system places greater emphasis on practical testing on various types of equipment. In addition to the required number of hours of ground instruction and tests, the United States and Europe differ on the type of aircraft on which pilots are trained and tested. As stated earlier, much of U.S. pilot training takes place only in a single-pilot, single-engine airplane, which is not reflective of today's modern jet aircraft and the training needs of an airline pilot, according to aviation stakeholders. FAA does not require airline transport pilot applicants to show proficiency in a multi-pilot airplane, either as a captain or first officer. However, FAA officials stated that airlines are required to provide that training to their pilots after they are hired and before they transport passengers. EASA, however, requires airline transport pilot applicants to show proficiency in operating as a first officer on multi-pilot, multi-engine planes, and there is greater emphasis in Europe on training in a multi-crew environment. The U.S. pilot training system is based on the traditional aviation philosophy, which relies on a modular (building block) approach. This model requires pilots to obtain different certifications by building competencies and experience through a certain level of aeronautical knowledge (ground school training) and a minimum number of flight hours.
In a modular training approach, students are provided different training modules that are independent of each other. Pilot certification in the United States is based on these building blocks whereby each level of pilot certificate builds upon the knowledge and experience gained at the previous level. Thus, commercial airline pilots are trained through various levels of pilot certification by meeting ground school and flight training requirements at each level, and then must gain actual flying experience through various aviation jobs to build the necessary hours to be hired by a regional airline. Europe has two training systems for acquiring pilot certificates and ratings—the step-by-step modular training approach and an integrated training program approach (ab initio). Similar to the United States, the modular training programs train students through modular courses and in-flight training at their own pace. This approach is usually pursued part-time or on a non-continuous basis, normally focusing on flying solo, and starts with a private pilot certificate. The integrated training approach requires student pilots to attend an approved flight training organization for an approved full-time course and emphasizes a multi-crew environment. European officials we spoke with explained that the integrated approach is specifically directed towards individuals interested in becoming an airline pilot. The fundamental philosophy of ab initio training—which is also the traditional approach by which U.S. military pilots are trained—is the belief that a competent, proficient airline pilot can be trained to airline standards with as little as 350 hours of flight time, provided the student is immersed in a properly designed aviation curriculum from the outset. In other words, the training approach is not based on the quantity of hours of training, but rather on the quality of the training to better enable an individual to achieve competency. A few U.S. pilot schools offer an ab initio training program; however, these programs are generally provided by universities and require the student to take part in a 4-year program, generally a longer timeframe than it takes to complete a European ab initio program administered by an airline-sponsored school. U.S. ab initio programs train pilots for positions with regional or commuter airlines, whereas in Europe, ab initio training is more specifically for pilots being trained for mainline airlines. Another major difference between the United States and European countries is the implementation by EASA of regulations to support the multi-crew pilot license—an ICAO-approved, alternative pilot-training and certification concept specifically geared toward training commercial airline pilots. According to ICAO, a total of 32 ICAO member states have regulations in place for the multi-crew pilot license. However, currently only 13 of the 190 ICAO members (7 percent) have approved training organizations to conduct training for the multi-crew pilot license, with different training schemes in progress. The training is designed to focus on mastering the competencies specific to becoming an airline first officer. The multi-crew pilot license, established by ICAO, requires at least 240 hours of total flight training and consists of actual flight time and simulation time for meeting competency milestones.
However, the license is not a general pilot certificate; it must be granted to an individual for a specific aircraft type and limits the individual to serving as a first officer for a specific airline. FAA has not developed regulations for a multi-crew pilot certification, and there are differing views on its usefulness and necessity in the United States. FAA officials said they have been studying the feasibility of implementing the necessary regulations for U.S.-based commercial airlines, but they also indicated that U.S. airlines have not publicly shown interest in a multi-crew pilot certification due to the availability of a broad pool of commercial and airline transport pilots in the United States. Representatives from three regional airlines and one industry association told us that, with the number of furloughed pilots as a result of the economic downturn in 2008, application of the multi-crew pilot certification is not needed in the United States and would be too restrictive in nature. The certification would limit pilots to being first officers, limit them to a specific aircraft type, and not allow them to transfer to other airlines. Traditional pilot certificates require more training hours, but do not include such restrictions. Part of the reason European countries and airlines have adopted the ab initio approach is to address a shortage of qualified airline pilots. Historically, U.S. airlines have recruited experienced pilots from the robust U.S. general aviation community and the U.S. military. The United States also has significantly more pilot schools than Europe. Conversely, Europe has not benefited from a steady stream of military pilots or a thriving general aviation sector. As a result, European schools are mostly focused on producing commercial airline pilots. European countries and airlines have used the ab initio training model and multi-crew pilot license to increase the number of available airline pilots. At times, in response to pilot shortages, European airlines have funded the training for their pilot candidates. After the screening process, many student pilots in the European countries we visited are provided training through airline sponsorship or an agreement for employment with an airline. Examples of airlines that follow this practice include Lufthansa and Air France, where students are offered the training as part of a partial sponsorship program, wherein candidates are required to pay a small portion of the training costs upfront (Lufthansa provides loans to students to cover this cost). Once training is completed, Lufthansa and Air France enter into an employment contract with the candidate, and he or she repays the loans via a lower initial salary. Similarly, while British Airways does not sponsor students as fully as it has in the past, when it does, the students pay for their training through their salary once they begin working for the airline. On the other hand, KLM does not sponsor candidates; however, it does partially fund an insurance policy to help cover the banks' default risks on student training loans in the event of early termination due to poor performance, a failed medical examination, or other unforeseen circumstances. If the insurance policy is executed, students are contractually obligated to cease pursuit of a pilot career. U.S. airlines do not sponsor students for initial pilot training.
As mentioned previously, pilot training in the United States is provided to individuals based on the self-sponsored concept—if they can pay, they can train. Students do not need to meet certain qualifications to train. However, once pilots are hired, all U.S. commercial airlines are required to provide advanced training for them. Several officials stated that most pilot students in the United States are not interested in becoming commercial airline pilots and pursue training to fly recreationally. According to literature, most pilot schools in the United States conduct little screening of students who apply, whereas European schools routinely use comprehensive mechanisms to identify the most qualified students so as to provide the best pilot candidates to their sponsoring airlines. The candidate screening process is generally the same across Europe and includes several interviews, various psychological tests, and scenario-based testing. According to some European officials, pilot schools in Europe focus on selection procedures and aptitude screening. According to officials at European airlines, basing selection on fixed standards instead of selecting candidates influenced by commercial pressure assures airlines that they are training a qualified pilot. Officials at these European airlines and the pilot schools they sponsor noted that pilot training in Europe is very expensive, and selecting the right pilot candidates is important because of safety reasons and the upfront investment for the company. In the United States, however, the most intensive screening process occurs when pilots seek employment with airlines. The airlines independently assess candidates' work experience and technical and non-technical skills before hiring. (For more information on the demand for and supply of pilots, see app. II.) According to some aviation stakeholders, U.S. pilot schools would lose revenue if they screened or selected students as is done in Europe and would be challenged under privacy and anti-discrimination laws. For fiscal year 2010, our analysis of FAA's PTRS data found that FAA completed about 78 percent of the required inspections for the 545 pilot schools with a Part 141 certificate (vocational schools and most collegiate aviation schools). As part of its oversight role, FAA monitors pilot schools with a Part 141 certificate to ensure that they meet the required safety and training regulations. To fulfill NPG requirements, FAA requires its inspectors to conduct on-site inspections of each of these schools at least once a year. The inspections focus on five areas related to pilot school operations and the airworthiness of training aircraft. Under operations, an inspection must cover the school's facility and student records. Under airworthiness, an inspection must cover the pilot school facility, compliance with FAA's airworthiness directives, and a Part 141 ramp check. Inspectors enter the details and results of their monitoring in FAA's PTRS data system. In reviewing PTRS data for fiscal year 2010, we found that FAA completed the majority of the required inspections of schools with a Part 141 certificate—that is, the inspections covered all five inspection areas during the year. However, we found that for 118 of the 545 schools that were inspected during the fiscal year, not all of the five inspection areas were covered (see table 3).
An annual inspection covering all of the inspection areas is important for overseeing these schools because it provides some assurance to FAA that they are meeting the regulatory requirements for providing adequate training to pilot students. We were unable to determine whether the data were missing because they were entered incorrectly into PTRS, or because the inspections did not take place as required. We were also unable to determine the extent to which inspections were completed in fiscal years prior to 2010 due to limitations in tracking the number of pilot schools that existed in each fiscal year. FAA does not maintain a historical listing of the active pilot schools with a Part 141 certificate for a given fiscal year and, thus, we could not define the universe of pilot schools that was required to be inspected during fiscal years 2006 through 2009. As a result, we could not determine the identity and number of schools that needed to be inspected. While FAA officials recognized that all required NPG inspections are expected to be completed within each fiscal year time frame, they provided several reasons to potentially explain why some of the required inspections are not recorded in PTRS. FAA officials said that inspectors may have conducted the required inspections for some schools but incorrectly entered the details in PTRS. For example, some FAA inspectors may conduct full inspections of schools that cover the five inspection areas, but may enter only two of the five inspection numbers into PTRS. The officials also said that inspectors had additional duties—such as following up on previously identified issues or addressing the need for additional oversight for certain inspection areas—in conducting inspections for some schools, which can make covering all five inspection areas difficult. In addition, many of the Part 141 pilot school inspections required by NPG are incorrectly recorded as discretionary inspections in PTRS, making it difficult to use PTRS to determine if FAA had conducted all of the required inspections of Part 141 pilot schools for a given year. Specifically, 35 percent of the 4,551 Part 141 pilot school inspections required by NPG in fiscal year 2010 were listed incorrectly in PTRS as discretionary inspections. Also, 32 percent of the required inspections were listed as discretionary from fiscal year 2006 through 2010. As a result, those inspections would not show up in a list generated in PTRS of required inspections for fiscal year 2010. FAA officials said that this problem is likely caused by the program that populates PTRS with the NPG requirements. When inspections or other sources reveal compliance issues or violations, FAA uses a variety of actions to enforce safety standards and regulatory compliance, such as oral or written counseling, administrative action, legal enforcement action, and referral for criminal prosecution. When an FAA inspector identifies a potential violation, he or she initiates an investigation, and if FAA determines that a violation has occurred, the agency has a wide range of options available for addressing it. From fiscal years 2006 through 2010, our analysis of FAA's EIS data found that FAA initiated 230 enforcement cases against pilot schools with a Part 141 certificate. The majority of these cases resulted from an inspection of a school, but others may have resulted from other sources. During fiscal years 2006 through 2010, FAA used a wide range of enforcement actions against pilot schools with a Part 141 certificate (see fig. 7).
No action: FAA can determine that no action is warranted. In 26 of the 230 cases (about 11 percent), enforcement cases were initiated, but no enforcement action was taken. Administrative actions: In 186 cases (about 81 percent), FAA used administrative actions to address violations. These actions refer not only to warning notices and letters of correction but also to informal actions, such as oral or written counseling, which can also be used by inspectors to address an apparent violation, provided that certain criteria are satisfied and the apparent violation is a low safety risk. Enforcement actions: We found that FAA rarely used punitive means such as revoking licenses and assessing penalties against pilot schools with a Part 141 certificate. FAA assessed monetary civil penalties in 12 cases (about 5 percent) for pilot schools with a Part 141 certificate, and the sanctions ranged from $500 to $20,000. FAA revoked the operating certificates of schools in 3 cases, or slightly more than one percent. To illustrate the severity of the actions that led to revocation of operating certificates, these included knowingly permitting school training aircraft to be used to carry illegal controlled substances and intentionally crediting training to or graduating students improperly. From fiscal years 2006 through 2010, our analysis of FAA's PTRS data found that FAA completed 9,016 inspections of pilot examiners, but it is unclear whether FAA met all of its oversight requirements in this area (see fig. 8). FAA uses private individuals or organizations to supplement its workforce and to provide certification activities such as examining and testing of pilot applicants for a fee paid by the applicant. Known as designees, pilot examiners are generally appointed by FAA's local district personnel for either 3 years (for an individual) or 5 years (for an organization). As part of its oversight role, FAA requires each pilot examiner to be inspected by FAA inspectors at least once annually, and high-activity pilot examiners must be inspected at least twice annually, as outlined in the agency's oversight policy and NPG directives. Additionally, several other circumstances may require an FAA inspector to inspect a pilot examiner, such as noncompliance with the applicable certification policies, an excessively high certification passing rate, or involvement in an accident, incident, or other violation. Although we know the number of inspections conducted for each fiscal year, we could not determine the completion percentage of the required inspections—either the routine annual inspections or the additional inspections for high activity or special circumstances—for each fiscal year due to limitations in available data for the population of pilot examiners. Although we could not determine the completion percentage of the required inspections for pilot examiners, our analysis of PTRS inspection data showed 1- and 2-year gaps in the oversight of some pilot examiners. For instance, we found 114 pilot examiners with a 1-year gap between inspections, and 11 pilot examiners with a gap of 2 years. This may indicate that required inspections of pilot examiners were not completed by FAA in a given fiscal year or that inspections were unnecessary due to inactivity of the examiners during that year. FAA officials told us that, until recently, FAA had not analyzed the extent to which it has conducted all required pilot examiner inspections on a national level.
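The gap analysis described above essentially compares, for each pilot examiner, the fiscal years in which at least one inspection was recorded. The following sketch, written in Python, illustrates one way such a check could be run against an extract of inspection records; it is only an illustration—the record layout, examiner identifiers, and values shown are hypothetical and are not drawn from FAA's actual PTRS schema.

from collections import defaultdict

# Hypothetical extract of PTRS inspection records: (examiner_id, fiscal_year).
# Real PTRS data would first need to be mapped into this simple shape.
records = [
    ("DPE-001", 2006), ("DPE-001", 2007), ("DPE-001", 2009),  # 1-year gap (2008 missed)
    ("DPE-002", 2006), ("DPE-002", 2009), ("DPE-002", 2010),  # 2-year gap (2007-2008 missed)
    ("DPE-003", 2006), ("DPE-003", 2007), ("DPE-003", 2008),
    ("DPE-003", 2009), ("DPE-003", 2010),                     # no gap
]

# Collect the fiscal years in which each examiner received at least one inspection.
years_by_examiner = defaultdict(set)
for examiner, fiscal_year in records:
    years_by_examiner[examiner].add(fiscal_year)

def largest_gap(years):
    # Largest run of consecutive fiscal years with no recorded inspection.
    ordered = sorted(years)
    return max((later - earlier - 1 for earlier, later in zip(ordered, ordered[1:])), default=0)

examiners_by_gap = defaultdict(list)
for examiner, years in years_by_examiner.items():
    gap = largest_gap(years)
    if gap > 0:
        examiners_by_gap[gap].append(examiner)

for gap, examiners in sorted(examiners_by_gap.items()):
    print(f"{len(examiners)} examiner(s) with a {gap}-year gap between inspections: {examiners}")

A gap alone would not distinguish a missed inspection from an examiner who was inactive in the intervening years, which is the same caveat noted above.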
FAA has previously taken steps to improve oversight of pilot examiners, but still faces issues in this area. In 2005, FAA developed 14 recommendations to improve pilot examiner compliance, 11 of which were implemented. FAA officials told us that the implementation of these recommendations has resulted in improvements in pilot examiner oversight guidance and in the information technology used to oversee pilot examiners for local district offices. They also said that, as a direct result of the recommendations, more of the pilot examiners with poor performance issues have been terminated and oversight of pilot examiners has improved. Nevertheless, FAA officials acknowledged they still face some issues in oversight of pilot examiners, due, in part, to the difficulty of compiling inspection data at the regional or national level with FAA's current data systems. In September 2010, FAA began developing quarterly assessment reports covering 12-month periods on the oversight of its designees, including pilot examiners, to assist in identifying oversight gaps and potential areas of concern. We reviewed the quarterly reports that covered July 2009 through March 2011 and found they identified a number of areas of concern regarding the oversight of pilot examiners. For example, some pilot examiners with the highest activity had not been inspected over the previous 12 months. FAA began creating the quarterly assessment reports to better inform management officials at the national office level. For example, in the most recent report provided by FAA officials, five high-activity pilot examiners were identified that performed a total of 1,623 pilot practical tests, but no inspections were conducted for these pilot examiners for the previous 12 months. Conversely, the report found that FAA conducted 218 inspections of the 171 pilot examiners with the lowest testing activity during the same period. Based on this report, FAA's inspections are not focused on the pilot examiners responsible for conducting the largest numbers of practical tests; instead, FAA is conducting more oversight of the examiners conducting significantly fewer tests. FAA officials said that the quarterly assessment reports are a temporary way of assessing the extent to which the agency is conducting all required inspections, but FAA is in the process of developing a new designee management system that it expects to be operational by July 2012. FAA officials told us that the new system is being designed to provide more comprehensive data on designees, including pilot examiners, by combining data that FAA currently maintains in various data systems. Thus, the new system will consolidate the management and oversight functions for designees to allow for more readily available data. The officials also told us that they are in the process of revising oversight policy for designees and improving tools for selecting which pilot examiners to inspect. Unlike oversight of pilot schools with Part 141 certificates and pilot examiners, annual inspections of individual flight instructors (i.e., under Part 61 regulations) are not required by FAA. From fiscal years 2006 through 2010, our analysis of FAA's PTRS data found that FAA completed 1,761 inspections of flight instructors. Oversight for flight instructors is generally limited to initial and subsequent certification renewal, but additional oversight of flight instructors is conducted as an optional work activity by FAA.
However, although these inspections are optional, FAA officials from one local district office told us that their inspectors make this area a priority in their planned work activities for a given fiscal year. According to FAA policy, flight instructor certificates are renewed every 24 months. However, inspections of flight instructors and their training activities should take place on a random basis in the interim and should be prioritized when, for instance, noncompliance is observed during a pilot school inspection, an instructor or student is involved in an accident or incident, or an instructor has a student failure rate of 30 percent or greater on FAA's certification tests. The oversight of flight instructors is important because, like the examiners, this population serves as a gatekeeper for ensuring that pilot students are being properly trained as they seek certification. However, similar to pilot examiners, we could not determine the extent of oversight that FAA provided for the entire flight instructor population or the specific reasons that inspections were conducted during the 5 fiscal years. As with pilot schools, when inspections or other sources reveal compliance issues or violations, FAA uses a variety of actions to enforce safety standards and regulatory compliance. For fiscal years 2006 through 2010, our analysis of FAA's EIS data found 178 cases against flight instructors (see fig. 9). No action: FAA can determine that no action is warranted. In 38 of the 178 cases (about 21 percent), enforcement cases were initiated but no enforcement action was taken. Administrative actions: In 109 cases (about 61 percent), FAA used administrative actions to address violations. These actions refer not only to warning notices and letters of correction but also to informal actions, such as oral or written counseling, which can also be used by inspectors to address an apparent violation. Enforcement actions: We found that FAA rarely used punitive means such as suspending or revoking licenses and assessing penalties against flight instructors. FAA suspended licenses in 9 cases (about 5 percent) and revoked licenses in 16 cases (about 9 percent). FAA also assessed monetary civil penalties in three cases (about 2 percent) for flight instructors. In addition to using PTRS and EIS, FAA inspectors develop their work plan using another somewhat limited oversight tool—the Safety Performance Analysis System (SPAS)—which provides data access and analysis on pilot schools, pilot examiners, and flight instructors. The SPAS system is a data analysis application that monitors performance measures and calls attention to trends. SPAS builds on inspection results and other data and is intended to assist FAA's local district offices in applying their limited inspection resources to those entities and areas that pose the greatest risk to aviation safety. For example, when particular inspection tasks warrant attention, SPAS will "flag" an advisory notification to an FAA inspector and prompt the inspector to look into the situation, e.g., a flight instructor with a high rate of student failure on practical tests. While SPAS may be useful at the local level, it does not have the capability to perform national level rollup analysis.
Thus, FAA does not have the ability to measure its annual performance in meeting the NPG inspection requirements for pilot schools and pilot examiners based on PTRS inspection data, or to make risk-based, data-driven decisions about the scope of its discretionary, planned work activities that include flight instructors. As a result, FAA's national office cannot readily access comprehensive inspection completion data and determine the level of oversight its field staff is providing for pilot schools and pilot examiners. Public and media concerns about aviation safety escalated as a result of the Colgan crash in early 2009, and Congress and FAA have taken steps to improve aviation safety by making revisions to the training requirements that airlines must provide for commercial airline pilots. Our analysis indicates that FAA has an opportunity to ensure that the initial pilot training process that produces pilots' commercial certificates still provides the knowledge and skills necessary for airline positions. Although many of the improvements to training being considered are for existing airline pilots—such as those related to decision making and operating in a crew environment—they may also apply to initial pilot training. We are not making recommendations in this area, because FAA has initiated some efforts and has plans for other efforts to address pilot training issues. For example, FAA plans to establish a government and industry working group during fiscal year 2012 to address issues related to pilot certification testing standards and training. Therefore, we encourage FAA to continue its efforts, with industry and academia collaboration, in reviewing the initial pilot training process, including ground school content, training hour requirements, and knowledge testing for commercial pilot certification under the Part 61 regulations. FAA's oversight of key functions for the initial pilot training process in the United States by which commercial pilots become certificated is reasonably sound. We found that FAA completed most required inspections of vocational pilot schools (Part 141), but the agency's data sources did not provide certainty that the level of oversight is consistently being performed in accordance with the agency's guidelines and policies, including oversight requirements for pilot examiners. FAA's national office does not monitor the annual completion of the requirements outlined in the annual NPGs related to monitoring pilot schools and pilot examiners. Therefore, the national office cannot readily determine the level of oversight its field staff is providing for these key stakeholders and has been unable to produce this information. Better internal control mechanisms, such as creating and reviewing agencywide reports using PTRS data periodically, would improve FAA oversight by providing assurance that all required inspections were completed. Additionally, FAA could establish standard procedures for inspectors at the local level to enter the completed inspection areas into PTRS, to avoid uncertainty about whether inspections were completed as required. This would not only help FAA better measure its performance in meeting the annual NPG inspection requirements, but would also enable the agency to make more informed decisions about the scope of its discretionary, planned work activities for flight instructors.
Further, the steps taken by FAA to develop a quarterly assessment report on the oversight and performance of pilot examiners are promising, but FAA should also consider developing a similar process for oversight of flight instructors because it could identify potential areas of concern. We are making two recommendations to improve FAA's oversight of pilot certification and training. The Secretary of Transportation should direct the Administrator of the Federal Aviation Administration to develop a comprehensive system that may include modifying or improving existing data systems to measure performance for meeting the annual National Program Guidelines' inspection requirements for pilot schools with a Part 141 certificate and pilot examiners and better understand the nature and scope of the discretionary, planned inspections for flight instructors. We provided a draft of this report to DOT for review and comment. In responding to our recommendations, FAA officials said that they agreed that improvements in oversight data were needed but indicated that the quarterly assessment reports already measure the level of oversight of pilot examiners and summary data are being provided to the national office and regional division managers. Additionally, the designee management system currently under development will address the recommendation for pilot examiner designees in a more permanent way. We retained the recommendation; our report notes the oversight improvements underway for pilot examiners, and we will assess the effectiveness of the designee management system once it is implemented. FAA also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies of this report to the Secretary of Transportation and appropriate congressional committees. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or by e-mail at dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To address our objectives and the pilot supply appendix, we reviewed and synthesized literature on pilot training and certification in the United States. Specifically, we reviewed a range of reports from GAO, the Federal Aviation Administration (FAA), the Congressional Research Service, the International Civil Aviation Organization, the National Transportation Safety Board, and the Bureau of Labor Statistics that included general background information on a variety of related issues on U.S. pilot training, such as pilot certification and training issues in the U.S.; historical trends, current supply and demand, and forecasts for commercial airline pilots; types and requirements of pilot training schools; FAA regulatory training requirements for different levels of pilot certification; FAA oversight of the U.S. pilot training system; and comparable international pilot training systems. We conducted a literature review search from databases, such as ProQuest, TRIS, and Lexis, as well as trade publications and literature from aviation stakeholders.
Furthermore, we reviewed the Federal Aviation Regulations related to training and certification for pilots under Part 61, Part 141, and Part 142. We also reviewed FAA regulations, policy, and oversight documentation related to pilot training, management guidance for overseeing pilot schools, designated pilot examiners (pilot examiner), and certified flight instructors (flight instructor). Additionally, we reviewed FAA's national program guidelines for fiscal years 2006 through 2010 regarding the required and planned oversight activities as well as FAA's practical test standards used to certificate pilots as private, commercial, and airline transport. We also reviewed provisions of the Airline Safety and Federal Aviation Administration Extension Act of 2010 (Pub. L. No. 111-216) related to "Flight Crewmember Screening and Qualifications" and "Airline Transport Pilot Certification." To review FAA's inspection and enforcement activities related to pilot schools, pilot examiners, and flight instructors, we obtained FAA's inspection and enforcement policies and analyzed raw data from FAA's inspection and enforcement databases. We analyzed data from the Program Tracking Reporting System (PTRS) for inspections that closed (had a closing date) in fiscal years 2006 through 2010 and data from the Enforcement Information System (EIS) for enforcement actions with a date of final action in those fiscal years. To assess the reliability of the inspection and enforcement data that we received from FAA, we performed electronic testing of the data elements that we used, obtained and reviewed documentation about the data and the systems that produced them, and interviewed knowledgeable FAA officials. We used these data to determine the extent to which FAA had completed all required inspections of pilot schools and pilot examiners, and planned inspections of flight instructors. For pilot schools with a Part 141 certificate, we analyzed data from PTRS on the numbers of required inspections that FAA completed, and whether all five inspection areas were covered, and compared them to the requirements set in FAA's National Program Guidelines (NPG). The NPG for each fiscal year indicated that an inspection was required for each school within each region and was to include, under operations: (1) an air agency facility inspection (PTRS activity number 1640) and (2) student records (PTRS activity number 1649). The NPG for each fiscal year also indicated that an inspection was required for each school within each region and was to include, under airworthiness: (1) the pilot school facility (PTRS activity number 3650), (2) airworthiness directive compliance (PTRS activity number 3667 or 5667), and (3) a Part 141 ramp check (PTRS activity number 3664 or 5664). To determine the nature and scope of the enforcement actions that FAA closed against pilot schools with a Part 141 certificate, we analyzed data on these actions from EIS, including whether the actions were administrative, fines, or suspensions or revocations of schools' Part 141 operating certificates. We also analyzed data to determine the minimum, median, and maximum dollar amounts of fines and durations of suspensions. As described above, we tested the reliability of the PTRS and EIS data and found the data to be sufficiently reliable for our purposes.
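Because the NPG activity numbers for the five required inspection areas are enumerated above, the completeness check we describe amounts to verifying that each school's fiscal year records contain at least one activity number for every area. The short Python sketch below illustrates that grouping logic; the school identifiers and record layout are hypothetical and are shown only to make the step concrete, not to represent FAA's actual PTRS extract format.

from collections import defaultdict

# The five required inspection areas and the PTRS activity numbers that satisfy each,
# as listed in the NPG description above.
REQUIRED_AREAS = {
    "air agency facility (operations)":      {"1640"},
    "student records (operations)":          {"1649"},
    "pilot school facility (airworthiness)": {"3650"},
    "airworthiness directive compliance":    {"3667", "5667"},
    "Part 141 ramp check":                   {"3664", "5664"},
}

# Hypothetical extract of closed fiscal year 2010 inspection records: (school_id, activity_number).
records = [
    ("SCHOOL-A", "1640"), ("SCHOOL-A", "1649"), ("SCHOOL-A", "3650"),
    ("SCHOOL-A", "5667"), ("SCHOOL-A", "3664"),            # all five areas covered
    ("SCHOOL-B", "1640"), ("SCHOOL-B", "3650"),            # only two areas covered
]

activities_by_school = defaultdict(set)
for school, activity in records:
    activities_by_school[school].add(activity)

# A school counts as fully inspected only if every required area is matched by
# at least one recorded activity number.
for school, activities in sorted(activities_by_school.items()):
    missing = [area for area, codes in REQUIRED_AREAS.items() if not (codes & activities)]
    status = "all five areas covered" if not missing else "missing: " + ", ".join(missing)
    print(f"{school}: {status}")

Counting the schools with no missing areas and dividing by the number of active Part 141 schools would yield a completion percentage comparable to the roughly 78 percent figure reported for fiscal year 2010.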
For pilot examiners, we analyzed data from PTRS on the numbers of required inspections that FAA completed and compared them to the requirements set in the NPG. The NPG for each fiscal year indicated that an inspection was required for each pilot examiner within each region under: pilot examiner—large-turbojet (PTRS activity number 1664) and pilot examiner—other (PTRS activity number 1665). We also obtained and reviewed summary data from FAA contained in quarterly assessment reports on the oversight of its designated representatives, including pilot examiners. The quarterly reports contained data from July 2009 through March 2011. For flight instructors, PTRS activity number 1662 is used to record certified flight instructor (CFI) inspections in PTRS. The NPG for fiscal years 2009 through 2010 did not indicate inspections for certificated flight instructors as a planned work activity. To determine the nature and scope of the enforcement actions that FAA closed against flight instructors, we analyzed data on these actions from EIS, including whether the actions were administrative, fines, or suspensions or revocations of flight instructors' certificates. We also analyzed data to determine the minimum, median, and maximum dollar amounts of fines and durations of suspensions. We interviewed government officials at the Federal Aviation Administration and National Transportation Safety Board. We conducted semistructured interviews with representatives from a range of aviation stakeholder organizations (see below). We also interviewed researchers involved in the pilot source study. We visited pilot training stakeholders in six states included in FAA regions that had higher numbers of pilot schools and higher numbers of pilot certificates issued in 2009, while taking into consideration the presence of FAA regional and district offices and regional airlines' offices in some locations. Thus, in our visits to Arizona, Florida, Georgia, Indiana, Maryland, and Utah, we interviewed officials at FAA regional and district offices, regional airlines, pilot schools, and college aviation schools. However, because we selected these six states as part of a nonprobability sample, our findings cannot be generalized to all pilot training stakeholders in the United States. Through the combination of site visits and semistructured telephone interviews, we interviewed representatives of 24 regional airlines that transported about 97 percent of regional passengers in 2009, according to the Regional Airline Association's 2010 annual report. (See table 5.) In addition, we conducted site visits to France, Germany, the Netherlands, and the United Kingdom. We focused on these European countries' pilot certification and training requirements because they offer a different model than the United States. The site visits allowed us to obtain information on European countries' pilot standards, as well as their efforts to revise their training requirements from traditional training objectives and methodology to competency-based training models. During these site visits, we interviewed officials at the European Aviation Safety Agency (Europe's aviation regulator), civil aviation authority officials, representatives from international and European aviation stakeholder groups, representatives from airlines, and flight training organizations (schools).
However, because these four countries were selected as part of a nonprobability sample, the findings from our interviews cannot be generalized to all European countries. (See table 6.) We conducted this performance audit from March 2010 through November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For several years now, the issue of whether the United States will maintain an adequate supply of qualified pilots has emerged in literature, media sources, and aviation industry discussions, but the scope of the supply of and demand for U.S. airline pilots is unknown and difficult to determine. The number of pilot certificates held in the United States has been declining and is expected to continue to decline in the near future. Demand, which has historically fluctuated, is projected to significantly increase over the next 20 years. Nevertheless, the International Civil Aviation Organization (ICAO) is predicting a surplus of pilots in North America, based, in part, on the regional capacity to provide pilot training, the level of the current and projected pilot population, and the slow growth rate of the aircraft fleet requiring additional pilots to operate it. However, certain factors could affect that projected surplus, including the 2007 legislation extending the mandatory retirement age for airline pilots and the sharp curtailment of military pilots as a hiring resource. Particularly given the decrease in the number of military pilots available to be hired by airlines, the number of students who enroll in and complete training in pilot schools will also affect the supply of commercial pilots, as the availability of pilots for entry-level regional airline jobs can be directly linked to the number of pilot certificates issued. Student enrollment in pilot schools is reported to be declining and the dropout rate for completing pilot training is high. Furthermore, other factors present potential challenges to the pilot training industry, such as available financial sources to fund pilot training and the impact of the 2010 legislation requiring additional pilot certification requirements for airline first officers. The safety and economic contribution of the air transportation system relies not only on having well-trained airline pilots but also on having enough of them to meet demand. The demand for commercial pilots is a function of the size of the commercial airline fleet and the number of pilots needed to operate that fleet. Demand for professional pilots in the United States has historically fluctuated, driven by a number of factors including consumer demand for air travel and the general state of the economy, fuel prices, regulatory changes, and aircraft fleet changes. For example, since 2008, the economic recession has significantly affected the U.S. airline industry. Several airlines have filed for bankruptcy and ceased operations, while other airlines have merged. It is still unknown when and to what extent the U.S. airline industry will fully recover, but demand for air travel is highly cyclical and largely reflective of the state of the economy, and is expected to increase significantly over the next 20 years.
The total number of pilot certificates held in the United States has been fluctuating downward over the last 10 years. The number of active student, private, commercial, and airline transport pilot certificates held in the United States decreased from 608,079 in 2000 to 554,237 in 2009, or 9 percent. The Aircraft Owners and Pilots Association (AOPA) predicts that, by 2014, the number of pilots in the United States will decline to 500,000. In late 2010, Boeing projected that the commercial aviation industry in North America will require over 97,000 new pilots over the next 20 years (over 466,000 new pilots globally) to accommodate the strong demand for new and replacement aircraft, but noted that emerging markets in Asia, especially in China, will experience the greatest need for pilots. As demand for air travel continues to increase internationally, the demand for qualified pilots will likely continue to grow in many parts of the world, particularly in Asia and the Middle East, and may attract qualified commercial pilots away from the U.S. market. For instance, Boeing forecasts that China will need over 70,000 new pilots by 2029 for new commercial aircraft. In order to deal with the lack of available pilots today, some foreign airlines have started training their pilots through established flight training organizations or sending their students to pilot schools in English-speaking countries, such as the United States. However, if industry projections are realized and commercial airlines, both mainline and regional, need large numbers of pilots, it is uncertain whether an adequate supply of qualified pilots will be available. However, despite these projections, ICAO is reporting a likely surplus of pilots in North America. ICAO noted in its 2011 report on the 20-year global and regional forecast on civil aviation personnel that the United States, with close to 270,000 professional pilots (i.e., commercial or airline transport), accounts for approximately 58 percent of the global pilot population. Furthermore, the United States has the training capacity to produce about 27,000 professional pilots annually. Based on the projected annual need for new pilots and considering several factors, ICAO determined that the United States will likely have a surplus of pilots under all of the scenarios it considered for the North American region (i.e., the United States and Canada). This may seem surprising considering the general perception that a pilot shortage in the United States is inevitable, but ICAO supported its projection with the following reasons. First, the annual training capacity in the United States represents 60 percent of the worldwide training capacity; thus, the ICAO calculations indicate that North America has more than enough training capacity to train the number of new pilots needed in this region. Second, fleet growth rates—i.e., the number of aircraft needing pilots—for the North American region, in which the United States is the dominant player, are relatively low. Third, the current number of licensed professional pilots in the United States represents 58 percent of the global pilot population. Although these factors do point to a surplus, other economic and demographic factors that can be difficult to determine could affect the projected surplus. The temporary impact of recent legislation could affect the availability of commercial pilots in the future, and other measures may be required to address related long-term issues.
When recent concerns arose about a potential pilot shortage, the Fair Treatment for Experienced Pilots Act (age 65 law) was enacted in 2007 and allowed pilots to fly domestic routes until age 65 instead of the prior mandatory retirement age of 60. For international flights, one pilot may be up to age 65 provided the other pilot is under age 60, consistent with the November 2006 ICAO standard. This effort attempted to ensure there would not be a large group of highly skilled pilots retiring at the same time, but according to ICAO data, pending age-related retirements will begin to reemerge in 2012. The 2011 ICAO report also indicated that approximately 85,000 professional pilots (31 percent of the total pilot population) are aged 55 or more; among this age group, 34,000 (13 percent of the total pilot population) will be eligible for retirement by 2015. Another 34,000 will be eligible for retirement by 2020. The report also asserted that it is likely, considering the age structure of pilots in North America, that retirements will take place at an accelerating rate, thereby contributing to a potential increase in future pilot demand. However, not all of the pilots holding commercial or airline transport pilot certificates in the United States will automatically be hired by airlines, due to performance and skill deficiencies. Therefore, despite a projected surplus of professional pilots, a potentially limited supply of pilots that airlines determine to be qualified to hire could still affect the air transportation industry in the United States. In addition, one of the traditional, preferred sources of airline pilots has been severely curtailed, which will likely continue to affect the availability of pilots in the future. U.S. commercial airlines are no longer hiring as many pilots from the U.S. military as they did in the past; the military has historically provided airlines with a steady supply of highly qualified pilots. According to literature sources, until the 1990s, roughly 90 percent of the pilots hired by mainline airlines came from the U.S. military, with the remainder coming from civilian aviation. In 2008, about 30 percent of pilots hired by commercial airlines had military training backgrounds. Moreover, the military is training fewer pilots and retaining more of them with better pay and other financial incentives. In addition, many commercial airline pilots from the Vietnam era are reaching retirement age. Particularly given the decrease in military pilots as a source of potential civilian pilots, the number of students who enroll in and graduate from pilot schools also affects the supply of pilots, and could affect the ICAO-projected surplus of pilots for the North American region, specifically in the United States. The availability of pilots for entry-level regional airline jobs can be directly linked to the number of student pilot starts and the number of pilot certificates issued. In the past few years, the Federal Aviation Administration (FAA) has revised its forecast to reflect an uncertain number of student pilots. In 2008, FAA predicted 100,200 student pilots in the year 2025, but lowered that estimate in its 2009 through 2025 forecast report to 86,600. However, in FAA's 2011 through 2031 forecast report, FAA predicted the number of student pilots increasing to 113,500 in 2025 (120,600 in 2031), due to a rule change by FAA that makes student pilot certificates valid longer for pilots under the age of 40.
Also, from 1999 through 2009, the number of private certificates issued decreased about 26 percent, while the number of commercial certificates issued increased slightly, by about 10 percent. Representatives of the pilot training industry that we interviewed told us that the overall trend reflects a lower number of domestic students starting pilot training, a trend which continued into 2011. Representatives from a few pilot schools told us that pilot schools across the United States experienced large declines in enrollments in 2010. For example, the University of Illinois decided to eliminate its 65-year-old aviation school after a decade of declining enrollment. The collegiate aviation school produced fewer degrees and served fewer students than any other program on the campus during the previous school year, and no aviation students had been accepted for the coming semester. The decline in student pilot certificates and the student dropout rate are concerns to AOPA. According to AOPA, the number of current student pilot certificates is the best indicator of the supply of future pilots. Representatives of several pilot schools with whom we spoke told us that they were becoming more and more dependent on foreign students to maintain their pilot training operations. FAA officials and representatives of AOPA, four pilot schools, and several European organizations we interviewed said that some European and Asian airlines and pilot training organizations send students to the United States for pilot training to take advantage of the relatively inexpensive fuel and the year-round weather conditions for training in states such as Florida and Arizona. In addition, the dropout rate of student pilots could affect the supply of pilots. According to a 2010 AOPA study, almost 80 percent of student pilots drop out of training for four key reasons: (1) lack of educational quality, (2) lack of customer focus (not a good value or pricing not competitive), (3) insufficient sense of community (lacks an atmosphere that makes students feel welcome in the aviation community), and (4) lack of information sharing (school does not provide a realistic estimate of time and costs required for a pilot certificate and statistics on student success rates at schools). The study indicated that while the cost of training was a factor, it was less significant than these four reasons people did not complete training—even though some literature suggests otherwise, as discussed below. As a result of this information regarding the dropout rate, AOPA has initiated efforts, in close coordination with the pilot training industry, to work on solutions to stop the outflow of students and to increase the pilot population. AOPA has begun an initiative focused on student retention in pilot training that will consist of a series of regional meetings across the United States to collect perspectives and industry input on potential improvements that can be made in providing pilot training. AOPA has met with representatives from the aviation community, including pilots, student pilots, aviation businesses, pilot schools, and flight instructors who are currently involved in conducting pilot training. In addition, the association has launched a Let’s Go Flying Web site (www.aopa.org/letsgoflying) and created various publications to inspire more people to become interested in learning to fly by providing an introduction to and information about becoming a pilot. 
Aviation stakeholders from 4 of the 9 pilot schools, two industry associations, and one industry organization we interviewed told us that one of the most important challenges for maintaining an adequate supply of students for pilot schools is the availability of financial support for pilot training. Pilot training costs vary among pilot schools. According to AOPA and other sources, the cost of training from the beginning through a commercial pilot certificate and a multiengine rating could be in excess of $40,000 but varies across the country. According to AOPA, pilot training can cost about $100,000 or more at most collegiate aviation schools for a 4-year degree and the flight training provided. The University Aviation Association (UAA) indicated that there is no single comprehensive information source regarding the cost of pilot training across the United States, but AOPA officials told us costs vary between $40,000 and $100,000 for the training needed to obtain a commercial pilot certificate and a multiengine rating. Furthermore, as previously stated, pilots would likely be responsible for any increased costs if additional training requirements were developed by FAA. These costs are high compared to the low starting salaries. According to a 2009 survey by the Regional Airline Association (RAA) of member airlines based on 2008 salary data, salaries generally averaged between $28,000 and $43,000 for regional airline first officers and $62,000 to $102,000 for regional airline captains. RAA estimated that salaries have increased 2 to 5 percent since then, but it does not have more current data. Some stakeholders we interviewed said that these realities could potentially prevent prospective students from enrolling in a pilot school or reduce the desire to become a pilot. However, it is important to note that the previously mentioned AOPA study reported that training costs were not a statistically significant reason individuals dropped out of training. A range of tuition resources is available, but some of these sources are drying up. To pay for pilot training, AOPA officials said that students typically use personal funds, personal credit (credit cards and personal loans), scholarships, grants, parent or student loans, other educational loans, and the Veterans Administration’s (VA) Montgomery GI Bill benefits. For example, AOPA offers loans for pilot training up to $25,000 and allows flexible funding options for students since AOPA does not limit the use of funds to certain types of schools or training. One benefit of collegiate schools and some vocational schools is that they are generally eligible to receive VA benefits and federal financial aid (such as federal education loans or grants). However, flight instructor-based schools do not generally qualify for federal financial aid or VA benefits. VA benefits allow a qualified student to be reimbursed for up to 60 percent of ground and pilot training costs, up to the maximum allowable limit, but do not fund training for a private pilot certificate. Many financial institutions have provided financing for pilot training through educational loan programs, such as the Federal Family Education Loan program, and personal loans. 
Literature we reviewed and representatives of four pilot schools and three industry associations we interviewed indicated that many private banks have been tightening restrictions on financing available to potential new pilot students, and others have left the pilot training loan market. The National Association of Flight Instructors reported that Sallie Mae took a $1 billion loss in 2009 for educational loans, which explains, in part, why it has become increasingly difficult to obtain funding. We reported in November 2009 that many lenders offering student loans have exited the market in response to limited access to capital resulting from the credit crisis. Thus, lenders have begun to give pilot training a higher risk profile than in the past and have been slowly exiting this loan market. Representatives of the regional airlines we interviewed and stakeholder associations have voiced a significant level of concern regarding the legislative mandate to increase the number of flight hours needed for first officers to be hired by commercial airlines. As discussed earlier in the report, the Airline Safety and Federal Aviation Administration Extension Act requires that FAA develop regulations requiring all airline pilots, including first officers, to hold an airline transport pilot certificate—the highest level of pilot certification, which requires the highest number of total flight hours—instead of the commercial pilot certificate that is required today. Aviation stakeholders have voiced significant concerns that requiring first officers for regional airlines to possess an airline transport pilot certificate will likely result in the inability to fill some positions due to the lack of qualified pilots. The overall decreasing trend in pilot certificates issued is illustrated in table 4, which shows the general decline in the number of airline transport pilot certificates issued from 1999 through 2009 by pilot schools operating under Part 61, Part 141, and Part 142 regulations. The number of airline transport certificates issued decreased about 60 percent. Aviation stakeholders such as AOPA and RAA have also voiced concerns that increasing pilot certification requirements to become an airline pilot could significantly discourage potential pilots from entering aviation due to the increased time and expense required to meet the new hiring minimums of airlines. For example, regional airline officials we interviewed said the new requirement will create a gap in experience for new pilots seeking entry-level airline jobs that could take several years to fill. Furthermore, AOPA said the cost of obtaining the additional 1,250 flight hours needed to meet the total time for an airline transport pilot certificate could deter many new pilots and lead them to pursue other professional careers, because the requirement increases the number of flight hours needed to obtain employment as an airline pilot. Additionally, the 2011 ICAO report mentioned earlier stated that even though the United States has sufficient training capacity, the requirement that all commercial airlines’ first officers have an airline transport pilot certificate could drastically limit the availability of first officers to support existing delivery schedules for new aircraft at many airlines. The majority of officials representing the regional airlines we interviewed also indicated that the proposed FAA rules to revise pilot duty and rest requirements will potentially create a need for more pilots. 
Provisions in the Airline Safety and Federal Aviation Administration Extension Act of 2010 directed FAA to issue a regulation to specify limitations on the hours of pilot flight and duty time to address problems relating to pilot fatigue. NTSB has long been concerned about the possible safety effects of fatigue in the aviation industry—specifically among pilots—and has issued several safety recommendations to FAA after identifying fatigue as a contributing factor in several aviation accidents. FAA identified the issue of pilot fatigue as a top priority following the Colgan Air crash, and the NTSB accident report stated the pilots’ performance was likely impaired because of fatigue, but the degree to which it contributed could not be conclusively determined. The proposed changes to the duty and rest requirements for pilots will likely mean airlines need more pilots to comply with the new rules. Regional airline officials said the industry will likely see a greater demand for these additional pilots over the next few years, but this will depend on each airline’s staffing ratios for its aircraft and flight operations. However, the general response was that the rule changes would require more qualified pilots in order to maintain the same level of service. In addition to the contact named above, the following individuals made important contributions to this report: Keith Cunningham, Assistant Director; Richard Brown; Owen Bruce; Vashun Cole; Cindy Gilbert; Brian Hackney; Bob Homan; David Hooper; Amber Keyser; Nitin Rao; Amy Rosewarne; and Michael Silver.

Regional airlines have experienced the last six fatal commercial airline accidents, and pilot performance has been cited as a potential contributory factor in four of these accidents. As a result, Congress and others have raised questions about, among other issues, the initial pilot education and training required before pilots can be hired by airlines, at which time they receive further training. The initial training is provided by pilot schools overseen by the Federal Aviation Administration (FAA). As requested, this report discusses (1) the various types of U.S. pilot schools, how they compare, and associated issues; (2) key similarities and differences between the U.S. and international approaches to pilot training; and (3) how and to what extent FAA carries out oversight of pilot training and certification. To address these issues, GAO reviewed literature, legislation, regulations, and FAA documents and inspection and enforcement data; interviewed agency and industry officials; and studied the training approach in Europe because of the different training model and visited four European countries. The approximately 3,400 pilot schools in the United States can be divided into three types: (1) flight instructor based, (2) vocational, and (3) collegiate. The school types vary in several ways, but all pilot students must pass the same knowledge and flight tests to obtain a pilot certificate from FAA. Airline operations have evolved operationally and technologically, but the pilot training requirements for certification of commercial pilots were last revised in 1997. FAA and some industry stakeholders have indicated that current requirements for commercial pilots should incorporate additional training to improve the competency of entry-level regional airline pilots. FAA has initiated or planned a number of efforts to address these issues, and recently enacted legislation requires FAA to implement regulations to increase pilot requirements for airlines by August 2013. 
The U.S. and Europe both offer the same pilot certifications, but the training models differ, in part, due to training philosophies and other circumstances. The U.S. training approach emphasizes building proficiency through actual flight training, while Europe's approach tends to emphasize academic instruction with more knowledge training requirements and testing. European pilot schools have also developed more comprehensive student screening processes than schools in the U.S. FAA has an annual inspection program that includes the oversight of pilot schools, pilot examiners, and flight instructors, the gatekeepers for the initial pilot training process. GAO analysis of FAA inspection data showed a 78 percent completion rate of the required inspections for pilot schools in fiscal year 2010, but, due to insufficient information, GAO was unable to determine completion percentages for prior years. Similarly, GAO could not determine (1) whether FAA completed the required inspections for pilot examiners or (2) the reasons that the discretionary inspections of flight instructors, which are generally optional, were conducted. Furthermore, FAA's national office does not adequately monitor the completion of annual inspection activities due, in part, to an inability to aggregate inspection data from the local district offices that conduct the inspections. Thus, FAA does not have a comprehensive system in place to adequately measure its performance in meeting annual inspection requirements, which could make it difficult to ensure regulatory compliance and that safety standards are being met. 
As established by the Civil Service Reform Act of 1978, federal law generally prohibits retaliation against federal government employees or applicants for employment for reporting wrongdoing, or whistleblowing. Under these provisions, most federal employees pursue whistleblower retaliation complaints with the Office of Special Counsel (OSC) and the Merit Systems Protection Board (MSPB). However, the FBI, as well as other intelligence agencies, is excluded from this process. Instead, the Attorney General is required to establish regulations to ensure that FBI employees are protected against retaliation for reporting wrongdoing, consistent with certain statutory processes of OSC and MSPB. Since the Civil Service Reform Act of 1978 was enacted, numerous amendments have been made to the provisions governing most executive branch whistleblowers, but corresponding amendments have generally not been made to the statutory provision governing FBI employees. Provisions providing recourse for employees of intelligence community elements who are retaliated against for making disclosures of protected information were established by Presidential Policy Directive 19 in 2012, and in statute in 2014. To implement the statute governing FBI whistleblower protections, DOJ issued regulations in 1998 to protect FBI whistleblowers from retaliation for reporting alleged wrongdoing and established the process for handling FBI whistleblower retaliation complaints. The regulations prohibit DOJ employees from taking or failing to take (or threatening to take or fail to take) a personnel action with respect to any FBI employee as a reprisal for a protected disclosure (i.e., retaliation). The regulations also define what disclosures by FBI employees qualify as protected disclosures, entitling the employees to recourse should they experience retaliation. Specifically, the regulations (28 C.F.R. pt. 27) state that disclosures are protected if the complainants (1) reasonably believe that they are reporting wrongdoing, defined as a violation of any law, rule, or regulation; mismanagement; a gross waste of funds; an abuse of authority; or a substantial and specific danger to public health or safety; and (2) report the alleged wrongdoing to one of nine designated officials or offices (e.g., the Attorney General, the Deputy Attorney General (DAG), and the DOJ Office of the Inspector General (OIG)). If an FBI employee’s disclosure does not meet both of these criteria, then the disclosure is not protected and the employee does not have a right to recourse if he or she experiences retaliation as a result. That is, for example, if the person reports wrongdoing to a nondesignated entity and then experiences retaliation, the person will not be eligible for corrective action for that retaliation. Further, once the employee has reported to a nondesignated entity and experienced retaliation as a result, the employee cannot subsequently report the alleged wrongdoing to a designated entity and obtain corrective action for the retaliation that has already taken place. The regulations lay out DOJ’s process for handling FBI whistleblower retaliation complaints and describe various offices’ responsibilities for investigating, adjudicating, and reviewing appeals related to these complaints. See figure 1. Investigation: OIG and the DOJ Office of Professional Responsibility (DOJ-OPR) are responsible for receiving and investigating FBI whistleblower retaliation complaints to determine whether there are reasonable grounds to believe that a retaliatory act has been or will be taken (the “reasonable grounds” determination). 
The office that investigates the complaint (referred to as the investigating office) first reviews the complaint to determine whether it meets threshold regulatory requirements. When a complaint does not meet threshold regulatory requirements, DOJ’s decision to terminate the complaint is based not on whether a reprisal was taken because of a disclosure, but on whether the allegations met those threshold requirements. For example, the investigating office may determine that the complaint does not meet threshold regulatory requirements because the complainant did not make his or her underlying disclosure to one of the nine entities designated in the regulations, or because the alleged retaliatory personnel action occurred before the complainant made a protected disclosure and therefore could not have been caused by the protected disclosure. If the complaint does not meet threshold regulatory requirements, then the investigating office closes the complaint. However, if the investigating office determines that the complaint meets threshold regulatory requirements, then the office investigates the merits of the complaint by, for example, conducting interviews and requesting and reviewing documentation, such as employee statements and records from the FBI. At the conclusion of an investigation, if OIG or DOJ-OPR finds that there are reasonable grounds, it then forwards its investigative report with any recommended actions to the Office of Attorney Recruitment and Management (OARM) for adjudication. In cases in which OIG or DOJ-OPR has not found in the complainant’s favor or has not completed its investigation, the complainant may go directly to OARM to request corrective action. Adjudication: OARM is responsible for adjudicating FBI whistleblower retaliation cases. OARM receives these cases from OIG or DOJ-OPR, where either office has determined there are reasonable grounds to believe that there has been or will be reprisal for a protected disclosure, or else directly from the complainant. As with the investigating offices, when OARM receives the complaint, it first determines whether the complaint meets threshold regulatory requirements before proceeding to review the merits of the complaint. For OARM, considering the merits of the complaint entails reviewing the supporting evidence (e.g., documents and testimony), as well as the arguments each party—the complainant and the FBI—submits, and then determining, based on all of the evidence, whether the individual substantiated the claim of retaliation. If the complaint is substantiated and the FBI is unable to prove by clear and convincing evidence that it would have taken the same personnel action even if the complainant had not made the protected disclosure, OARM will order that the FBI take corrective action, such as providing the complainant back pay or reimbursement for attorney’s fees. Appeals: DOJ’s DAG is responsible for reviewing and ruling on parties’ appeals of OARM decisions. Once OARM rules on a case, the parties have 30 days to file an appeal with the DAG. The DAG has the authority to set aside or modify OARM’s decisions when they are found to be arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law; obtained without procedures required by law, rule, or regulation having been followed; or unsupported by substantial evidence. The DAG has full discretion to review and modify the corrective action ordered. DOJ’s regulations also set forth timeliness and reporting requirements for the investigating offices. 
Specifically, the investigating offices must (1) provide written notice to the complainant acknowledging receipt of the complaint within 15 calendar days of either investigating office receiving it; (2) update the complainant on the status of the investigation within 90 days of the written acknowledgment, and continue providing such updates every 60 days thereafter; and (3) determine within 240 calendar days of receiving the complaint whether there are reasonable grounds to believe that there has been or will be a reprisal for a protected disclosure, unless the complainant agrees to an extension. Additionally, if OIG or DOJ-OPR decides to terminate an investigation, the office must provide a written status report to the complainant at least 10 business days prior to the office’s final termination report. The final report must summarize the relevant facts of the case, provide reasons for terminating the investigation, and respond to any comments the complainant submits in response to the above-mentioned status report. DOJ closed the majority of the 62 complaints we reviewed within 1 year, generally because the complaints did not meet DOJ’s threshold regulatory requirements. The most common reason these complaints did not meet DOJ’s threshold regulatory requirements was that the complainants made their disclosures to individuals or offices not designated in the regulations. Further, FBI whistleblowers may not be aware that they must report an allegation of wrongdoing to certain designated officials to qualify as a protected disclosure, in part because information DOJ has provided to its employees has not consistently explained to whom an employee must report protected disclosures. The 4 complaints we reviewed that met DOJ’s threshold regulatory requirements and that OARM ultimately adjudicated on the merits lasted from 2 to just over 10.6 years from the initial filing of the complaints with OIG or DOJ-OPR to the final OARM or DAG ruling. In some cases, parties have waited a year or more for a DOJ decision without information on when they might receive it. Figure 2 shows the duration and outcome of all 62 complaints we reviewed. DOJ closed 44 of the 62 complaints (71 percent) that we reviewed within 1 year, most often because the complaint did not meet DOJ’s threshold regulatory requirements. Specifically, for 40 of these 44 cases (91 percent), DOJ found that the complaint did not meet threshold regulatory requirements. In 15 of the 32 (47 percent) complaints closed within a year where documentation in the case files was sufficient for us to determine why DOJ determined threshold requirements were not met, the fact that the complainant made a disclosure to the wrong person—someone not designated in the regulations to receive whistleblower complaints—was at least a partial basis for DOJ deciding the complaint did not meet threshold regulatory requirements. In at least 12 of these 15 instances, the complainant reported the alleged wrongdoing to someone in management or within the complainant’s chain of command, such as the complainant’s supervisor, who was not one of the nine designated entities. For all 54 complaints we reviewed where documentation in the case files was sufficient for us to determine a specific reason DOJ closed the complaint, regardless of how long DOJ took to close the complaint, 23 (43 percent) had at least one claim dismissed because the complainant made his or her disclosure to an official or entity not designated in the regulations. 
Of these, in at least 17 cases, we were able to determine that a disclosure was made to someone in the employee’s chain of command or management. See appendix II for a summary of DOJ’s final determinations in all cases we reviewed. Unlike employees of other executive branch agencies—including intelligence agencies—FBI employees do not have a process to seek corrective action if they experience retaliation based on a disclosure of wrongdoing to their supervisors or others in their chain of command who are not designated officials. In 1978, federal law excluded the FBI, as well as other intelligence agencies, from the prohibited personnel practices system in place for employees of other executive branch agencies, in part because of the sensitive nature of these agencies’ operations and the information they handle. Instead, the law required the Attorney General to develop regulations to ensure that FBI employees are not retaliated against for disclosures of wrongdoing. When issuing its interim and final regulations in 1998 and 1999, respectively, DOJ considered which individuals and offices the Attorney General would designate to receive protected disclosures from FBI employees. DOJ officials who developed these regulations included eight designated entities but did not include supervisors at that time because the officials maintained that Congress intended DOJ to limit the universe of recipients of protected disclosures, in part because of the sensitive information to which FBI employees have access. In its final rule, DOJ responded to commenter suggestions to add additional entities to receive such disclosures—including FBI-INSD, supervisors, and coworkers. Among other things, DOJ stated its view that Congress contemplated that recipients for whistleblower disclosures would be a relatively restricted group and that “to designate a large (and in the case of supervisors, arguably ill-defined) group of employees as recipients would be inconsistent with Congress’s decision, given the sensitivity of information to which FBI employees have access.” In addition, DOJ’s rule explained that “designating the highest ranking official in each field office, but not all supervisors, as recipients of protected disclosures . . . provides a way to channel such disclosures to those in the field who are in a position to respond and to correct management and other problems while also providing an on-site contact in the field for making protected disclosures.” In October 2012, the President issued Presidential Policy Directive 19, which established whistleblower protections for employees serving in the intelligence community, including, among other things, explicitly providing protection to employees who are retaliated against for reporting wrongdoing “to a supervisor in the employee’s direct chain of command up to and including the head of the employing agency.” Presidential Policy Directive 19 excluded the FBI from the scope of these protections and instead required DOJ to report to the President on the efficacy of its regulations pertaining to FBI whistleblower retaliation and describe any proposed revisions to these regulations to increase their effectiveness. In response to this requirement, officials in the Office of the Deputy Attorney General (ODAG) led an effort to review FBI whistleblower retaliation complaints filed from January 1, 2005, through March 15, 2014, and, consistent with our review, found that DOJ had terminated a significant portion of complaints because they were not made to the proper individual or office. 
In addition, DOJ officials met with whistleblower advocates and OSC officials to solicit their views and found that these individuals and officials recommended that DOJ broaden its regulations to protect disclosures to any supervisor in the employee’s chain of command. According to DOJ’s April 2014 report in response to Presidential Policy Directive 19, the whistleblower advocates noted that the directive instructs intelligence community elements to protect disclosures to any supervisor in the employee’s direct chain of command and that this is consistent with whistleblower protection laws that similarly protect other civil service employees. Further, DOJ’s report notes that OSC officials believe that to deny employees protection unless their disclosure is made to high-ranking supervisors in the office would undermine a central purpose of whistleblower protection laws. In response to Presidential Policy Directive 19, DOJ officials led by ODAG revisited their 1999 regulations and in April 2014 recommended expanding the persons to whom individuals can make protected disclosures to include—in addition to the highest-ranking official in FBI field offices, who is already included—the second highest ranking tier of officials in these field offices, which includes the two or three assistant special agents in charge in 53 field offices and the special agents in charge in the 3 largest field offices. Senior DOJ officials told us that DOJ leadership has approved this change and the agency is beginning the public notice and comment process; however, as of December 2014, DOJ had not issued any notice of proposed rulemaking or publicly moved forward on these stated plans. DOJ officials reported that they plan to evaluate the impact of this expansion, and they may choose subsequently to further expand the set of persons to whom an employee can make a protected disclosure, if DOJ determines that such expansion is warranted. However, as of December 2014, senior FBI and ODAG officials reported that they do not have an estimated date or specific plans for this evaluation and could not provide specifics on how this evaluation would be conducted. DOJ officials gave us several explanations about why DOJ did not recommend expanding the list to include supervisors and others in the employee’s chain of command, a change that would bring the FBI into line with other executive branch agencies. First, in DOJ’s April 2014 report, DOJ officials state that “the Department believes the set of persons to whom a protected disclosure can be made is extensive and diverse, and has seen no indication that the list has impeded disclosures of wrongdoing.” However, when we asked officials how they arrived at this conclusion—particularly in light of our and DOJ’s previous findings that numerous complainants had at least one claim dismissed for making a disclosure to someone in management or their chain of command—they could not provide supporting evidence or analysis for their conclusions. Rather, these officials cited concerns about striking the right balance between the benefits of an expanded list and the level of resources the department would have to expend assessing more complaints if the department added more designated officials, and the potential impact of these additional complaints on the timeliness of the process. 
While DOJ’s focus on the timeliness of complaint processing is important, dismissing retaliation complaints made to an employee’s supervisor or someone in his or her chain of command who is not a designated entity leaves some FBI whistleblowers with no recourse if they experience retaliation. We found at least 17 whistleblowers whose cases were dismissed—at least in part—for making a disclosure of wrongdoing to someone in their chain of command or management. Our findings are similar to those of the ODAG-led review, in which the department found that in a “significant portion” of OIG cases the claim was closed because it was not made to a proper individual or office under the regulations. This means that these employees had no recourse for retaliation they may have experienced for making those disclosures. Moreover, with respect to DOJ’s concerns about resources and timeliness, DOJ has discretion in determining its regulatory process for enforcing protections for FBI whistleblowers and, as described in more detail later in this report, is taking other steps to improve the timeliness of the process. Senior FBI and ODAG officials also explained that the department plans to provide FBI employees with additional training on the list of entities designated to receive whistleblower complaints. While training could help provide information on how to make a protected disclosure, this planned training would have little effect for employees who initially raise a concern to their supervisors not expecting that this action would ever be a whistleblower disclosure. All seven of the whistleblower advocates and attorneys we interviewed who had relevant personal and professional experience stated that it is common practice for employees to report wrongdoing to their supervisors before reporting it to a more senior official, such as those designated in DOJ’s regulations. Further, two advocates we met with stressed that very few people intend to become whistleblowers. Rather, it is typical for employees who become aware of a problem to report it to their supervisors, expecting to resolve the issue at that level. In one FBI whistleblower case file we reviewed, the complainant wrote that “there is a practice in the FBI that a person is to go through his or her chain of command first.” Further, senior FBI officials we spoke with emphasized that FBI policy encourages employees to report allegations of wrongdoing to a broader group of entities than those designated in regulation as recipients of protected disclosures—including any supervisor in the chain of command of the person reporting. Last, senior FBI and ODAG officials noted that the statute establishing whistleblower protections for FBI employees differs from the statute governing protections for other federal employees, so there is no legal requirement that DOJ designate supervisors or others in an employee’s chain of command to receive protected disclosures. The separate statutory provision for the FBI has existed since enactment of the Civil Service Reform Act of 1978 but has generally not been revisited by Congress when passing amendments to legislation governing other executive branch whistleblowers. Over the years, Congress has passed amendments to the legislation covering employees in other executive branch agencies that explicitly strengthen and expand protections for other federal whistleblowers. For example, Congress added language clarifying that disclosures to supervisors who participated in the misconduct are protected disclosures. 
The Whistleblower Protection Act of 1989 provides, among other things, that employees should not suffer adverse consequences as a result of prohibited personnel practices. The Senate report accompanying the Whistleblower Protection Enhancement Act of 2012 explained that, with regard to whistleblower retaliation matters, the focus should not be on whether or not disclosures of wrongdoing were protected, but rather on whether the personnel action at issue in the case occurred because of the protected disclosure. However, changes to laws affecting other executive branch whistleblowers did not automatically extend to the FBI since the law governing FBI employees was in a separate provision of the original legislation. DOJ’s current regulations and its recommended changes deny FBI employees protection provided to employees of other executive branch agencies—including those in the intelligence community. Thus, DOJ risks dismissing, and potentially not addressing, instances of actual retaliation against individuals who reported their disclosure to their supervisors, or another entity not designated in the regulations. Dismissing these whistleblower retaliation complaints could deny whistleblowers access to recourse, could permit retaliatory activity to go uninvestigated, and may have a chilling effect on other potential whistleblowers. In the course of our review, in addition to several DOJ and FBI guidance documents that accurately describe DOJ’s FBI whistleblower regulations, we also found instances of DOJ guidance that could lead FBI employees to believe that reporting an allegation of wrongdoing to a supervisor in their chain of command would be a protected disclosure when that is not the case. First, FBI’s guidance—the FBI Domestic Investigations and Operations Guide, specifically—states that, in general, the FBI requires employees to report known or suspected failures to adhere to the law, rules, or regulations to any supervisor in the employees’ chain of command, or others, but does not clarify that such disclosures are protected only if reported to certain designated individuals or offices. Second, an April 2014 memo from the DAG to all DOJ employees—including FBI employees—encouraged employees to watch a video on whistleblower rights and protections and stated that employees may report waste, fraud, or abuse within the department to supervisors within their offices or the OIG, or outside the department to OSC. The memo did not clarify that FBI employees who report such allegations to their supervisors or OSC may not have the right to pursue corrective action should they experience retaliation for their disclosure. Senior ODAG officials acknowledged that, if taken in isolation, this memo could cause some confusion for FBI employees but stressed that FBI employees should already be familiar with the FBI-specific policy from FBI-offered training and resources. However, we reviewed the two trainings FBI officials cited as educating FBI employees on the procedures to follow when making a whistleblower complaint, and neither training mentions DOJ’s regulations related to FBI whistleblower retaliation or the specific steps FBI employees need to take to ensure their disclosures are protected. OIG and FBI officials report that they are currently developing a training video that will address FBI-specific issues and will be required for all FBI employees. 
This planned training could improve employee awareness of the FBI-specific procedures, but such an effort could be undercut if unclear written policies and communications continue to be provided to FBI employees. Standards for Internal Control in the Federal Government provides that agencies should distribute pertinent information so employees may efficiently carry out their duties. Without clear information on the process for making a protected disclosure, including the individuals to whom a claimant can make a protected disclosure, FBI whistleblowers may not be aware that, depending on how they report their allegation, they may not be able to seek corrective action should they experience retaliation. OARM adjudicated the merits of 4 of the 62 complaints we reviewed (6 percent), and these 4 cases lasted from 2 to just over 10.6 years, from the initial filing of the complaints with OIG or DOJ-OPR to the final OARM or ODAG ruling. In 3 of these 4 cases, DOJ ultimately ruled in favor of the whistleblower. As shown in figure 3, these 3 cases lasted from just over 8 to 10.6 years. In the fourth case, DOJ ruled in favor of the FBI, and this case lasted approximately 2 years. In the last 3 years, and in light of the Presidential Policy Directive 19 requirement that DOJ assess the efficacy of its current process, DOJ officials have identified some opportunities to improve their timeliness in resolving whistleblower retaliation complaints and have taken some steps to do so. However, DOJ officials have limited plans to assess the impacts of these actions. Specifically, OARM has developed a mediation program, hired an additional staff person, and developed procedures with stricter time frames, while DOJ-OPR and OIG have taken steps to streamline their intake procedures. DOJ leadership is also considering taking steps to revise DOJ’s regulations to streamline OARM’s process upon receiving a new complaint. Developing a mediation program: In the spring of 2014, OARM launched an alternative dispute resolution program that will provide complainants with the option to pursue mediation with the FBI at any point from initial filing of the complaint to appeal. OARM officials anticipate that this option will help to expedite processing of some complaints that can be more quickly resolved through mediation and permit DOJ to focus limited resources on the remaining cases. As of October 1, 2014, two complainants had pursued mediation, but, according to OARM officials, because these cases are pending, it is too soon to analyze the impact of the mediation program. Hiring additional staff: To reduce the impact of competing priorities for limited staff, OARM senior officials stated that in November 2013 they hired a part-time attorney to help write OARM decisions in FBI whistleblower retaliation cases. OARM officials report that they have been able to reduce overall case-processing times, in good part because of the work of the part-time attorney. Developing procedures with stricter time frames: Senior OARM officials report that in June 2011 they met with an MSPB administrative judge and an MSPB senior executive to gather ideas for shortening the time frames in OARM’s cases. These officials further report that, in response to the input from MSPB, in October of that same year OARM issued procedures that included stricter time frames for the complainant and the FBI, such as shortening the period of time OARM initially provides for parties to gather evidence. 
In addition, OARM officials report that around this same time, they revised their practice of generally approving parties’ requests for extensions. The OARM officials report that they began reviewing requests for an extension more critically and often do not approve the full length of the extension requested. Streamlining intake procedures: Senior OIG officials reported that they recognized they could improve their timeliness in processing initial complaints and have since taken steps to ensure that complaints are transmitted for initial review within 1 to 2 days of receipt, if possible. DOJ-OPR officials report that in the last 2 years, they have established a new intake procedure so that an intake attorney handles the initial notice to the whistleblower instead of waiting until the complaint is assigned to an investigator. Streamlining OARM’s process: DOJ’s April 2014 report to the President included a recommendation intended to expedite OARM’s process upon receiving new complaints. DOJ’s report states: “Under OARM’s current process, when a complainant files a request for corrective action with OARM, OARM usually forwards it to the FBI and provides the FBI 25 calendar days to file its response. In some instances, however, the allegations in a complainant’s request are so deficient that neither OARM nor the FBI can reasonably construe the specific claims raised.” Under the recommended revised procedures, where it appears that a complaint may not meet DOJ’s threshold regulatory requirements, OARM would give the complainant a very short time period to clarify why the case should not be dismissed. DOJ officials state that this could allow for quick resolution of cases that plainly fail to meet the threshold regulatory requirements and increase the efficiency of case adjudication. As DOJ implements the changes detailed above, which are intended to improve the efficiency of its handling of FBI whistleblower retaliation complaints, assessing their impact would help DOJ officials ensure that the changes are in fact shortening total case length without sacrificing quality and identify any additional opportunities to improve efficiency. OARM officials report that given the length of these cases, it is too early to assess whether the efforts implemented thus far are having the desired impact on the timeliness of OARM’s adjudication process, but they explained that in the future, they could use their case docket to determine impact. For example, they could review the number of cases resolved through mediation and whether the revised procedures from 2011 have made a difference in the time needed to adjudicate large cases. OARM’s stated plan to monitor the impact is a good first step by one of the relevant offices, but assessing the impact on timeliness and quality throughout the entire investigation, adjudication, and appeal process to determine the impact on total complaint-processing time will require a joint effort among OIG, DOJ-OPR, OARM, and ODAG. In DOJ’s April 2014 report to the President, DOJ stated plans to evaluate the impact of two policy changes to increase the effectiveness of DOJ’s regulations, but stated no such plans for the policy changes intended to improve DOJ’s timeliness in handling these complaints. Standards for Internal Control in the Federal Government calls for agencies to compare actual performance with planned or expected results and analyze significant differences. 
Without assessing the impact of its policy changes on the complete process, DOJ will not be in a position to gauge progress in fulfilling DOJ’s commitment to improving its efficiency in handling these complaints and correct course, if needed. Without assessment, it will be difficult for DOJ to know whether its various efforts to improve timeliness are working as intended. OIG and DOJ-OPR have not consistently provided complainants with status updates or obtained the complainant’s approval for an extension when the investigator reviewing the complaint needed more time, as stipulated under agency regulations. In the last 2 years, OIG developed a database to increase management oversight of investigators’ compliance with requirements to provide updates and obtain the complainants’ approval for extensions, but DOJ-OPR does not have a similar mechanism in place. In addition, OIG did not inform complainants of its intent before closing complaints it declined to investigate and did not consistently explain the basis for its decisions to complainants, but plans to begin doing so. OIG and DOJ-OPR have not consistently provided complainants with periodic status updates, nor have they always obtained complainants’ approvals for extensions when the investigator reviewing the complaint needed more time, as required under DOJ’s FBI whistleblower regulations. Specifically, in 65 percent of the complaints we reviewed (37 of 57), the investigating office did not meet the regulatory requirement to contact the complainant to acknowledge that the office had received the complaint within 15 days of the date either OIG or DOJ-OPR received the complaint. In particular, OIG did not meet the requirement in 20 of 36 complaints (56 percent) and DOJ-OPR did not meet the requirement in 17 of 21 complaints (81 percent). See appendix III for more detail on the number and percentage of complaints in which OIG and DOJ-OPR met each reporting requirement. With regard to the next requirement—the first status update, due within 90 days of the written acknowledgment—we saw evidence in the case files for the majority of complaints we reviewed (27 of 37, or 73 percent) that OIG and DOJ-OPR provided the first status update within the 90-day time frame; however, both offices were less consistent about meeting the time frames for subsequent status updates, which are required at least every 60 days. In 20 of 27 complaints we reviewed (74 percent)—including 8 of 12 OIG complaints and 12 of 15 DOJ-OPR complaints—we saw at least one period of more than 60 days during which the case file did not contain evidence that the investigating office had communicated with the complainant. In 8 of these 20 complaints, we identified only one 60-day period in which the case file did not contain evidence of communication with the complainant. However, in the other 12 complaints, we identified more than one 60-day period in which the case file did not demonstrate that the investigating office had communicated with the complainant. We considered a complaint to have met the 240-day requirement if the investigating office provided the complainant a final termination report or otherwise closed the complaint within 240 days from the date the office received the complaint. In 1 DOJ-OPR complaint we reviewed, the complainant initially provided an incorrect address and DOJ-OPR sent both a proposed and final termination report to the incorrect address within 240 days, but closed the complaint after more than 240 days because of the time needed to obtain the correct address. 
We excluded that complaint from our analysis with regard to this requirement. The investigating offices closed the majority of complaints within 240 days (40 of 57, or 70 percent). However, the case files for over half (10 of 17) of the complaints that exceeded 240 days—including 6 of 7 OIG complaints and 4 of 10 DOJ-OPR complaints—did not contain documentation that the complainant had agreed to an extension. The regulatory requirements help ensure that both complainants and the investigating office receive information necessary to make decisions regarding the complaint. For example, the requirement to send notice to the complainant within 15 days acknowledging that the office has received the complaint ensures that the complainant is aware of whom to contact within OIG or DOJ-OPR if he or she has questions or additional information to provide regarding the complaint. Further, three of the eight whistleblower advocates and attorneys we spoke with stated that regular communication between investigators and complainants ensures that complainants provide the investigating office with follow-up information that the office needs to make a timely and appropriate decision. In addition, as previously discussed, the regulations provide complainants the right to bring their complaints directly to OARM after 120 days if they have not received notice that the investigating office will seek corrective action. Two of the whistleblower advocates we spoke with said that it is generally beneficial to the complainant to wait for OIG or DOJ-OPR to complete their investigations so that these offices can obtain a complete factual record, which is helpful if the complainant pursues his or her case with OARM. However, according to these whistleblower advocates, if the complainant is not satisfied with the investigating office’s progress, the complainant may prefer to go directly to OARM. The regulatory requirements to provide periodic status updates and to obtain the complainant’s approval for an extension when investigations are running long help ensure complainants have the information they need to make this decision. More broadly, regular status updates provide reassurance to complainants during the investigative process. Four of the eight whistleblower advocates and attorneys we spoke with said that regular status updates reassure complainants that the investigating office is continuing to make progress on their complaints. Further, six of the attorneys and advocates said that, without regular status updates, complainants can become discouraged and develop a negative view of the process. Five of these attorneys and advocates said that, as a result of these negative experiences, potential whistleblowers may be less likely to come forward to report wrongdoing. At the time the case files we reviewed were open, OIG and DOJ-OPR did not have oversight mechanisms in place to ensure compliance with the status update and extension requirements. According to senior OIG officials and a DOJ-OPR official responsible for managing these complaints, managers regularly discussed individual complaints with the investigator assigned to the complaint, but the investigator was responsible for setting due dates to ensure compliance with the regulations. The OIG and DOJ-OPR officials we spoke with said that their investigators were frequently in communication with complainants, but these communications were not always documented within their case files. Without documentation of these communications, managers could not verify that investigators had communicated with complainants, as required. 
In addition, senior OIG officials and the DOJ-OPR official said that they maintained information on the dates whistleblower retaliation complaints were opened and closed within their case management systems; however, these systems were not specific to whistleblower retaliation complaints and did not contain dates of interim communications. As a result, managers could not use these systems to oversee investigators’ compliance with requirements to provide status updates within prescribed time frames or obtain the complainant’s approval for an extension, if required. OIG has taken steps to begin tracking compliance with these requirements; however, DOJ-OPR has not yet taken similar action. Specifically, in July 2014, during the course of our review, an OIG manager informed staff responsible for these complaints of the importance of documenting status updates within case files to ensure documentation of OIG’s compliance with regulatory requirements to update complainants within prescribed time frames. Further, over the last 2 years, OIG has developed a database it now uses as a management tool to oversee investigators’ compliance with requirements for communicating with complainants. According to senior OIG officials we spoke with, OIG decided to develop this database to help ensure that OIG meets its regulatory requirements. OIG managers use the database to track dates of interim communications, such as status updates, and the database calculates regulatory deadlines for subsequent updates and for closing the complaint. In addition, according to senior OIG officials, managers can use the database to run reports, such as to see upcoming deadlines for all open complaints. Although it is too soon to tell how effective this database will be, if used consistently, this database could help OIG managers ensure investigators communicate with complainants in accordance with regulatory requirements. According to a DOJ-OPR official responsible for managing these complaints, DOJ-OPR could place an even greater emphasis on the deadlines for these complaints and take additional steps to oversee communications with complainants. This official stated that DOJ-OPR investigators may lose track of deadlines for status updates in FBI whistleblower retaliation cases because similar requirements are not in place for other cases DOJ-OPR typically handles. Further, as discussed previously, in many of the case files we reviewed we did not see evidence of communication between the DOJ-OPR investigator and the complainant within required time frames. For example, in one case file we reviewed, the complainant listed numerous attempts to contact DOJ-OPR over the prior year and expressed frustration at not receiving the required status updates. According to senior DOJ-OPR officials, DOJ-OPR has taken some steps to improve its management of whistleblower retaliation cases, but does not track investigators’ compliance with specific regulatory requirements and does not have a formal oversight mechanism to do so. In the last year and a half, DOJ-OPR managers have started to receive weekly reports with information on all open complaints, according to a DOJ-OPR official responsible for managing these complaints. However, the official said that the reports do not contain information on status updates. 
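The regulatory time frames that such tracking tools need to monitor reduce to straightforward date arithmetic. The short Python sketch below illustrates one way the deadlines described earlier (a 15-day acknowledgment, a first status update within 90 days of the acknowledgment, further updates every 60 days, and a 240-day determination) could be computed for a single complaint. It is a minimal illustration only; the function and field names are hypothetical and are not drawn from OIG's database or DOJ-OPR's reporting systems.

from datetime import date, timedelta
from typing import Optional

# Hypothetical illustration of the deadline arithmetic in DOJ's FBI whistleblower
# regulations as described above: acknowledge receipt within 15 calendar days,
# provide a first status update within 90 days of the acknowledgment, provide
# further updates every 60 days, and reach a "reasonable grounds" determination
# within 240 calendar days unless the complainant agrees to an extension.
# Function and field names are illustrative, not drawn from DOJ systems.

def regulatory_deadlines(received: date, acknowledged: Optional[date] = None) -> dict:
    """Return key regulatory deadlines for a single retaliation complaint."""
    deadlines = {
        "acknowledge_receipt_by": received + timedelta(days=15),
        "determination_by": received + timedelta(days=240),
    }
    if acknowledged is not None:
        deadlines["first_status_update_by"] = acknowledged + timedelta(days=90)
    return deadlines

def next_update_due(last_update: date) -> date:
    """Each subsequent status update is due within 60 days of the previous one."""
    return last_update + timedelta(days=60)

# Example: a complaint received January 2, 2014, and acknowledged January 10, 2014.
example = regulatory_deadlines(date(2014, 1, 2), acknowledged=date(2014, 1, 10))
# -> acknowledge_receipt_by 2014-01-17, determination_by 2014-08-30,
#    first_status_update_by 2014-04-10

A tool built around logic of this kind could flag the next deadline due for each open complaint, which is the type of oversight information OIG's database provides and DOJ-OPR's weekly reports currently lack.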
A senior DOJ-OPR official reported that DOJ-OPR is in the initial stages of upgrading its case management system, and DOJ-OPR officials expect that the new system could eventually be tailored to allow them to capture additional information on the office’s handling of FBI whistleblower retaliation complaints, such as the dates of communications between investigators and complainants. Standards for Internal Control in the Federal Government calls for agencies to conduct ongoing monitoring in the course of normal operations, such as when investigating whistleblower retaliation complaints, to help managers ensure compliance with applicable regulations and achieve desired results. DOJ-OPR has begun taking steps to upgrade its case management system but is very early in this process. As DOJ-OPR upgrades its case management system, tailoring the system to capture data specific to FBI whistleblower retaliation complaints, or developing some other mechanism, could provide DOJ-OPR managers and investigators information necessary to track compliance with regulatory requirements. Further, using that information to conduct ongoing monitoring of DOJ-OPR attorneys’ compliance with regulatory requirements could help DOJ-OPR ensure complainants receive the periodic updates that they are entitled to and that they need to determine next steps for their complaints. OIG has not informed complainants before closing complaints it declines to investigate and has not always communicated the reasons for its decision not to investigate because, according to senior OIG officials, OIG does not view the regulations as requiring it to do so. Specifically, these officials said that the regulations state that the office must provide the complainant with a written statement that indicates the office’s intention to close the complaint when the investigating office decides to terminate an investigation. As a result, according to these officials, this provision does not apply if OIG declines the complaint before initiating an investigation. Similarly, these officials said that OIG does not view the requirement to send a final termination report including a summary of relevant facts and the reasons for terminating an investigation as applying to complaints OIG declines to investigate. Unlike in OIG’s process, a DOJ-OPR official responsible for managing these complaints said that DOJ-OPR provides a draft report to the complainant when DOJ-OPR decides to close a complaint, including when DOJ-OPR makes this decision without initiating an investigation. We found that OIG provided a proposed termination report including the factual findings and conclusions that justified terminating the investigation before OIG finalized its decision to close the complaint in 8 of the 9 complaints OIG investigated. In addition, we found that OIG sent the complainant a final termination report when OIG terminated most of these investigations (7 of 8). Further, OIG generally included information required under the regulations, such as a summary of relevant facts, OIG’s reasons for terminating the investigation, and a response to the complainant’s comments, in these final termination reports. We found that OIG did not send a proposed termination report in any of the 27 complaints OIG declined to investigate, in accordance with OIG’s interpretation of the regulations. 
In addition, although OIG sent a final termination report in most of the complaints (25 of 27, or 93 percent) OIG declined to investigate, OIG did not always include the reasons for its decision in the report. Specifically, we found that in 15 of the 24 final termination reports (63 percent) we reviewed for complaints OIG declined to investigate, OIG did not clearly explain the reasons for this decision. Seven of these 15 reports indicated that OIG found that the complaint did not meet threshold regulatory requirements under the FBI whistleblower regulations, but the reports did not communicate why. For example, in one instance, OIG's report to the complainant explained the general finding that the allegations, even if accepted as true, did not demonstrate a personnel action in retaliation for a protected disclosure. Information we reviewed elsewhere in this case file specified that OIG found that the complainant had not made the underlying disclosure to a designated entity under the regulations. However, OIG did not include this information in its final report to the complainant. In the 8 other complaints, OIG's final report to the complainant stated that another office should review the complaint, such as the FBI Inspections Division, but did not indicate the reason for this decision. In particular, these reports did not indicate that OIG had considered the complaint as a whistleblower retaliation matter and determined that the complaint did not meet threshold regulatory requirements for OIG to conduct an investigation. In contrast, DOJ-OPR generally provided complainants proposed termination reports before closing their complaints and included required information in its final termination reports, including in complaints DOJ-OPR closed without conducting an investigation. Specifically, we found that DOJ-OPR sent a proposed termination report in 17 of 19 complaints (89 percent) that DOJ-OPR terminated, and included the office's findings and conclusions that justified terminating the investigation in all 17 of these reports. In addition, we found that DOJ-OPR sent a final termination report in all 19 complaints and included relevant facts and the reasons for terminating the investigation in all 19 of these reports. Further, in all 9 complaints in which the complainant provided comments on the proposed termination report, DOJ-OPR responded to the complainant's comments in the final termination report. Providing the complainant a proposed termination report describing the investigating office's findings and conclusions ensures that the complainant is aware of the office's rationale for the decision and has an opportunity to provide additional information or written comments before the office closes the complaint. According to two senior OSC officials we spoke with about their process for reviewing whistleblower retaliation complaints for most federal employees, OSC provides the complainant a letter when OSC intends to close a complaint that does not meet threshold requirements without conducting an investigation. In some instances, according to these officials, the complainant's response to OSC's proposed termination report has caused OSC to reconsider its initial decision to terminate the complaint. As with OIG and DOJ-OPR, if OSC intends to terminate a whistleblower retaliation investigation, OSC is required to provide the complainant a written statement including the facts and OSC's conclusions and to give the complainant an opportunity to comment.
As previously discussed, OIG and DOJ-OPR are required to provide for the enforcement of whistleblower protection in a manner consistent with certain OSC processes. In addition, the requirement to provide specific information in the office's final report to the complainant, including the basis for the office's decision to close the complaint, helps ensure that complainants have the information they need to make decisions about their complaints. As discussed previously, the regulations provide complainants the option of bringing their complaints to OARM after the investigating office has notified them that it has closed the complaint. Without information on the reasons for OIG's decision to decline to investigate, complainants may not have sufficient information to determine if they would like to continue to pursue their complaints through OARM. Further, the regulations require complainants to bring their complaints to OARM within 60 days of receiving notification from the investigating office. If complainants need to request additional information from OIG, such as the rationale for OIG's decision, they may not have sufficient time to bring their complaints to OARM. Officials with both OIG and OARM told us that OIG's decision not to investigate a complaint is sufficient for the complainant to have met the requirement to bring the complaint to an investigating office—either OIG or DOJ-OPR—before filing it with OARM. OIG officials also described to us actions that OIG plans to take going forward to address these issues. We believe that, if implemented effectively, these planned actions will help OIG ensure that all complainants have an opportunity to provide additional information or written comments before OIG closes their complaints and that complainants will receive the information they need to make decisions about their complaints. Whistleblowers play an important role in safeguarding the federal government against waste, fraud, and abuse, but they often risk retaliation from their employers as a result of their actions. DOJ has established a process by which FBI whistleblowers can seek recourse should they experience such retaliation, and DOJ generally has the discretion to revise this process, as needed. We found that DOJ has terminated many FBI whistleblower complaints based on complainants' failure to meet threshold regulatory requirements rather than on a determination of whether retaliation occurred. In particular, FBI employees are protected if they report wrongdoing to certain high-level FBI or DOJ officials and other specified entities, and—unlike employees of other executive branch agencies—are not protected if they report wrongdoing to their supervisors. DOJ officials have stated plans to partially address this by adding several more senior officials in FBI field offices to the list of individuals to whom complainants may report protected disclosures, but the timing and outcome of this stated plan are uncertain. DOJ officials said they do not plan to expand the list to include supervisors or others in an employee's chain of command in part because of their concerns about the additional resources that would be needed to handle a possible increase in complaints and the potential effect on the timeliness of DOJ's process to handle these complaints. While DOJ officials' concern about timeliness is important, they are already taking other steps to improve the efficiency of this process.
More importantly, dismissing retaliation complaints based on disclosures made to an employee's supervisor or someone in that person's chain of command leaves some FBI whistleblowers with no recourse if they allege retaliation, as our review of case files demonstrated. Training that DOJ officials plan to provide to FBI employees could help inform employees how to make a protected disclosure; however, this planned training will not address the fact that some employees report alleged wrongdoing first to their supervisors or others in their chain of command without ever expecting that this will lead to retaliation and a whistleblower claim. As a result, congressional consideration of whether the purposes of 5 U.S.C. § 2303, which prohibits a personnel action taken against an FBI employee as a reprisal for a protected disclosure, are being met—in particular, whether FBI employees should, like employees of other executive branch agencies, have a means to obtain corrective action for retaliation for disclosures of wrongdoing made to supervisors and others in the employees' chain of command—could help ensure that DOJ's process for handling these complaints is consistent with congressional action to strengthen and expand protections for other federal whistleblowers. Further, it is important that, regardless of what changes DOJ may make to the list of entities designated to receive protected disclosures, information DOJ and the FBI provide to FBI employees on the process for making a protected disclosure is clear and consistent so that FBI employees who consult such guidance make decisions based on accurate information. In some instances—particularly where OARM ordered corrective action in favor of the complainant—the process for resolving these complaints has taken many years, and DOJ has stated a commitment to improving its efficiency in handling these cases. Committing to specific time frames for returning DOJ decisions on the outcomes of FBI whistleblower retaliation cases could help DOJ achieve its commitment to improving efficiency in handling these complaints. Additionally, assessing the impacts of DOJ actions to improve timeliness could help ensure that these actions are achieving the intended results. Finally, establishing an oversight mechanism to monitor DOJ-OPR investigators' compliance with regulatory reporting requirements—either by tailoring DOJ-OPR's case management system or by another means—can assist DOJ in ensuring that complainants receive timely information they need to make informed decisions regarding their complaints, such as whether or not to seek corrective action from OARM. To ensure that the purposes of 5 U.S.C. § 2303—which prohibits a personnel action taken against an FBI employee as a reprisal for a protected disclosure—are met, Congress may wish to consider whether FBI employees should have a means to obtain corrective action for retaliation for disclosures of wrongdoing made to supervisors and others in the employee's chain of command who are not already designated officials. We recommend the following four actions.
To better ensure that FBI whistleblowers have access to recourse under DOJ's regulations should they experience retaliation, and to minimize the possibility of discouraging future potential whistleblowers, we recommend that the Attorney General clarify in all current relevant DOJ guidance and communications, including FBI guidance and communications, to whom FBI employees may make protected disclosures and, further, explicitly state that employees will not have access to recourse if they experience retaliation for reporting alleged wrongdoing to someone not designated in DOJ's regulations. To better ensure that DOJ is fulfilling its commitment to improving efficiency in handling these complaints, we recommend the following to the heads of the relevant offices: OARM and ODAG should provide parties with an estimated time frame for returning each decision, including decisions on whether the complaint meets threshold regulatory requirements, decisions on the merits, and decisions on appeals. If the time frame shifts, OARM and ODAG should timely communicate a revised estimate to the parties. DOJ-OPR, OIG, OARM, and ODAG should jointly assess the impact of ongoing and planned efforts to reduce the duration of FBI whistleblower retaliation complaints throughout the entire investigation, adjudication, and appeal process to ensure that these changes are in fact shortening total complaint length, without sacrificing quality. To ensure that complainants receive the periodic updates that they are entitled to and need to determine next steps for their complaint, such as whether or not to seek corrective action from OARM, we recommend that the Counsel, DOJ-OPR, tailor its new case management system or otherwise develop an oversight mechanism to capture information on the office's compliance with regulatory requirements and, further, use that information to monitor and identify opportunities to improve DOJ-OPR's compliance with regulatory requirements. We provided a draft of this report to DOJ and OIG for review and comment. On January 16, 2015, an official with DOJ's Justice Management Division sent us an email stating that the department concurred with our recommendations. DOJ also provided technical comments, which we incorporated, as appropriate. In its technical comments, DOJ stated a commitment to monitoring the implementation of its April 2014 recommendations to ensure that FBI employees are not unfairly excluded from whistleblower protection because they had disclosed information to their immediate supervisor. DOJ also reported that DOJ-OPR is taking steps, such as developing a report template and upgrading its case management system, which, when completed, could help the agency begin systematically tracking investigators' compliance with regulatory reporting requirements. These initial steps position the agency to satisfy our recommendation that DOJ-OPR tailor its new case management system or otherwise develop an oversight mechanism to capture information on the office's compliance with regulatory reporting requirements. In written comments provided by OIG (reproduced in app. IV), the Inspector General concurred with our recommendation to OIG and provided technical comments, which we incorporated, as appropriate. In its comment letter, OIG stated that OIG has consistently supported and continues to support broadening the list of persons to whom protected disclosures can be made.
Further, with regard to guidance provided to FBI employees, OIG stated that it fully supports providing clear and comprehensive guidance as to all aspects of whistleblower rights and protections. To this end, OIG's letter stated that the office is working with the FBI to create a specialized training program that highlights the specific requirements and procedures for FBI whistleblowers and is working on enhancements to OIG's website to include additional information specific to FBI employees. The OIG letter also raised several additional issues. First, OIG's letter stated that, with regard to the total duration of Jane Turner's complaint, for example, the GAO draft does not distinguish between the responsibilities of OIG and the department. We appreciate the differing roles and responsibilities of each office and describe these in our report. In reporting our findings, we clearly distinguish between the separate offices' time frames and records of compliance with certain regulatory requirements. However, it is also important to consider the total length of cases, which is of particular concern to the whistleblowers. Second, the OIG letter mentioned that GAO's analysis excluded more recent complaints. Given the sensitive nature of open cases, we reviewed only complaints closed as of December 31, 2013. Third, the OIG letter commented that the GAO report failed to fully acknowledge the high priority and personal attention OIG senior staff give to FBI whistleblower retaliation matters. We disagree. Our report explains that the Inspector General personally reviews each complaint, but also recognizes that competing priorities for this high level of attention have resulted in delays. Fourth, OIG's letter noted that in many instances OIG has relied on telephone contact with complainants to meet regulatory notification requirements and that, because such contacts were not consistently documented, we would not always have identified them in our case file review. In our review of both DOJ and OIG case files, we noted all evidence of contact with the complainants, including evidence of written and oral communication, but it is correct that we would not have identified undocumented contact with complainants. As OIG acknowledged in its letter, it is important that evidence of contact be documented in case files to demonstrate compliance with the regulations. As discussed in our report, OIG has taken steps to address this. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Attorney General, the DOJ Inspector General, appropriate congressional committees, and other interested Members of Congress. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
This appendix discusses in detail our methodology for addressing the following three objectives: determining how long the Department of Justice (DOJ) has taken to resolve Federal Bureau of Investigation (FBI) whistleblower retaliation complaints and what factors have affected these time frames; determining the extent to which DOJ has taken steps to resolve complaints more quickly and to determine the impact of any such efforts; and determining the extent to which DOJ's Office of the Inspector General (OIG) and Office of Professional Responsibility (DOJ-OPR) have complied with regulatory reporting requirements. To determine how long DOJ has taken to resolve FBI whistleblower retaliation complaints and the factors that affected these time frames, we reviewed DOJ case files for all FBI whistleblower retaliation complaints closed within the last 5 calendar years (from 2009 through 2013). Specifically, we reviewed the case files for a total of 62 closed whistleblower retaliation complaints to calculate the duration of each complaint from initial filing to DOJ's final decision, including, for example, the length of time from initial filing to the investigating office's final decision; the length of time from filing a request for corrective action with the Office of Attorney Recruitment and Management (OARM) to OARM's decision; and the length of the appeals process. We did this by creating a data collection instrument to identify the key characteristics of whistleblower retaliation cases, determine the completeness of the files, and assess time frames for each case in accordance with DOJ's regulations. We also gathered information on the outcome of each complaint and factors that could affect timeliness, such as the length and frequency of parties' requests for extensions of time. In addition, to better understand DOJ's process for handling these complaints, we reviewed relevant documentation, including DOJ's whistleblower regulations and internal guidance on the process for making a protected disclosure. To obtain DOJ officials' perspectives on DOJ's process, time frames for handling these complaints, and factors affecting these time frames, we also interviewed senior agency officials from offices responsible for investigating—OIG and DOJ-OPR—or adjudicating—OARM and the Office of the Deputy Attorney General (ODAG)—FBI whistleblower retaliation complaints. We compared aspects of DOJ's process against standards in Standards for Internal Control in the Federal Government to identify the extent to which DOJ's process was in alignment with these standards. Because of the sensitivity of FBI whistleblowers' identities, to obtain whistleblower perspectives about DOJ's process and time frames, we met with representatives of whistleblower advocacy groups knowledgeable about DOJ's process and attorneys who have represented FBI whistleblowers through this process. Specifically, we identified and interviewed representatives of five whistleblower advocacy groups using an iterative process often referred to as snowball sampling. At each interview, we solicited names of additional groups to interview and selected for interviews those that were most widely recognized as knowledgeable about DOJ's process. We also interviewed attorneys who had represented FBI whistleblowers in three of the five cases in which complainants alleged retaliation and obtained corrective action. These attorneys discussed their experience with DOJ's process and factors affecting the length of their cases.
We analyzed the results of all of these interviews to distill themes and patterns. The information we gathered from these groups and attorneys—referred to throughout our report collectively as eight whistleblower advocates and attorneys—is not generalizable, but provides perspectives on whistleblowers’ experiences with DOJ’s process. To determine the extent to which DOJ has taken steps to resolve complaints more quickly, we interviewed senior DOJ officials in each of the four offices responsible for investigating or adjudicating whistleblower retaliation complaints—OIG, DOJ-OPR, OARM, and ODAG. We asked about the factors that affect the timely processing of these complaints and any efforts to address them. In addition, to identify any practices that have improved timeliness in comparable federal settings, we interviewed senior officials in the Department of Defense’s Office of the Inspector General as well as the U.S. Office of Special Counsel (OSC) and the U.S. Merit Systems Protection Board (MSPB)—federal agencies that handle whistleblower retaliation complaints for other federal employees—about those agencies’ processes for handling whistleblower retaliation complaints. To identify the extent to which DOJ officials have taken steps to determine the impact of their efforts to improve timeliness, we interviewed DOJ officials and reviewed DOJ’s April 2014 report to the President and compared DOJ’s stated plans with standards in Standards for Internal Control in the Federal Government. To determine the extent to which OIG and DOJ-OPR have complied with regulatory reporting requirements, we compared evidence we saw in DOJ’s case files with DOJ’s regulations and analyzed the extent of any discrepancies. Specifically, for each case file, we reviewed OIG’s and DOJ-OPR’s documented communications with the complainants, including initial and ongoing outreach, and recorded the dates of all communications in our data collection instrument. We calculated the length of time between all documented communications to determine the number of complaints in which OIG and DOJ-OPR complied with the deadlines for reporting requirements in DOJ’s regulations. In addition, we reviewed the content of the investigating office’s final notice to the complainant that the office had closed its investigation or declined to open an investigation, as applicable, as well as the content of any interim notices stating the office’s decision. We compared the content of these communications with DOJ’s regulatory requirements. We also reviewed documentation and interviewed OIG and DOJ-OPR officials responsible for handling these complaints about any oversight mechanisms to ensure compliance with regulatory requirements. For example, we reviewed an electronic copy of an OIG spreadsheet for tracking regulatory deadlines for these complaints. We then compared these mechanisms against standards in Standards for Internal Control in the Federal Government to determine the extent to which OIG and DOJ-OPR met the relevant standards related to oversight. Further, we interviewed eight whistleblower advocates and attorneys, as noted above, to obtain whistleblower perspectives on the extent of DOJ’s compliance with regulatory requirements and the effects of this compliance. 
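Our data collection instrument captured dates rather than computing results automatically, but the compliance arithmetic described above (elapsed time between documented communications measured against the regulatory limits) can be illustrated with a short sketch. The record layout, field names, and dates below are hypothetical, and the sketch does not model extensions agreed to by the complainant.

```python
from datetime import date
from typing import Dict, List

# Regulatory limits, in days, as described in this report.
LIMITS = {"acknowledgment": 15, "first_update": 90,
          "subsequent_update": 60, "determination": 240}

def check_compliance(received: date, acknowledged: date,
                     updates: List[date], closed: date) -> Dict[str, bool]:
    """Flag whether each documented interval met the regulatory limit."""
    results = {
        "acknowledgment": (acknowledged - received).days <= LIMITS["acknowledgment"],
        "determination": (closed - received).days <= LIMITS["determination"],
    }
    if updates:
        results["first_update"] = (updates[0] - acknowledged).days <= LIMITS["first_update"]
        gaps = [(later - earlier).days for earlier, later in zip(updates, updates[1:])]
        results["subsequent_updates"] = all(g <= LIMITS["subsequent_update"] for g in gaps)
    return results

# Hypothetical complaint: received March 1, acknowledged March 10, status updates
# documented on June 5 and September 20, closed October 30 of the same year.
print(check_compliance(date(2012, 3, 1), date(2012, 3, 10),
                       [date(2012, 6, 5), date(2012, 9, 20)], date(2012, 10, 30)))
```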
In addition, because OSC serves functions comparable to those of OIG and DOJ-OPR in handling whistleblower complaints for most other executive branch employees and has similar regulatory reporting requirements, we interviewed OSC officials about OSC's processes and mechanisms for ensuring compliance with its requirements. We conducted this performance audit from September 2013 to January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides information on the Department of Justice's (DOJ) reasons for closing the 62 Federal Bureau of Investigation (FBI) whistleblower retaliation complaints we reviewed. These 62 complaints represent the universe of FBI whistleblower retaliation complaints that were closed within the last 5 calendar years (from 2009 through 2013) by the final DOJ office to review the complaint. We reviewed case files at both of the offices responsible for investigating these complaints—the Office of the Inspector General (OIG) and the Office of Professional Responsibility (DOJ-OPR)—as well as at the office responsible for adjudicating these complaints—the Office of Attorney Recruitment and Management (OARM)—and identified the final outcome in each complaint. The DOJ office reviewing a whistleblower retaliation complaint may close the complaint before conducting an investigation (in the case of OIG and DOJ-OPR) or considering the merits of the complaint (in the case of OARM) if the office determines that the complaint does not meet threshold requirements under the FBI whistleblower regulations. If the investigating office finds that a complaint meets threshold regulatory requirements, the office will open an investigation to determine if there are reasonable grounds to believe that a personnel action has been taken or will be taken in retaliation for a protected disclosure. If OARM first determines that a complaint meets threshold requirements, OARM then adjudicates the complaint to determine whether the disclosure was a contributing factor in the personnel action based on a preponderance of the evidence and whether the FBI has demonstrated by clear and convincing evidence that it would have taken the same personnel action in the absence of such disclosure. If the complaint is substantiated and the FBI is unable to meet its burden of proof, OARM will order that the FBI take appropriate corrective action. In addition, complainants may voluntarily withdraw their complaints. Table 1 summarizes the final outcome of the 62 complaints we reviewed, sorted by the final DOJ office to review the complaint and the overall length of the complaint. In addition to determining the final outcome in each complaint, we reviewed the case files to determine the reasons for the final DOJ office's decision to close the complaint. For example, in some complaints, the final office determined that the complaint did not meet threshold regulatory requirements because the complainant's underlying disclosure had been made to an individual or entity not designated in the regulations and therefore the disclosure was not protected.
In other complaints, the investigating office found that there were not reasonable grounds to believe the personnel action had been taken in reprisal for a protected disclosure because the evidence indicated that the personnel action would have been taken in the absence of the disclosure. Table 2 summarizes the reasons DOJ offices cited in their case files for closing whistleblower retaliation complaints and the number of complaints in which the final DOJ office to review the complaint cited each reason.

The regulatory reporting requirements we assessed were as follows.
Acknowledgment of complaint: The investigating office must notify the complainant that it has received the complaint and provide the name of a contact person within the office within 15 days of either OIG or DOJ-OPR receiving the complaint.
First status update: The investigating office must provide the complainant with the first status update within 90 calendar days of acknowledging receipt of the complaint.
Subsequent status updates: The investigating office must provide the complainant with a status update at least every 60 calendar days after the first status update.
Reported compliance rates: (OIG) 44 percent (16 of 36); (DOJ-OPR) 19 percent (4 of 21); 82 percent (14 of 17); 65 percent (13 of 20); 33 percent (4 of 12); 20 percent (3 of 15).
Overall timeliness: The investigating office must determine within 240 days of receiving the complaint if there are reasonable grounds to believe whistleblower retaliation occurred, unless the complainant agrees to an extension.
Reported compliance rates: 84 percent (31 of 37); 80 percent (16 of 20); 81 percent (30 of 37); 14 percent (1 of 7); 50 percent (10 of 20); 60 percent (6 of 10).

In 1 DOJ-OPR complaint we reviewed, the complainant initially provided an incorrect address and DOJ-OPR sent both a proposed and final termination report to the incorrect address within 240 days, but closed the complaint after more than 240 days because of the time needed to obtain the correct address. We excluded that complaint from our analysis of the number of complaints that met this requirement. In 1 OIG complaint and 1 DOJ-OPR complaint, the case file did not contain documentation that the complainant agreed to an extension, but did contain evidence of ongoing communication between the complainant or complainant's attorney and the investigating office after the 240-day deadline. We counted these 2 complaints as meeting the requirement. In addition to the contact named above, Eileen Larence (Director), Dawn Locke (Assistant Director), Claudia Becker (Analyst-in-Charge), Vanessa Dillard, Michele Fejfar, Eric Hauswirth, Susan Hsu, Tom Lombardi, Signora May, Erin McLaughlin, Linda Miller, Jan Montgomery, and Janet Temko-Blinder made key contributions to this report.
GAO reviewed all DOJ case files for FBI whistleblower retaliation complaints DOJ closed from 2009 to 2013, and interviewed whistleblower attorneys, advocates, and government officials about the complaint process. The interview results are not generalizable. The Department of Justice (DOJ) closed 44 of the 62 (71 percent) Federal Bureau of Investigation (FBI) whistleblower retaliation complaints we reviewed within 1 year, took up to 4 years to close 15 complaints, and took up to 10.6 years to close the remaining 3. DOJ terminated 55 of the 62 complaints (89 percent) and awarded corrective action for 3. (Complainants withdrew 4.) We found that DOJ terminated many (48 of 62) complaints we reviewed because they did not meet certain regulatory requirements. For example, DOJ terminated at least 17 complaints in part because a disclosure was made to someone in the employee's chain of command or management, such as a supervisor, who was not one of the nine high-level FBI or DOJ entities designated under DOJ regulations to receive such disclosures. Unlike employees of other executive branch agencies, FBI employees do not have a process to seek corrective action if they experience retaliation based on a disclosure of wrongdoing to their supervisors or others in their chain of command who are not designated officials. This difference is due, in part, to DOJ's decisions about how to implement the statute governing FBI whistleblowers. In 2014, DOJ reviewed its regulations and, in an effort to balance competing priorities, recommended adding more senior officials in FBI field offices to the list of designated entities, but did not recommend adding all supervisors. DOJ cited a number of reasons for this, including concerns about the additional resources and time needed to handle a possible increase in complaints if DOJ added supervisors. However, DOJ is already taking other steps to improve the efficiency of the complaint process. More importantly, dismissing retaliation complaints based on disclosures made to an employee's supervisor or someone in that person's chain of command leaves some FBI whistleblowers—such as the 17 complainants we identified—without protection from retaliation. By dismissing potentially legitimate complaints in this way, DOJ could deny some whistleblowers access to recourse, permit retaliatory activity to go uninvestigated, and create a chilling effect for future whistleblowers. We also found that DOJ and FBI guidance is not always clear that a report of alleged wrongdoing made to a supervisor or someone else in an employee's chain of command may not be a protected disclosure. Ensuring that guidance always clearly explains to whom an FBI employee can report wrongdoing will help FBI whistleblowers ensure that they are fully protected from retaliation. DOJ took from 2 to 10.6 years to resolve the 4 complaints we reviewed that DOJ adjudicated, and DOJ did not provide complainants with estimates of when to expect DOJ decisions throughout the complaint process. Providing such estimates would enhance accountability to complainants and provide additional assurance about DOJ management's commitment to improve efficiency. Further, DOJ offices responsible for investigating whistleblower retaliation complaints have not consistently complied with certain regulatory requirements, such as obtaining complainants' approvals for extensions of time. One investigating office does not track investigators' compliance with specific regulatory requirements and does not have a formal oversight mechanism to do so.
Effectively monitoring investigators' compliance with such requirements could help assure complainants that their cases are making progress and that they have the information they need to determine next steps for their complaints. Congress may wish to consider whether FBI whistleblowers should have a means to seek corrective action if retaliated against for disclosures to supervisors, among others. Further, GAO recommends that DOJ clarify guidance to clearly convey to whom employees can make protected disclosures, provide complainants with estimated complaint decision time frames, and develop an oversight mechanism to monitor regulatory compliance. DOJ and the Office of the Inspector General concurred with GAO's recommendations.
Corporations can be located in tax haven countries through a variety of means, including corporate inversions, acquisitions, or initial incorporation abroad. Location in a tax haven country can change a company's tax liability because the United States taxes domestic corporations differently than it taxes foreign corporations. The United States taxes the worldwide income of domestic corporations, regardless of where the income is earned; gives credits for foreign income taxes paid; and defers taxation of foreign subsidiaries until their profits are repatriated in the form of dividends or other income. However, a U.S. parent corporation is subject to current U.S. tax on certain income earned by a foreign subsidiary, without regard to whether such income is distributed to the U.S. corporation. Through "deferral," U.S. parent corporations are allowed to postpone current taxation on the net income or economic gain accrued by their subsidiaries. These are separately incorporated foreign subsidiaries of U.S. corporations. Because they are not considered U.S. residents, their profits are not taxable as long as the earnings are retained and reinvested outside the United States in active lines of business. That is, U.S. tax on such income is generally deferred until the income is repatriated to the U.S. parent. The U.S. system also contains certain anti-deferral features that tax on a current basis certain categories of passive income earned by a domestic corporation's foreign subsidiaries, regardless of whether the income has been distributed as a dividend to the domestic parent corporation. Passive income includes royalties, interest, and dividends. According to the Internal Revenue Code (I.R.C.), passive income is "deemed distributed" to the U.S. parent corporation and thus denied deferral. The rules defining the application and limits of this anti-deferral regime are known as the Subpart F rules. In order to avoid double taxation of income, the United States permits a taxpayer to offset, in whole or in part, the U.S. tax owed on this foreign-source income. Foreign tax credits are applied against a corporation's U.S. tax liability. The availability of foreign tax credits is limited to the U.S. tax imposed on foreign-source income. To ensure that the credit does not reduce tax on domestic income, the credit cannot exceed the tax liability that would have been due had the income been generated domestically. Firms with credits above that amount in a given year have "excess" foreign tax credits, which can be carried back 2 years or forward 5 years and applied against U.S. tax on foreign-source income in those years. This system of taxation of U.S. multinational corporations has been the subject of ongoing debate. Specific issues in international taxation include whether to reform the U.S. system by moving from worldwide taxation to a territorial system that exempts foreign-source income from U.S. tax. These issues have become more prominent with the increasing openness of the U.S. economy to trade and investment. The United States taxes foreign corporations on income generated from their active business operations in the United States. Such income may be generated by a subsidiary operating in the United States or by a branch of the foreign parent corporation. It is generally taxed in the same manner and at the same rates as the income of a U.S. corporation. In addition, if a foreign corporation is engaged in a trade or business in the United States and receives investment income from U.S.
sources, it will generally be subject to a withholding tax of 30 percent on interest, dividends, royalties, and certain types of income derived from U.S. sources, subject to certain exceptions. This tax may be reduced or eliminated under an applicable tax treaty. For objective 1, we collected and analyzed information on government contracting practices and business decision-making processes. We also reviewed the economics literature and reports of the Department of the Treasury and the Joint Committee on Taxation to determine how differences in the tax treatment of corporations can contribute to a tax cost advantage. Using the information we obtained, we built a simple qualitative model to explain the conditions under which a tax haven company may have a tax cost advantage in competing for federal contracts relative to other companies whose headquarters are not located in tax haven countries. For a description of the model, see appendix I. For objective 2, we used the qualitative model to identify companies that had characteristics consistent with having a tax cost advantage. We matched contractor data (names and taxpayer identification numbers) from the GSA's FPDS for 2000 and 2001 to tax and location data from the IRS's SOI corporation file. In this matched database, we analyzed information about large corporations, those with at least $10 million in assets. We identified the large corporations with characteristics consistent with a tax cost advantage compared to other large corporations and counted the number of these advantaged and disadvantaged corporations. We divided the SOI data into categories that differentiated between federal contractors (domestically owned and foreign owned) and noncontractors (domestically owned and foreign owned). We further divided the foreign-owned corporation data into those corporations headquartered in tax haven countries and those not headquartered in tax haven countries. SOI is a data set widely used for research purposes. SOI corporation files are representative samples of the population of all corporations that filed tax returns. Generally, SOI data can be used to project tax return information to the universe of all filers. However, the set of corporations that matched in both the SOI and FPDS databases could not be used to project the results of our analysis to the universe of all corporations. Because SOI's sampling rate for smaller corporations is very low, our matched database contained very few smaller corporations and would not lead to reliable estimates of the properties of the universe of smaller corporations. Therefore, the results of our analysis cannot be projected to the universe of all corporate filers. However, our results do represent the universe of large tax haven contractors. SOI samples corporations with at least $10 million in assets at a 100 percent rate so that the SOI sample includes the universe of these larger corporations. For this reason, we report the results of our analysis without sampling error. IRS performs a number of quality control steps to verify the internal consistency of SOI sample data. For example, it performs computerized tests to verify the relationships between values on the returns selected as part of the SOI sample and manually edits data items to correct for problems, such as missing items. We conducted several reliability tests to ensure that the data excerpts we used for this report were complete and accurate.
For example, we electronically tested the data and used published data as a comparison to ensure that the data set was complete. To ensure accuracy, we reviewed related documentation and electronically tested for obvious errors. We concluded that the data were sufficiently reliable for the purposes of this report. We have previously reported that there are limitations to the accuracy of the data in FPDS. The data accuracy issues we reported on involved contract amounts and classification of contract characteristics. For this report, the only FPDS data we used were the contractors' names and taxpayer identification numbers. Our previous report did not address the accuracy of these data elements. Therefore, our match of the FPDS and SOI data may contain some nonsampling error; that is, due to inaccurate identification numbers, we may fail, in some cases, to correctly identify large corporations in SOI that were also federal contractors. However, we expect this nonsampling error to be small, and we concluded that the data were sufficiently reliable for the purposes of this report. Contractors, including tax haven contractors, that have a lower marginal tax rate on the income from a contract than other contractors would have a tax cost advantage when competing for a contract. Furthermore, there is some evidence that a tax haven contractor may be able to shift income between the U.S. subsidiary and its tax haven parent in order to reduce U.S. taxable income. There are conditions under which a contractor could have a tax cost advantage when competing for a contract. The tax cost of the contract is the tax paid on the additional income derived from the contract. A contractor that pays less tax on additional income from a contract gains a tax cost advantage compared to companies that pay more tax. One way to gain a tax cost advantage is by offsetting income earned on the contract with losses from other activities. The contractors with a tax cost advantage are not necessarily the successful competitors because the tax cost savings may not be reflected in actual bid prices or price proposals, and prices or costs are only one of several factors involved in awarding contracts. This reasoning holds for all contractors, including tax haven contractors, and all contracts, including federal contracts. The appropriate measure of the tax cost of the contract is the corporation's marginal tax rate. The marginal tax rate is the rate that applies to an increment of income. As such, the marginal tax rate would be the rate that applies to the additional income that would arise from the federal contract. For example, if a contractor in a 34 percent tax bracket earns $1 million of additional income from the contract, it would owe $340,000 in additional tax. The 34 percent statutory tax rate is this contractor's marginal rate. A lower marginal tax rate may confer a tax cost advantage when companies are bidding on contracts because it indicates a higher after-tax rate of return on the contract. All other things being equal, a lower marginal effective tax rate is equivalent to a reduction in cost; that is, a reduction in either the tax rate or cost would produce a higher after-tax return. For example, a contractor with a 30 percent marginal tax rate on a contract producing $1 million of income pays $300,000 in taxes and receives $700,000 in additional after-tax income.
On the other hand, a contractor with a 34 percent marginal tax rate on the same contract producing $1 million of income pays $340,000 in taxes and receives $660,000 in additional after-tax income. The $40,000 difference in after-tax income due to the difference in marginal tax rates is the tax cost advantage. In this example, the contractor with the tax cost advantage can, in theory, underbid the competitor by as much as $40,000 and earn an after-tax income at least as large as the competitor. In this sense, the competitor with the lower marginal tax rate would have a tax cost advantage over a competitor with a higher marginal tax rate. A contractor gains a tax cost advantage if it has a lower marginal tax rate compared to other companies that are competing for the contract. However, the available data are not sufficient to measure marginal rates accurately. In order to compute marginal rates, detailed information is required about the tax status of the contractors and types of spending by the contractors associated with the contracts. Although the marginal tax rates are not available, conditions under which the marginal rates may be lower for some companies than others can be inferred from their current taxable income. Specifically, a company that has positive taxable income may be more likely to have a positive tax liability on the incremental income from the contract than companies with zero or negative taxable income. Therefore, a company with zero taxable income may have a lower marginal tax rate relative to companies with positive taxable income. Tax losses in the United States on other activities could absorb incremental income generated from a contract. All other things being equal, a company competing for a federal contract that reported taxable income in the United States would face a higher tax cost than a competitor without taxable income. While a zero tax liability provides an indicator of a tax cost advantage, it does not necessarily mean that the advantage exists. Whether a contractor with zero tax liability has a tax cost advantage when competing for a particular contract depends on the tax liabilities of the other competitors. The contractor with zero tax liability would have no tax cost advantage if all the other competitors also had no tax liability. Even if a contractor can be shown to have a tax cost advantage when competing for a federal contract, this advantage does not imply that the contractor’s bid or proposal will be successful. A tax cost advantage may not be reflected in the contractor’s bid or price proposal, the content of which depends on the business judgment of the contractor. For example, in order to include more profit, a contractor may decide not to use any tax cost advantage to reduce its price. Even if the tax advantage is reflected in the bid or price proposal, other price or cost factors that affect whether the bid or proposal is successful may not be equal across the companies competing for the contract. For example, a bidder may have a tax cost advantage over other bidders, but if its costs of labor and material are higher, its tax cost advantage may be offset by its higher costs for those other elements of its bid. Further, where price or cost is not the only evaluation factor for award of the contract, any tax cost advantage may be offset by the relative importance of other factors such as technical merit, management approach, and past performance. 
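The arithmetic behind the 30 percent versus 34 percent example can be made concrete with a short sketch that computes after-tax income and the resulting tax cost advantage, that is, the largest amount the lower-rate contractor could in principle shave off its price while still matching the competitor's after-tax return. The code is purely illustrative; the rates and income figure simply mirror the example in the text.

```python
def after_tax_income(contract_income: float, marginal_rate: float) -> float:
    """After-tax income on the incremental income from the contract."""
    return contract_income * (1.0 - marginal_rate)

def tax_cost_advantage(contract_income: float, low_rate: float, high_rate: float) -> float:
    """Extra after-tax income attributable solely to the lower marginal rate."""
    return after_tax_income(contract_income, low_rate) - after_tax_income(contract_income, high_rate)

income = 1_000_000
for rate in (0.30, 0.34):
    print(f"marginal rate {rate:.0%}: after-tax income ${after_tax_income(income, rate):,.0f}")
# The difference is the amount by which the lower-rate bidder could, in theory,
# underbid while still earning at least as much after tax.
print(f"tax cost advantage ${tax_cost_advantage(income, 0.30, 0.34):,.0f}")
```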
Generally, the contractor's tax cost advantage would become a competitive advantage where other contractors would have to reduce their prices (or costs) and/or improve the nonprice (or noncost) elements of their proposals to offset the tax cost advantage. Tax haven contractors may be more likely to have lower tax costs than other contractors because they may be able to shift U.S.-source income to their tax haven parents, reducing U.S. taxable income. Some, but not all, domestic contractors (those that have overseas affiliates) may also be able to shift income. Any income earned by the U.S. subsidiary from a contract for services performed in the United States would be U.S. taxable income. Such income would be taxed in the United States unless it is shifted outside the United States through such techniques as transfer pricing abuse. Location in a tax haven country can confer tax advantages that are not related to income shifting and do not give a company an advantage when competing for federal contracts. When a parent locates in a tax haven country, taxes on foreign income can be reduced by eliminating U.S. corporate-level taxation of foreign operations. However, these tax savings are unrelated to the taxes paid on income derived from the contract for services performed in the United States and have no effect on the tax cost of the contract. The tax haven contractor potentially gains an advantage with respect to contract competition because of the increased scope for income shifting to reduce U.S. taxable income below zero. A tax haven contractor may be able to shift income outside of the United States by increasing payments to foreign members of the corporate group. The contractor may engage in transfer pricing abuse, whereby related parties price their transactions artificially high or low to shift taxable income out of the United States. For example, the tax haven parent can charge an excessive price for goods or services rendered ($1,000, say, instead of $500). This raises the subsidiary's expenses (by $500), lowers its profits (by $500), and shifts the income ($500) to the lower-tax jurisdiction outside the United States. Transfer pricing abuse can also occur when the foreign parent charges excessive interest on loans to its U.S. subsidiary. Interest deductions can also be used to shift income outside the United States through a technique called "earnings stripping." Using this technique, the foreign parent loads the U.S. subsidiary with a disproportionate amount of debt, merely by issuing an intercompany note, thereby generating interest payments to the parent and interest deductions against U.S. income for the subsidiary. However, the U.S. subsidiary would still be subject to the I.R.C. rules that limit the deductibility of interest to 50 percent of adjusted taxable income whenever the U.S. subsidiary's debt-equity ratio exceeds 1.5 to 1. Determining whether companies shift income to obtain a tax cost advantage is difficult because differences among companies that may indicate shifting can also be explained by other factors affecting costs and profitability. For example, while differences in average tax rates and interest expenses may be consistent with income shifting, they do not prove that such activities are occurring. The differences might be explained by other factors, such as the age of the company. As table 1 shows, tax haven contractors in 2001 had greater interest expense and lower tax liabilities relative to gross receipts than domestic or all foreign contractors.
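Before turning to the pattern in table 1, the income-shifting mechanics just described can be sketched in simplified form: an inflated related-party charge moves taxable income out of the United States dollar for dollar, and the interest-deduction limit caps related-party interest deductions at 50 percent of adjusted taxable income when the debt-equity ratio exceeds 1.5 to 1. The sketch is a simplification of the statutory rules, not a statement of them, and the figures are hypothetical.

```python
def income_shifted_by_overpricing(arms_length_price: float, charged_price: float) -> float:
    """U.S. taxable income moved offshore when a related party overcharges.

    Paying $1,000 for goods or services worth $500 raises the U.S. subsidiary's
    expenses, and lowers its U.S. taxable income, by $500.
    """
    return charged_price - arms_length_price

def deductible_interest(interest_paid: float, adjusted_taxable_income: float,
                        debt: float, equity: float) -> float:
    """Simplified earnings-stripping limit on related-party interest deductions.

    When the debt-equity ratio exceeds 1.5 to 1, the deduction is capped at
    50 percent of adjusted taxable income; this omits details of the actual
    statutory test and is illustrative only.
    """
    if equity > 0 and debt / equity > 1.5:
        return min(interest_paid, 0.5 * adjusted_taxable_income)
    return interest_paid

print(income_shifted_by_overpricing(500, 1_000))                # 500
print(deductible_interest(interest_paid=8_000_000,
                          adjusted_taxable_income=10_000_000,
                          debt=90_000_000, equity=30_000_000))  # capped at 5,000,000
```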
The greater interest expense associated with lower tax liabilities may indicate that the tax haven contractors have used techniques like earnings stripping to shift taxable income outside the United States. The pattern of tax liabilities and interest expense in 2000 is the same as in 2001 in all respects except one: the ratio of interest expense to gross receipts for tax haven noncontractors is lower than the ratio for domestic or all foreign contractors in 2000. (For details, see app. II.) This pattern of interest expenses and tax liabilities is largely consistent with tax haven contractors inflating interest costs to shift taxable income outside of the United States but does not prove that this has occurred. The differences may be due to such factors as the age and industry of the companies, their history of mergers or acquisitions, and other details of their financial structure and the markets for their products. Furthermore, low or zero tax liability is not necessarily an indicator of noncompliance. Companies may have low or zero tax liabilities for a variety of reasons, such as overall business conditions, industry- or company-specific performance issues, and the use of income shifting. The evidence on the extent to which income shifting is occurring is not precise. Studies that compare profitability of foreign-controlled and domestically controlled companies show that much of the difference can be explained by factors other than income shifting. However, the range of estimates can be wide, contributing to uncertainty about the precise effect, and the studies do not focus on income shifting to parents in tax haven countries. The 1997 study by Harry Grubert showed that more than 50 percent, and perhaps as much as 75 percent, of the income differences could be explained by factors other than income shifting. A Treasury report on corporate inversions did discuss income shifting to parents in tax haven countries but did not provide any quantitative estimates of the extent of such shifting. According to the report, the tax savings from income shifting are greatest in the case of a foreign parent corporation located in a no-tax jurisdiction. The Treasury report cites increased benefits from income shifting among other tax benefits as a reason for recent corporate inversion activity and increased foreign acquisitions of U.S. multinationals. Using tax liability as an indicator of ability to offset contract income, we determined that large tax haven contractors were more likely to have a tax cost advantage than large domestic contractors in both 2000 and 2001. In both years, tax haven contractors were about one and a half times as likely as domestic contractors to have no tax liability. As table 2 shows, in 2000, 56 percent of the 39 tax haven contractors reported no tax liability, while 34 percent of the 3,253 domestic contractors reported no tax liability. In 2001, 66 percent of the 50 tax haven contractors and 46 percent of the 3,524 domestic contractors reported no tax liability. Under the conditions of our model, contractors with no tax liability would have a tax cost advantage compared to the contractors that did have tax liabilities in these years. Consequently, in 2000, the tax haven contractors without tax liabilities were likely to have a tax cost advantage compared to the 17 other tax haven contractors and 2,132 domestic contractors that had tax liabilities.
The 1,121 domestic contractors without tax liabilities were also likely to have a tax cost advantage compared to these same companies. In 2001, the tax haven contractors with zero tax liability were likely to have a tax cost advantage compared to the 17 other tax haven contractors and 1,888 domestic contractors that had tax liabilities. Because they reported no tax liability, 1,636 domestic contractors were also likely to have a tax cost advantage compared to these same companies. This analysis of possible tax advantages does not show that income shifting is the cause of the advantage. As mentioned above, the tax losses that confer the advantage may be due to income shifting, but may also be due to other factors, such as overall business conditions, the industry and age of the company, or company-specific performance issues. In addition, the analysis does not show the size of the advantage in terms of tax dollars saved. The amount saved depends, in part, on the amount of additional income from the contract. If the contractor with no tax liability has insufficient losses to offset the additional income, it would pay taxes on at least part of the income, reducing the potential advantage. Lastly, the analysis identifies tax haven contractors that meet the conditions for having a tax cost advantage with respect to income from the contract in 2000 and 2001. The data do not indicate whether they have an overall tax cost advantage on a contract that produces income in other years. Furthermore, to the extent that losses are used to offset income in the current year, they cannot be used to offset income in other years. These smaller loss carryovers would reduce the overall tax cost advantage. The existence of a tax cost advantage for some tax haven contractors matters to American taxpayers. First, the advantage could, but does not necessarily, affect which company wins a contract. A contractor with a tax cost advantage could offer a price that wins a contract based more on tax considerations than on factors such as the quality and cost of producing goods and services. Second, the potential tax cost advantage may contribute, along with other tax considerations, to the incentives for companies to move to tax haven countries, reducing the U.S. corporate tax base. The issue of tax cost advantages for tax haven contractors is related to the larger issue of how companies headquartered or operating in the United States should be taxed. For example, the questions about how the worldwide income of U.S. multinational corporations should be taxed are part of a larger debate and beyond the scope of this report. Because of these larger policy issues, we are not making recommendations in this report. In a letter dated June 22, 2004, the IRS Commissioner stated that because IRS's only role in our report was to provide us with certain tax data, IRS's review of a draft of this report would be limited to evaluating how well we described the tax data it provided. The Commissioner stated that IRS believes that the report fairly describes these data. On June 28, officials from the Department of the Treasury's Office of Tax Policy provided oral comments on several technical issues, which we incorporated into the report where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date.
At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. We will also make copies available to others on request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-9110 or whitej@gao.gov or Kevin Daly at (202) 512-9040 or dalyke@gao.gov. Key contributors to this report are listed in appendix III. A parent corporation that locates in a tax haven country may reduce U.S. tax on corporate income by shielding subsidiaries from U.S. taxation and by providing opportunities for shifting of U.S. source income to lower tax jurisdictions. Such a corporation could have an advantage because it is able to have a lower marginal tax rate on U.S. contract income than its domestic competitors or other foreign competitors. The simple qualitative model in this appendix specifies a set of conditions under which corporations with a tax haven parent may have a lower marginal U.S. tax rate. The principal means by which a parent corporation that locates in a tax haven country may have lower U.S. tax liabilities are as follows. The corporation pays no U.S. tax on what would have been its foreign source income if it were located in the United States. To the extent that foreign subsidiaries are owned by a foreign parent, the U.S. corporate-level taxation of foreign operations is eliminated. Tax savings would come from not having to pay tax on the corporate group's foreign income. The corporation may be able to shift income outside of the United States by increasing payments to foreign members of the group. The corporation may engage in transfer pricing abuse, whereby related parties price their transactions artificially high or low to shift taxable income out of the United States. Transfer pricing abuse can also occur when the foreign parent charges excessive interest on loans to its U.S. subsidiary. Interest deductions can also be used to shift income outside the United States through a technique called earnings stripping. Using this technique, the foreign parent loads the U.S. subsidiary with a disproportionate amount of debt, merely by issuing an intercompany note, thereby generating interest payments to the parent and interest deductions against U.S. income for the subsidiary. The subsidiaries would still be subject to the thin capitalization rules (I.R.C. section 163(j)) that limit the deductibility of interest to 50 percent of adjusted taxable income whenever the U.S. subsidiary's debt-equity ratio exceeds 1.5 to 1. When a parent corporation locates in a tax haven country, the elimination of U.S. corporate-level taxation of foreign operations can reduce taxes on foreign income. However, these tax savings are unrelated to the taxes paid on income derived from the contract and have no effect on the tax cost of the contract. Any income earned by the U.S. subsidiary from a contract for services performed in the U.S. would be U.S. taxable income. Therefore, the elimination of the corporate-level taxation of foreign operations provides no competitive advantage to a corporation that is competing for a U.S. government contract. A corporation has a U.S. tax advantage in competing for a government contract when it would pay a lower marginal U.S. tax rate on the income from that contract than would the other companies competing for that same contract.
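The interest-deduction limit described above lends itself to a simple computation. The following is a minimal sketch in Python of the 50 percent / 1.5-to-1 test as stated in this appendix; the company values are illustrative, and the actual section 163(j) computation involves additional details not modeled here:

def deductible_interest(interest_paid, adjusted_taxable_income, debt, equity):
    """Simplified thin-capitalization test as described in the text:
    if the debt-equity ratio exceeds 1.5 to 1, interest deductions are
    capped at 50 percent of adjusted taxable income."""
    if equity > 0 and debt / equity <= 1.5:
        return interest_paid
    return min(interest_paid, 0.5 * adjusted_taxable_income)

# Illustrative subsidiary that has been heavily debt-loaded by its foreign parent.
print(deductible_interest(interest_paid=40.0, adjusted_taxable_income=60.0,
                          debt=200.0, equity=50.0))   # capped at 30.0
print(deductible_interest(interest_paid=40.0, adjusted_taxable_income=60.0,
                          debt=60.0, equity=50.0))    # 40.0, ratio below 1.5

Under this simplified view, the first subsidiary would lose part of its interest deduction; the rule limits, but does not eliminate, the scope for earnings stripping.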
The available data are not sufficient to measure marginal rates accurately. However, the likelihood that the rates are lower for some companies than others can be inferred from their current tax liabilities. The manipulation of interest payments and other transfer pricing can reduce U.S. taxable income. We can infer that the corporation may have a lower marginal tax rate on its U.S. contract income if the manipulation allows a corporation that would otherwise have positive taxable income to reduce its taxable income (excluding the net income from the contract) to a negative amount. Table 3 shows a set of situations, or cases, in which a corporation may and may not have a cost advantage when bidding on a contract. In order to use this model to identify corporations with a tax cost advantage, we make two assumptions: (1) corporations with positive U.S. taxable income pay tax at the same rate based on the schedule of corporate tax rates (that is, their income before the contract income puts them in the same tax bracket) and (2) corporations with negative income have sufficient losses to offset income from the contract. With these assumptions, we can draw inferences about relative marginal tax rates for the three cases. A U.S. corporation that has positive U.S. taxable income (before taking the income from the contract into account) and has a parent located in a tax haven country does not have a competitive advantage compared to a U.S. corporation with positive income (Case 2). Because they have positive income and pay the same rate of tax, neither has a lower marginal tax rate than the other. Likewise, a corporation with a tax haven parent that has U.S. tax losses and zero tax liability would not have an advantage compared to another corporation with tax losses (Case 3). Because the marginal tax rate is zero for both these corporations and they have sufficient losses to offset the contract income, neither has a tax cost advantage. However, a corporation that has a tax haven parent and U.S. tax losses would have an advantage when compared to a corporation with positive income (Case 1). In this case, the corporation with losses has a zero marginal rate, which provides a tax cost advantage compared to a corporation with taxable income and a positive marginal rate. The assumption that a corporation with zero tax liability has sufficient losses to offset contract income may not be true in particular instances. For example, a corporation may obtain more than one contract (in the public or private sector) and the marginal tax rate on income from a particular contract will depend on how the losses are allocated across income from all the contracts. However, a corporation with zero tax liability is more likely to be able to offset the additional income than a corporation with positive tax liability. In this sense, tax liability is an indicator of the ability to offset income from the contract. The qualitative model does not identify the causes of the advantage. The tax losses that confer the advantage may be due to income shifting, but may also be due to other factors. In addition, the model does not show the size of the advantage in terms of tax dollars saved. The amount saved depends, in part, on the amount of additional income from the contract. If the contractor with no tax liability has insufficient losses to offset the additional income, it would pay taxes on at least part of the income, reducing the potential advantage compared to contractors that have positive tax liabilities. 
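The three cases in table 3 reduce to a simple decision rule. The following is a minimal sketch, assuming, as the model does, that a contractor with negative pre-contract taxable income has sufficient losses to offset the contract income and that contractors with positive income face the same rate; the income figures are illustrative:

def tax_cost_advantage(income_a, income_b):
    """Compare two competitors' U.S. taxable income measured before the
    contract income. Return which competitor, if either, has the lower
    marginal tax rate on contract income under the model's assumptions."""
    a_loss, b_loss = income_a < 0, income_b < 0
    if a_loss and not b_loss:
        return "A"        # Case 1: A's marginal rate is zero, B's is positive
    if b_loss and not a_loss:
        return "B"        # Case 1, with the roles reversed
    return None           # Cases 2 and 3: same marginal rate, no advantage

print(tax_cost_advantage(-5.0, 12.0))   # "A" -- losses versus positive income
print(tax_cost_advantage(8.0, 12.0))    # None -- both positive (Case 2)
print(tax_cost_advantage(-5.0, -2.0))   # None -- both have losses (Case 3)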
Lastly, the model is used to identify tax haven contractors that meet the conditions for having a competitive advantage with respect to income from the contract in 2000 and 2001. The data do not indicate whether they have an overall tax advantage on a contract that produces income in other years. The additional table of tax liabilities and interest expense for 2000 is provided for comparison with the data reported in the letter. It shows substantially the same pattern. Table 4 shows that in 2000, tax haven contractors had greater interest expense and lower tax liabilities relative to gross receipts than domestic or all foreign contractors. The pattern of tax liabilities and interest expense in 2000 is the same as in 2001 in all respects except one: the ratio of interest expense to gross receipts for tax haven noncontractors is lower than the ratio for domestic or all foreign contractors in 2000. The greater interest expense associated with lower tax liabilities may indicate, but does not prove, that the tax haven contractors have used techniques like earnings stripping to shift taxable income outside the United States. Amy Friedheim, Donald Marples, Samuel Scrutchins, James Ungvarsky, and James Wozny made key contributions to this report. The federal government was involved in about 8.6 million contract actions, including new contract awards, worth over $250 billion in fiscal year 2002. Some of these contracts were awarded to tax haven contractors, that is, U.S. subsidiaries of corporate parents located in tax haven countries. Concerns have been raised that these contractors may have an unfair cost advantage when competing for federal contracts because they are better able to lower their U.S. tax liability by shifting income to the tax haven parent. GAO's objectives in this study were to (1) determine the conditions under which companies with tax haven parents have a tax cost advantage when competing for federal contracts and (2) estimate the number of companies that could have such an advantage.
GAO matched federal contractor data with tax and location data for all large corporations, those with at least $10 million in assets, in 2000 and 2001, in order to identify those companies that could have an advantage. There are conditions under which a tax haven contractor may have a tax cost advantage (lower tax on additional income from a contract) when competing for a federal contract. The extent of the advantage depends on the relative tax liabilities of the tax haven contractor and its competitors. One way for a contractor to gain a tax cost advantage is by reducing its U.S. taxable income from other sources to less than zero and by using its losses to offset some or all of the additional income from a contract, resulting in less tax on the contract income. A company would thereby gain an advantage relative to those competitors with positive income from other sources and may be able to offer a lower price or cost for the contract. While some domestic corporations may also have a tax cost advantage, tax haven contractors may be better able to reduce U.S. taxable income to less than zero because of opportunities to shift income to their tax haven parents. Whether a contractor has a tax cost advantage in competing for a particular contract depends on the tax liabilities of other competitors. Also, the contractors with a tax cost advantage are not necessarily the successful competitors because the tax cost savings may not be reflected in actual prices, and prices may be only one of several factors involved in awarding contracts. Using tax liability as an indicator of ability to offset contract income, GAO found that large tax haven contractors in both 2000 and 2001 were more likely to have a tax cost advantage than large domestic contractors. In 2000, 56 percent of the 39 large tax haven contractors reported no tax liability, while 34 percent of the 3,253 large domestic contractors reported no tax liability. In 2001, 66 percent of large tax haven contractors and 46 percent of large domestic contractors reported no tax liability.
Threats to IT systems, both intentional and unintentional, are evolving and growing. Unintentional or nonadversarial threat sources include failures in equipment, environmental controls, or software due to aging, resource depletion, or other circumstances that exceed expected operating parameters. These threats also include natural disasters and failures of critical infrastructure on which the organization depends but are outside of the control of the organization. Intentional or adversarial threats include individuals, groups, entities, or nations that seek to leverage for illegal purposes the organization's dependence on cyber resources (i.e., information in electronic form, information and communications technologies, and the communications and information-handling capabilities provided by those technologies). Threats can come from a wide array of sources, including corrupt employees, criminal groups, and terrorists. These threat adversaries vary in terms of their capabilities, their willingness to act, and their motives, which can include seeking monetary gain, or seeking an economic, political, or military advantage. Cyber threat adversaries make use of various techniques, tactics, and practices, or exploits, to adversely affect an organization's computers, software, or networks, or to intercept or steal valuable or sensitive information. Further, adversaries can leverage common computer software programs, such as Adobe Acrobat and Microsoft Office, as a means by which to deliver a threat by embedding exploits within software files that can be activated when a user opens a file within its corresponding program. Appendix II contains tables of the sources of cyber-based threats, as well as descriptions of common cyber exploits, and the tactics, techniques, and practices used by cyber adversaries. Until fiscal year 2016, the number of information security incidents reported by federal agencies to DHS's United States Computer Emergency Readiness Team (US-CERT) had steadily increased each year. From fiscal year 2006 through fiscal year 2015, reported incidents increased from 5,503 to 77,183, an increase of 1,303 percent. However, the number of reported incidents decreased by 60 percent in fiscal year 2016 to 30,899, as shown in figure 1. Changes in federal incident reporting guidelines likely contributed to the decrease in reported incidents between fiscal years 2015 and 2016. Updated incident reporting guidelines that became effective in fiscal years 2016 and 2017 no longer required agencies to report noncyber incidents or incidents categorized as scans, probes, and attempted access. In addition, an official from DHS's National Cybersecurity and Communications Integration Center cited the expanded use of the National Cybersecurity Protection System to detect or block potentially malicious network traffic entering networks at federal agencies as another possible reason for fewer reported incidents. Different types of incidents merit different response strategies; however, if an agency cannot identify the threat vector, it could be difficult for that agency to define more specific handling procedures to respond to the incident. As shown in figure 2, incidents with a threat vector categorized as "other" make up 38 percent of the various incidents reported to US-CERT in fiscal year 2016. These incidents and others like them can pose a serious challenge to economic, national, and personal privacy and security.
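The percentage changes cited above can be reproduced from the reported incident counts; a brief arithmetic check in Python, using the counts given in the text:

incidents = {2006: 5_503, 2015: 77_183, 2016: 30_899}  # incidents reported to US-CERT

rise = (incidents[2015] - incidents[2006]) / incidents[2006] * 100
drop = (incidents[2015] - incidents[2016]) / incidents[2015] * 100
print(f"FY2006-FY2015 increase: {rise:.0f} percent")   # about 1,303 percent
print(f"FY2015-FY2016 decrease: {drop:.0f} percent")   # about 60 percent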
The following examples highlight the impact of such incidents: In April 2017, the Commissioner of the Internal Revenue Service (IRS) testified that the IRS had disabled its data retrieval tool in early March after becoming concerned about the misuse of taxpayer data. Specifically, the agency suspected that personally identifiable information obtained outside the agency's tax system was used to access the agency's online federal student aid application in an attempt to secure tax information through the data retrieval tool. In April 2017, the agency began notifying taxpayers who could have been affected by the breach. In October 2016, the Department of the Treasury's Office of the Comptroller of the Currency notified us of a major incident it had identified in September 2016. Concurrent with a new policy that restricted employees' use of removable media devices to prevent users from downloading information onto the devices without approval and review, the agency began reviewing employee downloads to removable media devices. During the review, it identified a significant change in download patterns for a former employee in the weeks before the employee's separation from the agency. The former employee had downloaded approximately 28,000 files that may have contained controlled unclassified information onto two encrypted external thumb-drive devices. As of October 2016, the agency had been unable to recover the devices storing the files. Congress enacted FISMA to improve federal cybersecurity and clarify governmentwide responsibilities. As amended in 2014, the act is intended to address the increasing sophistication of cybersecurity attacks, promote the use of automated security tools with the ability to continuously monitor and diagnose the security posture of federal agencies, and provide for improved oversight of federal agencies' information security programs. Specifically, the act clarifies and assigns additional responsibilities to OMB, DHS, and federal agencies in the executive branch, including: Develop and oversee the implementation of policies, principles, standards, and guidelines on information security in federal agencies except with regard to national security systems. Require agencies to identify and provide information security protections commensurate with assessments of risk to their information and information systems. Ensure that DHS carries out its FISMA responsibilities. Coordinate information security policies and procedures with related information resources management policies and procedures. Report annually, in consultation with DHS, on the effectiveness of information security policies and practices, including a summary of major agency information security incidents, an assessment of agency compliance with NIST standards, and an assessment of agency compliance with breach notification requirements. Ensure that data breach notification policies and guidelines are periodically updated and require notification to congressional committees and affected individuals. Ensure development of guidance for evaluating the effectiveness of an information security program and practices, in consultation with DHS, the Chief Information Officers (CIO) Council, the Council of the Inspectors General on Integrity and Efficiency (CIGIE), and other interested parties, as appropriate. Administer the implementation of agency information security policies and practices for non-national security information systems, in consultation with OMB, including:
Assist OMB in fulfilling its FISMA authorities, including the development of policies and oversight of agencies' compliance with FISMA requirements; Develop, issue, and oversee implementation of binding operational directives to agencies, such as those for incident reporting, contents of annual agency reports, and other operational requirements; Monitor agency implementation of information security policies; Convene meetings with senior agency officials to help ensure their effective implementation of information security policies and practices; and Operate the federal information security incident center, deploy technology to continuously diagnose and mitigate threats, compile and analyze data on agency information security, and develop and conduct targeted operational evaluations, including threat and vulnerability assessments of systems. Establish standards for categorizing information and information systems according to ranges of risk levels (see Federal Information Processing Standards 199 and 200); Develop minimum security requirements for information and information systems in each of the risk categories; Develop guidelines for detection and handling of information security incidents; Develop guidelines, in conjunction with the Department of Defense, for identifying an information system as a national security system.
Executive branch agencies' responsibilities
Develop, document, and implement an agencywide information security program that includes the following components: periodic risk assessments, which may include using automated tools consistent with NIST standards and guidelines; policies and procedures that (1) are based on risk assessment, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the lifecycle of each system, and (4) ensure compliance with applicable requirements; plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate; security awareness training to inform personnel of information security risks and of their responsibilities for complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk (but no less than annually); such testing should include using automated tools consistent with NIST standards and guidelines; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices; procedures for detecting, reporting, and responding to security incidents, which may include using automated tools; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. Comply with DHS binding operational directives in addition to OMB policies and procedures and NIST standards. Ensure that senior officials carry out assigned responsibilities and that all personnel are held accountable for complying with the agency's information security program. Report major security incidents to Congress within 7 days.
In addition, executive branch agencies are to report annually to OMB, certain congressional committees, and the comptroller general of the United States on the adequacy and effectiveness of their information security policies, procedures, and practices, and their compliance with the act. Further, FISMA requires agencies to include descriptions of major incidents in these annual reports. It also requires each agency inspector general, or independent auditor, to annually assess the effectiveness of the information security policies, procedures, and practices of the agency. Each year, OMB requires agencies to report how much they spend on information security. In fiscal year 2016, each of the 23 civilian agencies covered by the CFO Act reported spending between $3 million and about $1.3 billion on IT security-related activities. Agency-reported spending on IT security-related activities ranged between 1 percent and 22 percent of the agencies' IT budget and between 1 percent and 21 percent of their reported IT spending, as seen in table 1. Weaknesses in security controls, such as access controls, configuration management, and security management, indicate that agencies did not adequately or effectively implement information security policies and practices during fiscal year 2016. Further, our work and reviews by inspectors general highlighted information security control deficiencies at agencies that expose information and information systems supporting federal operations and assets to elevated risk of unauthorized use, disclosure, modification, and disruption. Accordingly, we and agency inspectors general have made hundreds of recommendations to agencies to address these security control deficiencies, many of which have not yet been implemented. Our reports, agency reports, and assessments by inspectors general of information security controls during fiscal year 2016 revealed that most of the 24 agencies covered by the CFO Act had weaknesses in each of the five major categories of information system controls: access controls—the policies and practices that limit or detect access to computer resources (data, programs, equipment, and facilities), thereby protecting them against unauthorized modification, loss, and disclosure; configuration management controls—the policies and practices that are intended to prevent unauthorized changes to information system resources (e.g., software programs and hardware configurations) and to assure that software is current and known vulnerabilities are patched; segregation of duties—the policies, practices, and organizational structure that prevent an individual from controlling all critical stages of a process by splitting responsibilities between two or more organizational groups; contingency planning—the policies, plans, and practices that help avoid significant disruptions in computer-dependent operations; and agencywide security management—the policies, processes, and practices that provide a framework for ensuring that risks are understood and that effective controls are selected, implemented, and operating as intended. The number of agencies with information security weaknesses in each of the five categories for fiscal year 2016 is shown in figure 3. The following subsections discuss the specific information security weaknesses that we identified in our analysis of the fiscal year 2016 reports we reviewed.
Agencies design and implement access controls to provide assurance that access to computer resources (data, equipment, and facilities) is reasonable and restricted to authorized individuals. These controls protect computer resources from unauthorized use, modification, disclosure, and loss by limiting, preventing, or detecting inappropriate access to them. Access controls involve six critical elements: boundary protection; identification and authentication; authorization; sensitive system resource protection; auditing and monitoring; and physical security. For fiscal year 2016, our analysis identified 516 access control weaknesses at the 24 agencies. The agencies exhibited the most weaknesses in the identification and authentication, authorization, and audit and monitoring critical elements, as shown in table 2. Most of the 24 agencies did not adequately protect information system boundaries. Boundary protection controls logical connectivity into and out of networks and controls connectivity to and from devices that are connected to a network. In fiscal year 2016, our analysis identified that 20 of the 24 agencies had weaknesses in boundary protection, including not blocking unsecure network traffic and not filtering sensitive data. In addition, our analysis identified other boundary protection weaknesses, such as not authorizing interconnection security agreements for all external systems with connections to internal systems and not requiring the Internet to be accessible only through a trusted Internet connection. Boundary protection-related deficiencies accounted for 56 of the total 516 access control deficiencies identified. Without appropriately controlling connectivity to system resources, agencies risk exploitation of network entry points and access paths by unauthorized users to gain access to sensitive data. Ineffective implementation of identification and authentication controls was one of the most widely reported access control weaknesses. Identification and authentication controls allow a computer system to identify and authenticate different users so that activities on the system can be linked to specific individuals. Factors used for authentication include something you know (password or personal identification number), something you have (cryptographic identification device or token), or something you are (biometric). Multifactor authentication involves using two or more factors to achieve authentication. In addition, OMB directed agencies to implement the use of personal identity verification (PIV) cards, a form of multifactor authentication, for 85 percent of unprivileged users and 100 percent of privileged users by the end of fiscal year 2016. Our analysis identified weaknesses in identification and authentication controls at all 24 agencies. Based on the reports analyzed, two agencies did not meet the PIV implementation requirement for unprivileged users, five agencies did not meet the requirement for privileged users, and two agencies did not meet the PIV implementation requirement for both unprivileged and privileged users. Identification and authentication-related deficiencies accounted for 120 of the 516 total deficiencies found in our analysis. Without implementing adequate logical access controls to appropriately identify and authenticate users, agencies cannot prevent illegitimate users, such as hackers, from accessing systems or restrict legitimate users to only the systems that they need.
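The PIV targets noted above, 85 percent of unprivileged users and 100 percent of privileged users, amount to a simple coverage test; a minimal illustrative sketch in Python, in which the user counts are hypothetical rather than drawn from any agency's report:

def meets_piv_targets(unpriv_piv, unpriv_total, priv_piv, priv_total):
    """Check PIV-card coverage against the OMB targets cited in the text:
    85 percent of unprivileged users and 100 percent of privileged users."""
    unpriv_ok = unpriv_piv / unpriv_total >= 0.85
    priv_ok = priv_piv / priv_total >= 1.0
    return unpriv_ok, priv_ok

# Hypothetical agency: 9,000 of 10,000 unprivileged and 180 of 200 privileged
# users authenticate with PIV cards.
print(meets_piv_targets(9_000, 10_000, 180, 200))   # (True, False)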
Ineffective implementation of authorization controls was also a widely reported access control weakness. Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. Agencies should apply the principle of least privilege, which requires users to be granted the most restrictive set of privileges needed to perform only the tasks that they are authorized to perform. Our analysis identified that all 24 agencies had weaknesses in implementing effective authorization controls, which accounted for 108 of the 516 access control weaknesses. For example, three agencies did not periodically review user access to ensure that access was appropriate for the user's job function. In addition, five agencies had active system accounts for separated employees. Without effective authorization controls, agencies cannot appropriately control user accounts or prevent unauthorized actions by authenticated system users. Controls over sensitive system resources are designed to ensure the confidentiality, integrity, and availability of system data, such as passwords and keys during transmission and storage. Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. Our analysis showed that more than half of the 24 agencies had weaknesses in protecting sensitive system resources. Of the 516 access control weaknesses reported, 37 were related to the protection of sensitive system resources at 13 of the 24 agencies. For example, three agencies did not effectively use encryption to protect sensitive data. If sensitive system resources are not adequately protected, an individual could gain access to capabilities that would allow the individual to bypass security features and, thereby, be able to read, modify, or destroy information or other computer resources. To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is necessary to determine what, when, and by whom specific actions have been taken on a system. Agencies do so by implementing software that provides an audit trail or logs of system activity that they can use to determine the source of a transaction or attempted transaction and to monitor users' activities. In fiscal year 2016, our analysis identified 172 auditing and monitoring weaknesses at 23 of the 24 agencies. For example, four agencies did not fully implement effective audit and monitoring controls. Two agencies had audit logs to monitor user activity, but did not review them on a consistent basis. In addition, one agency did not consistently identify, notify, or remediate security incidents to ensure incidents were resolved in a timely manner. Without auditing and monitoring system activity, agencies cannot identify indications of inappropriate or unusual activity, thereby hindering agencies' capability to detect, report, and respond to security incidents. Physical security controls help protect computer facilities and resources from espionage, sabotage, damage, and theft. Physical security controls include perimeter fencing, surveillance cameras, security guards, locks, and procedures for granting or denying individuals physical access to computing resources. Physical controls also include environmental controls such as smoke detectors, fire alarms, extinguishers, and uninterruptible power supplies. Considerations for perimeter security include controlling vehicular and pedestrian traffic.
In addition, visitors' access to sensitive areas is to be managed appropriately. The fewest access control weaknesses were identified in the area of physical security. In fiscal year 2016, our analysis identified 23 physical security weaknesses at 13 agencies, including storing switches associated with a data management system in a shared space accessible to people outside of the agency and not retrieving smart identification and PIV cards used to access federal facilities from separated employees. Without adequate physical security controls, agencies cannot restrict physical access to computer resources or protect them from intentional or unintentional loss or impairment. Overall, our analysis identified access control weaknesses at all 24 agencies. If agencies do not implement security measures to address access control weaknesses, they diminish the reliability of computerized data and increase the risk of destruction or inappropriate disclosure of data. Configuration management controls ensure that changes to information system resources are authorized and systems are configured and operated securely and as intended. Configuration management involves the identification and management of security features for all hardware, software, and firmware components of an information system at a given point. It also systematically controls changes to system configurations during the system's life cycle. These controls, which limit and monitor access to powerful programs and sensitive files associated with computer operations, include: configuration management policies, plans, and procedures; configuration identification; configuration change management; configuration monitoring; patch management; and emergency configuration change management. For fiscal year 2016, our analysis identified 223 configuration management weaknesses at 23 of the 24 CFO Act agencies. As shown in table 3, agencies exhibited the most weaknesses in the critical elements of configuration identification, configuration change management, and patch management.
Configuration Management Policies, Plans, and Procedures
Configuration management procedures should cover employee roles and responsibilities, change control and system documentation requirements, establishment of a decision-making structure, and configuration management training. In addition, configuration management should be included in an entity's systems development life cycle methodology, which details procedures that are to be followed when systems and applications are being designed, developed, and modified. Many of the 24 agencies did not have processes for developing, documenting, and implementing configuration management policies, plans, and procedures. In fiscal year 2016, our analysis identified that 16 of the 24 agencies had weaknesses in developing, documenting, and implementing configuration management procedures. For example, one agency had not developed configuration management standard operating procedures. Another agency had not developed secure baseline configuration guides for its systems. Further, agencies did not implement secure system design, development, and modification procedures. For example, one agency had a web application design flaw that allowed unauthorized users to read and write to the local file system using a vulnerability identified in a software licensing toolkit. Another agency's public-facing website was configured to display error messages that revealed the web server version number and the operating system.
Deficiencies related to configuration management procedures accounted for 25 of the total 223 configuration management deficiencies identified during our analysis. Without good configuration management, agencies cannot provide strict control over the implementation of system changes and, thus, minimize corruption to information systems. Configuration identification activities involve identifying, naming, and describing the physical and functional characteristics of a controlled item (for example, specifications, design, Internet protocol (IP) address, code, data element, architectural artifacts, and documents). Agencies should manage a current and comprehensive baseline inventory of hardware, software, and firmware, and it should be routinely validated for accuracy. Federal agencies had weaknesses reported in maintaining current configuration identification information. In fiscal year 2016, based on our analysis, 19 of the 24 agencies had weaknesses in maintaining configuration identification information. For example, at least three agencies did not have a complete inventory of the hardware, deployed software version, or software license information for the systems used throughout the agencies. Of the 223 configuration management deficiencies that our analysis identified, 56 were related to configuration identification. If agencies do not maintain a current and comprehensive baseline of hardware, software, and firmware, agencies cannot validate configuration information for accuracy, thereby hindering them from controlling changes made to a system. Configuration change management involves authorizing, testing, approving, tracking, and controlling all configuration changes. A formal change management process allows agencies to create an audit trail to clearly document and track configuration changes. Based on the reports we reviewed, most of the 24 agencies were not properly managing configuration changes. In fiscal year 2016, 20 of the 24 agencies had weaknesses in configuration change management processes, including failing to consistently implement change management procedures for authorizing, testing, and approving system changes; improperly documenting system change requests; and not tracking approved configuration baseline deviations or changes made to the configuration for verification purposes. Configuration change management accounted for 47 of the 223 configuration management deficiencies identified in our analysis. Without a formal configuration change management process, agencies cannot ensure that systems hardware and related programs operate as intended or that no unauthorized changes are introduced. Current configuration information should be routinely monitored for accuracy. Monitoring should address the current baseline and operational configuration of the hardware, software, and firmware that comprise the information system. In addition, security settings for network devices, operating systems, and infrastructure applications need to be monitored periodically to ensure that they have not been altered and that they are set in the most restrictive mode consistent with the information system operational requirements. Our analysis identified weaknesses in monitoring system configuration. 
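Configuration monitoring of this kind is commonly automated by comparing current settings against the approved baseline; a minimal illustrative sketch in Python, in which the setting names and values are hypothetical rather than drawn from any agency's baseline:

approved_baseline = {"password_min_length": 12, "tls_min_version": "1.2",
                     "remote_root_login": "disabled"}
current_settings = {"password_min_length": 8, "tls_min_version": "1.2",
                    "remote_root_login": "enabled"}

# Report any setting that has drifted from the approved baseline.
drift = {name: (expected, current_settings.get(name))
         for name, expected in approved_baseline.items()
         if current_settings.get(name) != expected}
for name, (expected, actual) in drift.items():
    print(f"{name}: expected {expected!r}, found {actual!r}")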
In fiscal year 2016, based on the reports we reviewed, 8 of the 24 agencies had weaknesses in configuration monitoring, including not auditing computer resources on a routine basis to ensure compliance with formally approved baseline standards and failing to review and verify the accuracy of system information. Of the 223 configuration management deficiencies identified in these reports, 13 were related to configuration monitoring. Without monitoring configuration information, agencies cannot adequately protect access paths between information systems. In addition, if agencies do not monitor system security settings, they cannot ensure that the systems have not been altered or that they are consistent with operational requirements. Software should be scanned and updated frequently to guard against known vulnerabilities. In addition, security software should be kept current by establishing effective programs for patch management, virus protection, and protection against other emerging threats. Lastly, software releases should be adequately controlled to prevent the use of noncurrent software. Based on the reports we analyzed, patch management was the most prevalent configuration management weakness. In fiscal year 2016, our analysis identified that 22 of the 24 agencies had 82 patch management weaknesses. For example, 7 agencies failed to install patches in a timely manner and 6 agencies continued to use software even though it was no longer supported by the vendor. Program changes may need to be performed on an emergency basis to keep a system operating. For example, some systems must be continuously available so that the operations they support are not interrupted. In these cases, missing a deadline or disrupting operations may pose a greater risk than temporarily suspending program change controls. However, due to the increased risk that errors or other unauthorized modifications could be introduced, emergency changes should be kept to a minimum. Based on the reports that we analyzed, none of the 24 agencies had weaknesses in appropriately documenting and approving emergency changes to the configuration. Without proper configuration controls, increased risk exists that security features on agency systems could be inadvertently or deliberately omitted or turned off, or that malicious code could be introduced. Segregation of duties provides reasonable assurance that incompatible duties are effectively separated and ensures that one individual cannot independently control key aspects of a computer-related operation. Allowing one individual such control would enable that person to take unauthorized actions or gain unauthorized access to assets or records. Critical elements to achieving adequate segregation include: (1) segregation of incompatible duties and establishment of related policies and (2) controlling employee activity. In fiscal year 2016, our analysis identified 49 weaknesses in segregation of duties controls at 22 of the 24 agencies. As shown in table 4, agencies exhibited the most weaknesses in the segregation of incompatible duties critical element.
Segregation of Incompatible Duties and Establishment of Related Policies
Federal internal control standards specify that key duties and responsibilities for authorizing, processing, recording, and reviewing transactions should be separated. Often, segregation of duties is achieved by splitting responsibilities between two or more organizational groups. Dividing responsibilities this way diminishes the likelihood that errors or wrongful acts will go undetected because the activities of one group or individual will serve as a check on the activities of the other.
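Checks for incompatible role combinations can also be automated; a minimal illustrative sketch in Python, in which the role names and the pairs treated as incompatible are hypothetical examples rather than any agency's actual policy:

# Pairs of roles that a single individual should not hold at the same time.
incompatible_pairs = {frozenset({"authorize_change", "implement_change"}),
                      frozenset({"security_officer", "compliance_reviewer"})}

assignments = {"user_a": {"authorize_change", "implement_change"},
               "user_b": {"implement_change"},
               "user_c": {"security_officer", "compliance_reviewer"}}

for user, roles in assignments.items():
    conflicts = [pair for pair in incompatible_pairs if pair <= roles]
    if conflicts:
        print(f"{user}: holds incompatible roles {sorted(map(sorted, conflicts))}")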
Agencies had weaknesses in identifying and segregating incompatible duties and establishing related policies. At least seven agencies did not properly segregate personnel responsibilities. For example, one agency combined the roles of the Deputy Chief Information Officer and the Chief Information Security Officer and assigned both to one individual. This meant that one individual performed security control activities at the same time that the person reviewed that activity for compliance with FISMA. Other weaknesses were related to the development of policies and procedures for segregating duties. Deficiencies related to the identification and segregation of duties accounted for 33 of the 49 total segregation-of-duties control deficiencies that we identified in our analysis. If agencies do not effectively segregate incompatible duties and establish related policies, they risk having one individual in control of critical stages of a process, thereby allowing that person to take unauthorized actions or gain unauthorized access to assets or records, possibly without detection. Supervision and review of employee activities on a computer system help make certain that users' activities are performed in accordance with prescribed procedures, that mistakes are corrected, and that the computer is used only for authorized purposes. In fiscal year 2016, our analysis identified 16 weaknesses in control over employee activity at 12 of the 24 agencies. Weaknesses reported included not preventing or detecting segregation of duties conflicts, failing to restrict access to system software, and not reviewing user activity for suspicious or malicious activity. If agencies do not adequately control personnel activities, they could allow mistakes to occur and go undetected and could facilitate unauthorized use of a computer. Without adequately segregated duties, agencies increase the risk that erroneous or fraudulent transactions could be processed, improper program changes could be implemented, or computer resources could be damaged or destroyed. System interruptions can result in the loss of the capability to process, retrieve, and protect electronically maintained information, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete information. Given the implications of system interruptions, agencies should have procedures for protecting information resources and minimizing the risk of unplanned interruptions. Agencies should also have a plan in place to recover critical operations should interruptions occur. The critical elements of contingency planning include: data and operations assessment; damage and interruption prevention; contingency plan development; and contingency plan testing. For fiscal year 2016, as shown in table 5, our analysis identified 106 contingency planning weaknesses. Agencies exhibited the most weaknesses in contingency planning and contingency plan testing. Agencies should assess the criticality and sensitivity of computerized operations and identify supporting resources. It is important that agencies analyze data and operations to determine which are the most critical and what resources are needed to recover and support them. In fiscal year 2016, our analysis identified 17 data and operations assessment weaknesses at 12 of the 24 agencies.
For example, four agencies' inspectors general reported that their agencies did not consider supply chain threats in their contingency planning. In addition, inspectors general at seven agencies reported that their agency did not incorporate business impact or business process analysis into development of the agencies' contingency planning. If agencies do not identify or prioritize critical data and operations or identify and analyze the resources supporting them, agencies cannot determine which resources merit the greatest protection and what contingency plans need to be made. Agencies should take steps to prevent and minimize potential damage and interruption to operations. For example, agencies can implement capabilities to restore data files, which may be impossible to recreate if lost. In addition, agencies can implement thorough backup procedures and install environmental controls. In fiscal year 2016, our analysis identified damage and interruption prevention weaknesses at 14 of the 24 agencies, including failing to retain incremental or full backups and not having alternate-site redundancy for key mission support information systems. Other weaknesses included not accurately documenting the alternate processing site and backup procedures. Deficiencies related to preventing damage and interruption accounted for 20 of the total 106 contingency planning deficiencies. If agencies do not adequately implement controls to prevent and minimize interruption, agencies risk losing or incorrectly processing data. According to NIST, contingency planning represents a broad scope of activities designed to sustain and recover critical IT services following an emergency. These plans should be clearly documented, communicated to affected staff, and updated to reflect current operations. In fiscal year 2016, our analysis identified 34 contingency planning weaknesses at 16 of the 24 agencies. For example, at least three agencies had not updated their contingency plans to reflect the current operating environment. Other contingency planning weaknesses included failure to ensure that business continuity and disaster recovery plans were in place. Without a comprehensive contingency plan in place, agencies can lose the capability to process, retrieve, and protect electronically maintained information, which can affect an agency's ability to accomplish its mission. Testing contingency plans is essential to determining whether they function as intended in an emergency situation. Through this testing, contingency plans can be substantially improved. Our analysis identified that 16 of the 24 agencies had weaknesses in testing their contingency plans. At least five agencies failed to periodically test contingency plans for their systems and one did not provide evidence that it tested contingency plans for all its systems. Of the 106 contingency planning deficiencies that we identified in our analysis, 35 were related to contingency plan testing. If agencies do not test their contingency plans, agencies cannot identify weaknesses in the contingency plans or assess how well employees have been trained to carry out their roles and responsibilities in a disaster situation. Overall, without effective contingency planning, agencies are unable to ensure that their systems can operate effectively without excessive interruption and can be recovered as quickly and effectively as possible following a service disruption.
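Several of the contingency planning elements above, such as backup retention and periodic plan testing, lend themselves to simple automated checks; a minimal illustrative sketch in Python, in which the system names, dates, and thresholds are hypothetical:

from datetime import date

today = date(2016, 9, 30)
systems = {
    "payroll":   {"last_full_backup": date(2016, 9, 28), "last_plan_test": date(2016, 3, 15)},
    "case_mgmt": {"last_full_backup": date(2016, 8, 1),  "last_plan_test": None},
}

for name, record in systems.items():
    backup_age = (today - record["last_full_backup"]).days
    if backup_age > 7:
        print(f"{name}: last full backup is {backup_age} days old")
    if record["last_plan_test"] is None or (today - record["last_plan_test"]).days > 365:
        print(f"{name}: contingency plan not tested within the past year")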
An agencywide security program, as required by FISMA, provides a framework for assessing and managing risk, including developing and implementing security policies and procedures, conducting security awareness training, monitoring the adequacy of the entity's computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. The critical elements for security management include: security management program establishment; risk assessment and validation; security control documentation and implementation; security awareness training; security program monitoring; information security weakness remediation; and contractor system review. For fiscal year 2016, our analysis identified 623 security management weaknesses across the 24 CFO Act agencies. As table 6 shows, agencies exhibited the most weaknesses in the security management program establishment and security program monitoring critical elements. An agencywide information security management program is the foundation of a security control structure and a reflection of senior management's commitment to addressing security risks. The security management program should cover all major systems and facilities and outline the duties of those who are responsible for overseeing security and those who own, use, or rely on an agency's computer resources. Agencies should have a security management structure in place and all policies, plans, and procedures should be kept up-to-date. In fiscal year 2016, based on the reports we analyzed, 23 agencies had 161 weaknesses in establishing a security management program. These weaknesses included failing to implement an agencywide risk management framework for information security, not ensuring security management policies and procedures are updated, and not designating permanent security management roles and responsibilities. If agencies do not establish a security management program, they may lack a framework and continuous cycle of activity for assessing risk, developing and implementing effective security procedures, and monitoring the effectiveness of these procedures. A comprehensive risk assessment should be the starting point for developing or modifying an agency's security policies and plans. Risk assessments should consider threats and vulnerabilities at the agencywide, system, and application levels, and consider risks to data confidentiality, integrity, and availability. In addition, NIST guidance states that systems should be granted authorization to operate after an authorizing official reviews the system authorization package and determines that the risk associated with operating the system is acceptable. Our analysis identified that 20 of the 24 agencies had weaknesses in assessing and validating risks. For example, at least five agencies allowed systems to continue to operate, even though the system authorizations to operate (ATOs) had expired. Also, risk assessment and validation deficiencies accounted for 70 of the total 623 security management deficiencies. Without a process for periodically assessing and validating risks, agencies cannot ensure that all threats and vulnerabilities are identified and considered, that the greatest risks are addressed, and that appropriate decisions are made regarding which risks to accept or mitigate through security controls.
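The expired-authorization condition noted above is straightforward to flag from a system inventory; a minimal illustrative sketch in Python, in which the system names and dates are hypothetical:

from datetime import date

today = date(2016, 9, 30)
ato_expirations = {"grants_portal": date(2017, 5, 1),
                   "hr_system": date(2016, 2, 14),
                   "legacy_mainframe": date(2015, 11, 30)}

# Flag systems still operating past their authorization-to-operate date.
expired = {name: exp for name, exp in ato_expirations.items() if exp < today}
for name, exp in sorted(expired.items()):
    print(f"{name}: authorization to operate expired on {exp.isoformat()}")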
Security Control Documentation and Implementation
Security control policies and procedures should consider risk, address general and application controls, and ensure that users can be held accountable for their actions. They should also be documented and approved by management. In fiscal year 2016, 22 of the 24 agencies had 81 security control implementation weaknesses based on the reports we analyzed. For example, at least two agencies failed to develop or document security control procedures and at least three agencies did not update security control procedures. In addition, at least one agency did not implement security control procedures. If agencies do not develop, document, update, or implement security control procedures, they cannot ensure that information security is addressed throughout the life cycle of each agency information system. An ongoing security awareness program should be implemented that includes first-time awareness training for all new employees, contractors, and users; and periodic refresher training for all employees, contractors, and users. In addition, specialized training for those individuals with significant security responsibilities should be offered. Further, all affected personnel should receive and acknowledge understanding of the organization's security policies detailing rules and expected behaviors. In fiscal year 2016, our analysis identified that 20 of the 24 agencies had weaknesses in implementing a security training program. For example, at least four agencies did not track the status of role-based security training for personnel with significant information security responsibilities. Of the total 623 security management deficiencies identified in our analysis, 84 were related to security awareness training. Without an effective security training program, agencies risk having employees or contractors inadvertently or intentionally compromising security. An important element of risk management is ensuring that policies and controls intended to reduce risk are effective on an ongoing basis. Effective monitoring involves agencies performing tests of information security controls to determine whether they are appropriately designed and operating effectively. It should also include periodically assessing the appropriateness of security policies and the agency's compliance with them. In fiscal year 2016, our analysis identified that 21 of the 24 agencies had weaknesses in monitoring their security program, including failing to implement continuous monitoring that requires the validation of compliance with security requirements and not conducting risk management that monitors the selection, implementation, and assessment of security controls. Deficiencies related to security program monitoring accounted for 113 of the total 623 security management deficiencies. Without effectively monitoring agency security programs, agencies cannot ensure that security policies and controls are reducing risk as intended. Agencies should have processes for effectively remediating information security weaknesses. When weaknesses are identified, the related risks should be reassessed, appropriate corrective or remediation actions taken, and follow-up monitoring performed to make certain that the corrective actions are effective. In addition, agencies are to develop plans of action and milestones (POA&Ms) that describe corrective and remediation actions needed to address identified information security weaknesses. These plans should be based on findings from security control assessments, security impact analyses, continuous monitoring of activities, audit reports, and other sources.
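POA&M tracking of the kind described here reduces, in practice, to monitoring open weaknesses against their scheduled milestones; a minimal illustrative sketch in Python, in which the entries and dates are hypothetical:

from datetime import date

today = date(2016, 9, 30)
poam_items = [
    {"id": "POAM-014", "weakness": "unsupported database version", "due": date(2016, 6, 30), "status": "open"},
    {"id": "POAM-027", "weakness": "incomplete audit log review", "due": date(2016, 12, 31), "status": "open"},
    {"id": "POAM-031", "weakness": "missing contingency plan test", "due": date(2016, 3, 31), "status": "closed"},
]

# List open weaknesses that are past their planned remediation milestone.
overdue = [item for item in poam_items
           if item["status"] == "open" and item["due"] < today]
for item in sorted(overdue, key=lambda i: i["due"]):
    days_late = (today - item["due"]).days
    print(f'{item["id"]}: {item["weakness"]} is {days_late} days past its milestone')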
Twenty-three of the 24 agencies did not have effective processes for remediating information security weaknesses. For example, at least 10 agencies did not remediate identified information security weaknesses in a timely manner. Of those 10 agencies, at least 7 did not use or effectively manage POA&Ms to track, prioritize, and remediate information security weaknesses. Of the 623 security management weaknesses identified in our analysis, we determined that 58 were related to information security weakness remediation. If agencies do not remediate information security weaknesses in a timely manner or use POA&Ms to track the status of identified weaknesses, agencies are exposed to increased risks that nefarious actors will exploit the weaknesses to gain unauthorized access to information resources. Appropriate policies and procedures should be developed, implemented, and monitored to ensure that the activities performed by third parties are documented, agreed to, implemented, and monitored for compliance. In addition, checks should be performed periodically to ensure that the procedures are correctly applied and consistently followed, including the security of relevant contractor systems and outsourced software development. In fiscal year 2016, our analysis identified 56 weaknesses related to contractor system reviews at 20 of the 24 agencies, including not identifying and maintaining a current system inventory of contractor-operated systems, failing to document or consistently perform procedures for monitoring contractor-operated systems, and failing to perform a formal security assessment of external systems. Without ensuring that external systems are adequately secure, agencies risk having contractors introduce information security risks to their information and systems. Overall, without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, or improperly implemented; and controls may be inconsistently applied. Such conditions may lead to insufficient protection of sensitive or critical resources and disproportionately high expenditures for controls over low-risk resources. Our work at federal agencies continues to highlight information security deficiencies in both financial and nonfinancial systems. We have made hundreds of recommendations to agencies to address these security control deficiencies, but many have not yet been fully implemented. The following examples describe the types of risks we found at federal agencies, our recommendations, and the agencies’ responses to our recommended actions. In August 2016, we reported that the Food and Drug Administration (FDA), an agency of the Department of Health and Human Services, had a significant number of security control weaknesses that jeopardize the confidentiality, integrity, and availability of its information systems and industry and public health data. Specifically, FDA had not fully or consistently implemented access controls, which are intended to prevent, limit, and detect unauthorized access to computing resources. FDA also had weaknesses in other controls, such as those intended to manage the configurations of security features on and control changes to hardware and software; plan for contingencies, including system disruptions and their recovery; and protect media such as tapes, disks, and hard drives to ensure information on them was “sanitized” and could not be retrieved after the hardware was discarded.
We made 15 recommendations to FDA to fully implement its agencywide information security program. We also recommended that FDA take 166 specific actions to resolve weaknesses in information security controls. The department concurred with our recommendations, has implemented 68 of them, and stated that it is working to address all the recommendations as quickly as possible. The department also stated that FDA has acquired third-party expertise to assist in these efforts to immediately address the recommendations. In May 2016, we reported that the National Aeronautics and Space Administration, Nuclear Regulatory Commission, Office of Personnel Management, and the Department of Veterans Affairs had not always effectively implemented access controls over selected high-impact systems. We reported that weaknesses at these agencies also existed in patching known software vulnerabilities and planning for contingencies. An underlying reason for these weaknesses is that the agencies had not fully implemented key elements of their information security programs. We made recommendations to each of these agencies to fully implement key elements of their information security programs. The agencies generally concurred with the recommendations, with the exception of the Office of Personnel Management. It disagreed with our recommendation regarding the evaluation of security control assessments to ensure comprehensive testing of technical controls. In March 2016, we reported that the Internal Revenue Service had weaknesses in information security controls that limited its effectiveness in protecting the confidentiality, integrity, and availability of financial and sensitive taxpayer data. Specifically, the agency had not always (1) implemented controls for identifying and authenticating users, such as applying proper password settings; (2) appropriately restricted access to servers; (3) ensured that sensitive user authentication data were encrypted; (4) audited and monitored systems to ensure compliance with agency policies; and (5) ensured access to restricted areas was appropriate. In addition, unpatched and outdated software exposed it to known vulnerabilities. An underlying reason for these weaknesses is that the Internal Revenue Service had not effectively implemented elements of its information security program. We made two recommendations to more effectively implement security-related policies and plans. The Internal Revenue Service agreed with our recommendations and stated that it would review them to ensure that its actions include sustainable fixes that implement appropriate security controls balanced against information technology and human capital resource limitations. Inspectors general evaluations of agency information security programs, including their respective agencies’ policies and practices, determined that most agencies did not have effective information security program functions in fiscal year 2016. The inspectors general evaluated the information security programs for the 24 CFO Act agencies for fiscal year 2016 and determined that only 7 of the 24 agencies had information security programs with any functions considered to be effective. Further, inspectors general from 20 of the 23 civilian agencies cited information security as a “major management challenge” for their respective agency. The inspectors general made numerous recommendations to address these and other issues.
Appendix III provides an overview of the methodology for the inspector general evaluations of their agencies’ information security programs and the results of their reviews by agency for fiscal year 2016. As required in FISMA, OMB, DHS, NIST, and the agencies’ inspectors general have ongoing and planned initiatives to support the act’s implementation across the federal government. OMB, among other things, oversaw and reported to Congress on agencies’ implementation of information security policies, standards, and guidelines. DHS oversaw and assisted government efforts to provide adequate, risk-based, cost-effective cybersecurity, and NIST developed security standards and guidelines for agencies. Further, agencies’ inspectors general conducted annual independent assessments to determine the effectiveness of their respective agencies’ information security programs and practices in accordance with evaluation guidance developed by OMB. However, the oversight agencies do not have plans or a schedule to evaluate the effectiveness of the maturity model developed for inspectors general to evaluate their agencies’ information security programs. FISMA requires that OMB submit a report to Congress no later than March 1 of each year on the effectiveness of agencies’ information security policies and practices during the preceding year. This report is to include: a summary of incidents described in the agencies’ annual reports; a description of the threshold for reporting major information security incidents; a summary of the results of the annual IG evaluations of each agency’s information security program and practices; an assessment of each agency’s compliance with NIST information security standards; and an assessment of agency compliance with OMB data breach notification policies and procedures. Although OMB did not meet the deadline of March 1, its annual report to Congress for fiscal year 2016 met the other requirements. Specifically, its report provided an overview of federal cybersecurity, the results of inspectors general evaluations, summaries of agencies’ cybersecurity performance, including security incidents reported to US-CERT, and the results of agencies’ privacy program performance. FISMA also required that OMB develop and oversee the implementation of policies, principles, standards, and guidelines on information security. In addition, FISMA required that OMB amend or revise Circular A-130, its policy regarding managing federal information, no later than December 18, 2015, a year after FISMA was enacted. Since we reported in 2015 on FISMA implementation, OMB has developed or revised policies and overseen their implementation as follows: OMB updated and released the revised OMB Circular A-130, Managing Information as a Strategic Resource, for comment in October 2015 and released the final version in July 2016, approximately 7 months after the statutory deadline. This circular, last revised in 2000, established general policy for the planning, budgeting, governance, acquisition, and management of federal information, personnel, equipment, funds, information technology resources and supporting infrastructure and services. According to OMB, the latest revised circular reflects changes in law and advances in technology, and represents a shift from viewing security and privacy requirements as compliance exercises to understanding them as crucial elements of a comprehensive, strategic, and continuous risk-based program at federal agencies.
In October 2015, OMB issued Memorandum M-16-04, Cybersecurity Strategy and Implementation Plan (CSIP), a guide for federal agencies instructing them to take actions identified as needed through the 2015 30-day Cybersecurity Sprint. The CSIP’s key actions included directing agencies to identify their high-value assets and critical system architecture in order to understand the potential impact to those assets and architecture from an adverse cyber incident. The CSIP indicated that progress on the identified actions will be tracked through mechanisms such as comprehensive reviews of agency-specific cybersecurity posture (CyberStats). In November 2016, OMB issued Memorandum M-17-05, which included an updated definition of a major information security incident for cyber incident reporting to significantly raise the threshold for an incident to be reported as major. It also updated breach notification policies and requirements for notification to congressional committees and affected individuals. In the updated policy, a breach of personally identifiable information considered to be a major incident, including unauthorized access to 100,000 or more individuals’ PII (an increase from the 10,000 threshold in prior guidance), must be reported to Congress within seven days. In addition, OMB’s guidance included an incident reporting validation process intended to improve the overall quality of incident data reported. Further, FISMA directs OMB to oversee agency compliance with requirements to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification or destruction of information or information systems. To fulfill this responsibility, OMB, in coordination with DHS, conducted 24 CyberStat reviews in fiscal year 2016 to help agencies develop action items that address information security risks, identify areas for targeted assistance, and track performance throughout the year. DHS reported that CyberStat reviews were conducted at 16 CFO Act agencies and 7 non-CFO Act agencies during fiscal year 2016 and resulted in 186 cybersecurity-related recommendations that agencies implemented or were in the process of implementing. In addition, OMB conducted a CyberStat review of the continuous diagnostics and mitigation (CDM) program. These reviews revealed cybersecurity issues across the agencies such as high turnover in information technology leadership positions and other workforce challenges, funding mechanisms adversely impacting agencies’ cybersecurity posture, immature continuous monitoring programs, and challenges meeting goals for implementing strong authentication methods (e.g., PIV cards). Under FISMA, DHS, in consultation with OMB, is responsible for carrying out seven activities, including developing information security policies and practices, such as binding operational directives, and overseeing their implementation. In addition, DHS is required to monitor agency implementation of information security policies and practices, meet with senior agency officials to assist with their implementation, and provide operational and technical assistance to agencies. As required by FISMA, DHS had developed four binding operational directives as of July 2017.
These directives instruct agencies to: mitigate critical vulnerabilities discovered by DHS’s National Cybersecurity & Communications Integration Center (NCCIC) through its scanning of agencies’ Internet-accessible systems; participate in risk and vulnerability assessments as well as security architecture assessments conducted by DHS on agencies’ high-value assets; address several urgent vulnerabilities in network infrastructure devices identified in a NCCIC analysis report within 45 days of the directive’s issuance; and report cyber incidents and comply with annual FISMA reporting requirements. DHS also provided common security capabilities for agencies in accordance with the FISMA requirement that the department deploy technology, as requested by agencies, to help agencies continuously diagnose and mitigate against cyber threats and vulnerabilities. For example, the National Cybersecurity Protection System (NCPS) (which includes EINSTEIN) and the CDM program are ongoing DHS initiatives to help secure agency information systems. DHS is accelerating the deployment of CDM and EINSTEIN capabilities to all participating federal agencies to enhance detection of cyber vulnerabilities and protection from cyber threats. NCPS was developed to be one of the tools to aid federal agencies in mitigating information security threats. The system is intended to provide DHS with the capability to provide four cyber-related services to federal agencies: intrusion detection, intrusion prevention, analytics, and information sharing. In January 2016, we reported that NCPS supported a variety of data analytical tools but had limited intrusion prevention and detection capabilities. In addition, while DHS had developed metrics for measuring the performance of NCPS, the department did not gauge the quality, accuracy, or effectiveness of the system’s intrusion detection and prevention capabilities. CDM is to provide federal departments and agencies with commercial off-the-shelf capabilities and tools that identify cybersecurity risks on an ongoing basis, prioritize these risks based upon potential impacts, and enable cybersecurity personnel to mitigate the most significant problems first. DHS and the General Services Administration have partnered to implement a blanket purchase agreement available to government entities to acquire and implement CDM tools. In November 2016, DHS awarded a contract for phase 2 of CDM designed to strengthen policies and practices for the authentication of users. According to FISMA, NIST is to develop information security standards and guidelines, in coordination with OMB and DHS. Specifically, NIST’s Computer Security Division is responsible for developing cybersecurity standards, guidelines, tests, and metrics for the protection of federal information systems. NIST has developed information security guidelines for federal agencies. Specifically, NIST issued a draft of the revised Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework) in January 2017 in response to feedback and questions received after the original framework’s release. The revised framework includes a new section on cybersecurity measurement, an expanded explanation of using the framework for cyber supply chain risk management, and refinements to authentication, authorization, and identity proofing policies within access controls. In addition, in May 2017, NIST released draft Cybersecurity Framework implementation guidance. 
The guidance provides federal agencies with approaches to leveraging the framework to address common cybersecurity-related responsibilities. The implementation guidance is intended to assist federal agencies as they develop, implement, and continuously improve their cybersecurity risk management programs. Further, in August 2017, NIST released the initial draft of Special Publication 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations. According to NIST, the update provides a comprehensive set of safeguarding measures for all types of computing platforms and includes security and privacy controls to protect the critical and essential operations and assets of organizations and the personal privacy of individuals. Among the changes in the updated version are the integration of different risk management and cybersecurity approaches including the Cybersecurity Framework and the clarification of the relationship between security and privacy to improve the selection of the appropriate risk mitigating controls. FISMA requires that federal agencies’ inspectors general conduct annual independent evaluations to determine the effectiveness of the information security program and practices of their respective agencies based on annually issued OMB guidance. These evaluations are to: test the effectiveness of information security policies, procedures, and practices of a subset of agency information systems, and assess the effectiveness of an agency’s information security policies, procedures, and practices. We previously reported OMB’s FISMA reporting guidance for the inspectors general was not complete and resulted in inconsistent responses to questions in their evaluations. The reporting guidance lacked defined criteria for inspectors general to answer questions about their agencies’ information security program components and arrive at an evaluation of the program’s effectiveness. We recommended that OMB, DHS, the CIO Council, and the Council of the Inspectors General on Integrity and Efficiency (CIGIE) enhance reporting guidance to the inspectors general to achieve more consistent and comparable evaluations. In fiscal year 2015, CIGIE, in coordination with DHS, OMB, NIST, and other key stakeholders, began developing a security capability maturity model as a methodology to provide an in-depth assessment of agency information security programs. The purpose of the maturity model is to: summarize the status of agencies’ information security programs; provide status about what has been accomplished and what still needs to be implemented to improve the information security program to the next maturity level, and help ensure consistency across the IG annual FISMA reviews. The maturity model provides metrics to be used as criteria to evaluate an agency’s information security performance areas or domains defined in the annual OMB guidance to the inspectors general for FISMA evaluations. The inspectors general have implemented the model in phases during their assessments of agency information security programs. In fiscal year 2015, OMB’s guidance with reporting metrics directed the inspectors general to use the security capability maturity model to evaluate only one information security function, their agencies’ information security continuous monitoring process. 
In fiscal year 2016, the reporting metrics expanded the use of the security capability maturity model for inspectors general to evaluate their agencies’ incident response, as well as information security continuous monitoring programs. OMB, in consultation with DHS, the CIO Council, and CIGIE, issued fiscal year 2017 FISMA reporting metrics and guidance for the inspectors general that encompass the full implementation of the security capability maturity model for all security functions. The guidance provides reporting requirements across key areas to be addressed in the independent assessment of agencies’ information security programs. Further, the guidance instructs the inspectors general to evaluate their agencies’ information security programs and assess the effectiveness of the programs using the security capability maturity model. It also states that October 31, 2017, is the deadline for agencies to submit their inspectors general metrics to DHS. Applying the maturity model across all the security functions is to help promote consistent and comparable outcomes from the inspectors general independent annual evaluations. Federal guidance and other management practices call for the evaluation of management tools to ensure they are effective. Evaluations of effectiveness should entail assessing whether the tool produces accurate results, can be consistently applied, and is useful in achieving agency objectives. OMB reported that the inspectors general and OMB plan to continue to work together to refine the assessment process and provide methodologies for comparing performance across the government. An official from CIGIE stated that after the full implementation of the security capability maturity model, OMB intends for future guidance to the inspectors general to incorporate measures to address the effectiveness of the model and its use in evaluating agency information security programs. However, the fiscal year 2017 guidance does not include a plan or schedule to determine whether using the security capability maturity model will provide useful results that are consistent and comparable. Until an evaluative component is incorporated into the implementation of the maturity model, OMB will not have reasonable assurance that the inspectors general evaluations of agency information security programs will have consistent and comparable results across all federal agencies as intended. While federal agencies are working to carry out their FISMA-assigned responsibilities, they continue to experience information security program deficiencies and security control weaknesses in all control areas, including access controls, configuration management, and segregation of duties. In addition, the inspectors general evaluations of the information security program and practices at their agencies determined that most agencies did not have effective information security program functions. We are not making new recommendations to address these weaknesses because we and the inspectors general have previously made hundreds of recommendations. Until agencies correct longstanding control deficiencies and address our and agency inspectors general’s recommendations, federal IT systems will remain at increased and unnecessary risk of attack or compromise. We continue to monitor the agencies’ progress on those recommendations.
Although the inspectors general have continued to implement the security capability maturity model to help ensure more consistency in their program evaluations, OMB, DHS, the CIO Council, and CIGIE have not developed a plan and schedule to evaluate whether the model has achieved useful results that are consistent and comparable as intended. Further, if OMB, DHS, the CIO Council, and CIGIE are unable to determine whether using the capability maturity model yields consistent and comparable results, they will not have reasonable assurance that agency information security programs have been consistently evaluated. We recommend that the Director of the Office of Management and Budget, in consultation with the Secretary of Homeland Security and the Chief Information Officers Council, evaluate whether the full implementation of the capability maturity model developed by the Council of the Inspectors General on Integrity and Efficiency ensures that consistent and comparable results are achieved across all federal agencies. (Recommendation 1) We provided a draft of this report to OMB; the Departments of Agriculture, Commerce, Defense, Homeland Security, Housing and Urban Development, and Labor; the National Aeronautics and Space Administration; and the Nuclear Regulatory Commission. Of these agencies, OMB’s Program Analyst from the Office of the Federal Chief Information Officer provided comments via e-mail stating that the agency generally concurred with our recommendation. The official added that OMB will continue to work with the Department of Homeland Security, the Chief Information Officers Council, and the Council of the Inspectors General on Integrity and Efficiency to enhance the capability maturity model, and develop a standard methodology that allows for consistent and comparable results across all federal agencies. In addition, we received written comments from one agency—the Department of Housing and Urban Development—in which it stated that the department had no comment on the draft report. The department added, however, that it is committed to following the established federal laws and guidance and ensuring that its information security program requirements are properly implemented and documented. The department’s comments are reprinted in appendix IV. Further, via e-mail, officials of four agencies—the Department of Agriculture’s Senior Advisor for Oversight and Compliance in the Office of the Chief Information Officer; the Department of Labor’s representative from the Office of the Assistant Secretary for Policy; the National Aeronautics and Space Administration’s audit liaison program manager from the Mission Support Directorate; and the Nuclear Regulatory Commission’s executive technical assistant in the Office of the Executive Director for Operations—responded that their agencies did not have any comments on the draft report. Finally, in addition to OMB, the Department of Defense; the Department of Homeland Security; and the Department of Commerce’s National Institute of Standards and Technology provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the Director of the Office of Management and Budget, the Secretary of Homeland Security, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or wilshuseng@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives were to evaluate (1) the adequacy and effectiveness of federal agencies’ information security policies and practices and (2) the extent to which agencies with governmentwide responsibilities have implemented their requirements under the Federal Information Security Management Act of 2002 as amended by the Federal Information Security Modernization Act of 2014 (FISMA). To assess the adequacy and effectiveness of agencies’ information security policies and practices, we analyzed our, agency, and inspectors general (IG) information security-related reports that were issued from October 2015 through January 2017 and covered agencies’ fiscal year 2016 security efforts. We analyzed, categorized, and summarized weaknesses identified in these reports using the five major categories of information security general controls identified in our Federal Information System Controls Audit Manual: (1) access controls, (2) configuration management controls, (3) segregation of duties, (4) contingency planning, and (5) security management controls. We also analyzed, categorized, and summarized the annual FISMA data submissions for fiscal year 2016 by each agency’s inspector general. In addition, we analyzed financial reports for fiscal year 2016 for the 23 civilian federal agencies covered by the Chief Financial Officers Act and Office of Management and Budget’s (OMB) 2017 annual report to Congress on FISMA implementation. Using cybersecurity spending data provided in OMB’s annual FISMA report to Congress and information technology (IT) spending data available on the IT Dashboard, we determined the percentage of IT spending that agencies allotted to IT security in fiscal year 2016. For the first objective, we also determined the reliability of agency-submitted data at six agencies. To select these agencies for each of our prior three FISMA evaluation reports, we sorted the 24 major agencies from highest to lowest using the total number of systems each agency had reported each year; separated them into even categories of large, medium, and small agencies; then selected the median two agencies from each category. For fiscal year 2016, the Departments of Agriculture, Defense, Housing and Urban Development, and Labor; the National Aeronautics and Space Administration; and the Nuclear Regulatory Commission were the remaining agencies not selected in prior reporting cycles. To assess the reliability of the agency-submitted data, we collected and analyzed documentation of agencies’ FISMA reporting processes to determine if they were effective in ensuring the quality of the information reported for FISMA. We also conducted interviews with agency officials to get an understanding of the quality control processes in place to produce the annual FISMA reports. As appropriate, we also interviewed officials from OMB, the Department of Homeland Security (DHS), and the National Institute of Standards and Technology (NIST). While not generalizable to all agencies, the information we collected and analyzed provided insights into various processes in place to produce FISMA reports. Based on this assessment, we determined that the data were sufficiently reliable for the purposes of our objective.
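As a rough sketch of the two calculations described above (the share of IT spending devoted to security, and the selection of the median two agencies from each size category), the logic might look like the following; the agency names, system counts, and spending figures are placeholders, not data from our review:

    # Illustrative sketch of the calculations described above. Agency names,
    # system counts, and spending figures are placeholders, not report data.

    def security_spending_share(security_spending, it_spending):
        # Percentage of total IT spending allotted to IT security.
        return 100.0 * security_spending / it_spending

    def median_two(group):
        # Pick the two agencies nearest the median system count in a group.
        ordered = sorted(group, key=lambda agency: agency[1], reverse=True)
        mid = len(ordered) // 2
        return [name for name, _ in ordered[mid - 1:mid + 1]]

    # Sort 24 agencies by reported system count, split them into even thirds
    # (large, medium, small), then take the median two from each third.
    counts = [("Agency %d" % i, systems) for i, systems in
              enumerate([510, 480, 430, 400, 350, 320, 300, 280,
                         260, 240, 220, 200, 190, 180, 170, 160,
                         150, 140, 130, 110, 90, 70, 50, 30], start=1)]
    counts.sort(key=lambda agency: agency[1], reverse=True)
    thirds = [counts[0:8], counts[8:16], counts[16:24]]
    selected = [name for group in thirds for name in median_two(group)]
    print(selected)                              # six agencies, two per size category
    print(security_spending_share(12.0, 80.0))   # e.g., 15.0 percent of IT spending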
To evaluate the extent to which agencies with governmentwide responsibilities have implemented their requirements under FISMA, we analyzed the provisions of the 2002 and 2014 acts to identify the responsibilities for overseeing and providing guidance for agency information security. We collected documentation of coordination between DHS, OMB, and the IGs to update and refine the FISMA reporting metrics. We also identified DHS-issued binding operational directives, newly issued NIST publications, and other governmentwide initiatives to improve federal information security. In addition, we interviewed agency officials to collect information and documentation of their interaction with OMB and DHS for FISMA activities. We conducted this performance audit from October 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Cyber Threats and Exploits
Failures in displays, sensors, controllers, and information technology hardware responsible for data storage, processing, and communications.
Failures in temperature/humidity controllers or power supplies.
Failures in operating systems, networking, and general-purpose and mission-specific applications.
Events beyond an entity’s control such as fires, floods, tsunamis, tornados, hurricanes, and earthquakes.
Natural events beyond the entity’s control that are not considered disasters (e.g., sunspots).
Failure or outage of telecommunications or electrical power.
Failures resulting from erroneous accidental actions taken by individuals (both system users and administrators) in the course of executing their everyday responsibilities.
Hackers break into networks for the challenge, revenge, stalking, or monetary gain, among other reasons.
Hacktivists are ideologically motivated actors who use cyber exploits to further political goals.
Insiders (e.g., disgruntled organization employees, including contractors) may not need a great deal of knowledge about computer intrusions because their position within the organization often allows them to gain unrestricted access and cause damage to the targeted system or to steal system data. These individuals engage in purely malicious activities and should not be confused with nonmalicious insider accidents.
Nations, including nation-state, state-sponsored, and state-sanctioned programs, use cyber tools as part of their information-gathering and espionage activities. In addition, several nations are aggressively working to develop information warfare doctrine, programs, and capabilities.
Criminal groups seek to attack systems for monetary gain. Specifically, organized criminal groups use cyber exploits to commit identity theft, online fraud, and computer extortion.
Terrorists seek to destroy, incapacitate, or exploit critical infrastructures in order to threaten national security, cause mass casualties, weaken the economy, and damage public morale and confidence.
Unknown malicious outsiders are threat sources/agents that, due to a lack of information, remain anonymous and are unable to be classified as one of the five types of threat sources/agents listed above.
Descriptions of common types of cyber exploits:
A method by which threat actors exploit the vulnerabilities of websites frequented by users of the targeted system. Malware is then injected into the targeted system via the compromised websites.
A digital form of social engineering that uses authentic-looking e-mails, websites, or instant messages to get users to download malware, open malicious attachments, or open links that direct them to a website that requests information or executes malicious code.
An exploit that takes advantage of a system’s insufficient user authentication and/or any elements of cybersecurity supporting it, to include not limiting the number of failed login attempts, the use of hard-coded credentials, and the use of a broken or risky cryptographic algorithm.
An exploit that takes advantage of the security vulnerabilities of trusted third parties to gain access to an otherwise secure system.
An exploit that involves the intentional transmission of more data than a program’s input buffer can hold, leading to the deletion of critical data and subsequent execution of malicious code.
An exploit that takes advantage of a network employing insufficient encryption when either storing or transmitting data, enabling adversaries to read and/or modify the data stream.
Structured query language (SQL) injection: an exploit that involves the alteration of a database search in a web-based application, which can be used to obtain unauthorized access to sensitive information in a database resulting in data loss or corruption, denial of service, or complete host takeover.
An exploit that takes advantage of a system’s inability to properly neutralize special elements used in operating system commands, allowing adversaries to execute unexpected commands on the system by either modifying already evoked commands or evoking their own.
An exploit that uses third-party web resources to run lines of programming instructions (referred to as scripts) within the victim’s web browser or scriptable application. This occurs when a user, using a browser, visits a malicious website or clicks a malicious link. The most dangerous consequences can occur when this method is used to exploit additional vulnerabilities that may permit an adversary to steal cookies (data exchanged between a web server and a browser), log key strokes, capture screen shots, discover and collect network information, or remotely access and control the victim’s machine.
An exploit that takes advantage of an application that cannot, or does not, sufficiently verify whether a well-formed, valid, consistent request was intentionally provided by the user who submitted the request, tricking the victim into executing a falsified request that results in the system or data being compromised.
An exploit that seeks to gain access to files outside of a restricted directory by modifying the directory pathname in an application that does not properly neutralize special elements (e.g., ‘..’, ‘/’, ‘../’) within the pathname.
An exploit where malicious code is inserted that leads to unexpected integer overflow, or wraparound, which can be used by adversaries to control looping or make security decisions in order to cause program crashes, memory corruption, or the execution of arbitrary code via buffer overflow.
Adversaries manipulate externally controlled format strings in print-style functions to gain access to information and execute unauthorized code or commands.
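To make one of the descriptions above concrete, the sketch below illustrates, in simplified form, how a structured query language (SQL) injection alters a database search when user input is spliced directly into the query, and how a parameterized query treats the same input strictly as data. The table and column names are invented for illustration, and additional exploit descriptions continue below.

    # Simplified illustration of the SQL injection exploit described above, and
    # the parameterized query that neutralizes it. Table and column names are
    # invented for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE records (account TEXT, owner TEXT)")
    conn.execute("INSERT INTO records VALUES ('A-100', 'Jane Doe')")

    user_input = "' OR '1'='1"   # attacker-supplied search term

    # Vulnerable: splicing the input into the query alters the search so that
    # it matches every row in the table.
    vulnerable = "SELECT * FROM records WHERE owner = '" + user_input + "'"
    print(conn.execute(vulnerable).fetchall())            # leaks all records

    # Safer: a parameterized query treats the input strictly as data.
    safe = "SELECT * FROM records WHERE owner = ?"
    print(conn.execute(safe, (user_input,)).fetchall())   # returns no rows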
An exploit where the victim is tricked into selecting a URL (website location) that has been modified to direct them to an external, malicious site that may contain malware that can compromise the victim’s machine.
Similar to classic buffer overflow, but the buffer that is overwritten is allocated in the heap portion of memory, generally meaning that the buffer was allocated using a memory allocation routine, such as “malloc()”.
An exploit that takes advantage of insufficient upload restrictions, enabling adversaries to upload malware (e.g., .php) in place of the intended file type (e.g., .jpg).
An exploit that uses trusted, third-party executable functionality (e.g., web widget or library) as a means of executing malicious code in software whose protection mechanisms are unable to determine whether functionality is from a trusted source, modified in transit, or being spoofed.
Exploits facilitated via the issuance of fraudulent digital certificates (e.g., transport layer security and Secure Sockets Layer). Adversaries use these certificates to establish secure connections with the target organization or individual by mimicking a trusted third party.
An exploit that combines elements of two or more of the aforementioned techniques.
Descriptions of the stages of a cyber attack:
An adversary may gather information on a target by, for example, scanning its network perimeters or using publicly available information.
An adversary prepares its means of attack by, for example, crafting a phishing attack or creating a counterfeit (“spoof”) website.
An adversary can use common delivery mechanisms, such as e-mail or downloadable software, to insert or install malware into its target’s systems.
An adversary may exploit poorly configured, unauthorized, or otherwise vulnerable information systems to gain access.
Attacks can include efforts to intercept information or disrupt operations (e.g., denial of service or physical attacks). Desired malicious results include obtaining sensitive information via network “sniffing” or exfiltration; causing degradation or destruction of the target’s capabilities; damaging the integrity of information through creating, deleting, or modifying data; or causing unauthorized disclosure of sensitive information.
Maintain a presence or set of capabilities: An adversary may try to maintain an undetected presence on its target’s systems by inhibiting the effectiveness of intrusion-detection capabilities or adapting behavior in response to the organization’s surveillance and security measures.
The Federal Information Security Modernization Act (FISMA) of 2014 requires inspectors general (IG) to independently evaluate the effectiveness of their respective agencies’ information security programs and practices. In September 2015, we reported that Office of Management and Budget (OMB) and Department of Homeland Security (DHS) guidance to the inspectors general on conducting and reporting agency evaluations was not always complete and led to inconsistent application. We recommended that DHS and OMB enhance the reporting guidance to facilitate more consistent and comparable inspectors general evaluations. In fiscal year 2015, the Information Technology Committee of the Council of Inspectors General on Integrity and Efficiency (CIGIE), in coordination with DHS, OMB, the National Institute of Standards and Technology (NIST), and other key stakeholders, began the development of a security capability maturity model to provide an in-depth assessment of agency programs in specific areas.
The purpose of the CIGIE maturity model is to: summarize the status of agencies’ information security programs based on a five-level capability maturity scale; provide status about what has been accomplished and what still needs to be implemented to improve the information security program to the next maturity level; and help ensure consistency across the OIGs’ annual FISMA reviews. The five maturity levels used in the IG assessment of agencies’ information security programs are defined as follows:
Level 1 Ad-hoc – Policies, procedures, and strategy are not formalized; activities are performed in an ad-hoc, reactive manner.
Level 2 Defined – Policies, procedures, and strategy are formalized and documented but not consistently implemented.
Level 3 Consistently Implemented – Policies, procedures, and strategy are consistently implemented, but quantitative and qualitative effectiveness measures are lacking.
Level 4 Managed and Measurable – Quantitative and qualitative measures on the effectiveness of policies, procedures, and strategy are collected across the organizations and used to assess them and make necessary changes.
Level 5 Optimized – Policies, procedures, and strategy are fully institutionalized, repeatable, self-generating, consistently implemented and regularly updated based on a changing threat and technology landscape and business/mission needs.
OMB’s FISMA evaluation guidance identified 11 information security program domains to be addressed in the evaluations: continuous monitoring management, configuration management, identity and access management, incident response and reporting, risk management, security training, plan of action and milestones, remote access management, contingency planning, contractor systems, and security capital planning. However, in fiscal year 2015, the maturity model addressed only the information security continuous monitoring domain while the other IG FISMA metric domains were evaluated using sets of independent questions. For fiscal year 2016, the capability maturity model’s development continued and expanded to include the incident response domain. Also, CIGIE, OMB, and DHS collaborated to align the IG metrics domains with the five function areas in the NIST Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework): identify, protect, detect, respond, and recover. Table 10 shows the IGs’ FISMA reporting metrics results for the 24 CFO Act agencies by Cybersecurity Framework security function. The IGs’ evaluations and scoring were based on work performed in fiscal year 2016. Based on the IG evaluations, only 10 information security functions at 7 agencies were determined to be effective (i.e., assessed at Level 4, managed and measurable, or Level 5, optimized) for fiscal year 2016. In addition to the contacts named above, Michael W. Gilmore and Karl W. Seifert (assistant directors), Kenneth A. Johnson (analyst-in-charge), Kiana Beshir, Christopher Businsky, David Plocher, Di’Mond Spencer, and Priscilla Smith made key contributions to this report.
The objectives of this review are to evaluate (1) the adequacy and effectiveness of agencies' information security policies and practices and (2) the extent to which agencies with governmentwide responsibilities have implemented their requirements under FISMA. GAO categorized information security-related weaknesses reported by the 24 CFO Act agencies, their IGs, and OMB according to the control areas defined in the Federal Information System Controls Audit Manual; reviewed prior GAO work; examined OMB, DHS, and NIST documents; and interviewed agency officials. During fiscal year 2016, federal agencies continued to experience weaknesses in protecting their information and information systems due to ineffective implementation of information security policies and practices. Most of the 24 Chief Financial Officers Act (CFO) agencies had weaknesses in five control areas—access controls, configuration management controls, segregation of duties, contingency planning, and agencywide security management (see figure). GAO and inspectors general (IGs) evaluations of agency information security programs, including policies and practices, determined that most agencies did not have effective information security program functions in fiscal year 2016. GAO and IGs have made hundreds of recommendations to address these security control deficiencies, but many have not yet been fully implemented. The Office of Management and Budget (OMB), Department of Homeland Security (DHS), National Institute of Standards and Technology (NIST), and IGs have ongoing and planned initiatives to support implementation of the Federal Information Security Management Act of 2002 as amended by the Federal Information Security Modernization Act of 2014 (FISMA) across the federal government. OMB, in consultation with other relevant entities, has expanded the use of a maturity model developed by the Council of the Inspectors General on Integrity and Efficiency and used to evaluate additional information security performance areas each year. However, OMB and others have not developed a plan and schedule to determine whether using the security capability maturity model will provide useful results that are consistent and comparable. Until an evaluative component is incorporated into the implementation of the maturity model, OMB will not have reasonable assurance that agency information security programs have been consistently evaluated. GAO recommends that OMB, in consultation with DHS and others, develop a plan and schedule to evaluate whether the full implementation of the capability maturity model developed by the Council of the Inspectors General on Integrity and Efficiency ensures that consistent and comparable results are achieved across all federal agencies. OMB generally concurred with our recommendation. |
The National Strategy for Homeland Security characterizes terrorism as “any premeditated, unlawful act dangerous to human life or public welfare that is intended to intimidate or coerce civilian populations or governments.” This definition includes attacks involving CBRN materials. The National Strategy recognizes that the consequences of such an attack could be far more devastating than those the United States suffered on September 11: “a chemical, biological, radiological, or nuclear terrorist attack in the United States could cause large numbers of casualties, mass psychological disruption, contamination and significant economic damage, and could overwhelm local medical capabilities.” State and local responders share in the responsibility for responding to CBRN events, but local first responders play the key role because they are the first to respond. The first line of defense in any terrorist attack on the United States is its first responder community—police officers, firefighters, emergency medical providers, public works personnel, and emergency management officials. Their role is to protect against, respond to, and assist in recovery from emergency events. Traditionally, first responders have been trained and equipped to arrive at the scene of a natural or accidental emergency and take immediate action. If state and local resources and capabilities are overwhelmed, governors may request federal assistance. In his February 28, 2003, Homeland Security Presidential Directive/HSPD-5, the President designated the Secretary of Homeland Security the principal federal official responsible for domestic incident management. The directive empowered the Secretary to coordinate federal resources used to respond to or recover from terrorist attacks, major disasters, or other emergencies in specific cases. The Secretary, in coordination with other federal departments and agencies, is to initiate actions to prepare for, respond to, and recover from such incidents. The directive also called for the Secretary to develop a National Response Plan to provide the framework for federal interaction with nonfederal entities. In addition, HSPD-8, issued on December 17, 2003, established policies to strengthen first responder preparedness for preventing and responding to threatened or actual domestic terrorist attacks. Among other things, it required DHS to provide assistance to state and local efforts, including planning, training, exercises, interoperability, and equipment acquisition for terrorist events. HSPD-8 also required DHS to coordinate with other federal agencies and state and local officials in establishing and implementing (1) procedures for developing and adopting first responder equipment standards and (2) plans to identify and address national first responder equipment research and development needs. First responders face difficult challenges when they arrive at the scene of an accidental or terrorist release of CBRN agents in an urban environment. Local police, fire, and emergency medical units would be the first on the scene, attempting to control the situation while requesting technical assistance, specialized units, and backup. County and local hazardous materials (hazmat) teams and bomb squads would be among the first units called to augment the first responders. A major terrorist act involving CBRN materials might cause significant casualties among the first responders. 
It is therefore critical that they be able to quickly identify, locate, characterize, and assess the potential effect of CBRN, explosive, or incendiary threats and communicate this information rapidly and effectively. The primary challenge facing first responders is knowing how to identify and distinguish between CBRN releases. The first responders need to be able to communicate what was released, the quantity of the material released (and its purity, in the case of chemical agents), where it is going, who is at risk, and how to respond. Of ultimate interest are the human health and environmental effects, since exposure to CBRN materials can kill or seriously injure people through their physiological effects. A chemical agent attacks the organs of the human body so as to prevent them from functioning normally. The results are usually disabling and can even be fatal. However, DHS S&T officials said that for biological agents, there “will be no first responders” in the traditional sense of being on the scene while the aerosol cloud is present, and so they are not preferentially exposed during the initial release. Follow-up investigation does pose additional risk to the first responders from contamination and reaerosolization, but they can be suitably protected by both personal protective equipment and antimicrobials. The danger that TICs and TIMs will be released in urban areas from industrial and transportation accidents is also of concern. Approximately 800,000 shipments of hazardous materials such as liquid chlorine and ammonia travel daily throughout the United States by ground, rail, air, water, and pipeline. Many are explosive, flammable, toxic, and corrosive and can be extremely dangerous when improperly released. They are often transported over, through, and under densely populated areas, where a release could cause injury or death and significant environmental damage. Both international and domestic accidents illustrate the potentially catastrophic effects of the release of TICs and TIMs. An accidental, large-scale hazardous release in Bhopal, India, in 1984, killed approximately 3,800 people and left thousands of people with permanent or partial disabilities. More recently, on January 6, 2005, in Graniteville, South Carolina, a freight train pulling three chlorine tanker cars and a sodium hydroxide tanker car collided with a train parked on an industrial rail spur. Almost immediately, 11,500 gallons of chlorine gas released from the tankers caused 9 people to die, 8 from inhaling chlorine gas, and at least 529 to seek medical care for possible chlorine exposure. A visible cloud that spread initially in all directions led local emergency officials to issue a shelter-in-place order. South Carolina officials later declared a state of emergency, under which local authorities evacuated 5,453 residents within a mile’s radius of the collision. In contrast to chemical agents, biological agents can multiply in the human body, significantly increasing their effects. Many biological agents are highly virulent and toxic; they may have an incubation period so that their effects are not seen for hours to days. According to DHS, biological attacks that have the greatest potential for widespread catastrophic damage include, but are not limited to, aerosolized anthrax and smallpox. When radioactive materials are incorporated and retained in the body, the tissues in which the materials are concentrated, or in some instances the whole body, can suffer significant radiation injury.
Radiation from deposited radiological material is a significant cause of radiation exposures and potential casualties once the airborne plume has passed. (Appendix II lists chemical, biological, and radiological agents and their effects on human health.) Planning scenarios DHS developed for use in federal, state, and local security preparedness illustrate the difficult challenges first responders face in CBRN events and the extent of potential injuries and fatalities. Nine of the 15 possible scenarios in table 1 involve the release of CBRN agents or toxic industrial materials in metropolitan areas. First responders have two primary tools in CBRN events: (1) equipment to identify CBRN materials in the atmosphere and (2) information from plume models and field measurements that track the atmospheric dispersion of CBRN materials. Detection devices identify and confirm the presence of CBRN materials by triggering signals or alarms when agents are detected; how reliably they do so depends on their sensitivity and specificity. The sensitivity, specificity, and selectivity of CB detection equipment are key performance characteristics. Biological detection equipment has to be sensitive enough to detect very small amounts of biological agents and also has to have a high degree of specificity in order to distinguish biological agents from harmless biological and nonbiological material in the environment. For chemical detectors, sensitivity is the lowest concentration at which a chemical agent can be detected. As with biological agents, the most challenging aspect of identifying chemical agents with a detector is its selectivity in extracting the agent of interest from other chemicals in the environment. The sensitivity, specificity, and selectivity of CB detection equipment also determine false positive or negative alarm rates. Detectors should have minimal false positive and false negative alarm rates. Information from plume models is intended to help tell first responders—from analyses of the models’ mathematical or computer equations or both—the extent of the contaminated area. In emergency response, plume models are used to provide early estimates of potentially contaminated areas and should be used in combination with data gathered from the field. Model results are used to guide field sampling, data from which, in turn, are used to update plume predictions in a cyclical process until the effects have been accurately characterized. A comprehensive model takes into account the material released, local topography, and meteorological data, such as temperature, humidity, wind velocity, and other weather conditions.
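The report does not specify a particular dispersion model. As a rough sketch of the kind of calculation even a simple plume model performs, the following implements a basic steady-state Gaussian plume estimate for a continuous point release; the dispersion coefficients and all inputs are simplified assumptions for illustration, whereas operational models also account for terrain, urban effects, and changing weather, as described next.

    # Rough sketch of a steady-state Gaussian plume estimate for a continuous
    # point release. Dispersion coefficients and all inputs are simplified
    # assumptions for illustration only.
    import math

    def plume_concentration(q, u, x, y, z, h):
        # Concentration (g per cubic meter) at downwind distance x (m),
        # crosswind offset y (m), and height z (m), for a release rate q (g/s),
        # wind speed u (m/s), and release height h (m).
        sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)   # assumed neutral stability
        sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
        crosswind = math.exp(-y ** 2 / (2 * sigma_y ** 2))
        vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2)) +
                    math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))   # ground reflection
        return q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

    # Example: 100 g/s release at 10 m height in a 3 m/s wind; concentration
    # 1 kilometer downwind on the plume centerline at breathing height.
    print(plume_concentration(q=100, u=3, x=1000, y=0, z=1.5, h=10))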
Plume modeling requires several accurate components:
• meteorological data (for example, temperature, humidity, barometric pressure, dew point, wind velocity and direction at varying altitudes, and other related measures of weather conditions);
• data from global weather models to simulate large-scale weather patterns and from regional and local weather models to simulate the weather in the area of the chemical agent release and throughout the area of dispersion;
• the source term, or the characteristics or properties of the material that was released and its rate of release (for example, its quantity and purity, vapor pressure, the temperature at which the material burns, particle size distribution, its persistence and toxicity, and height of release);
• temporal and geographical information (for example, transport and dispersion processes such as whether the agent was initially released during daylight hours, when it might rapidly disperse into the surface air, or at night, when a different set of breakdown and dispersion characteristics would pertain, depending on terrain; plume height; complex terrain; urban effects; and agent processes such as environmental degradation and decay and growth rates for radiological agents); and
• information on the potentially exposed populations, such as dose response (conversion of exposures into health effects), animals, crops, and other assets that may be affected by the agent's release.
Current CBRN detection equipment has significant limitations for first responders' use in an event involving the release of CBRN materials in an urban environment. First, the detection equipment first responders now use for radiological and nuclear incidents cannot detect the dispersal of radiological contamination in the atmosphere. Second, according to DHS, chemical detection equipment is generally inadequate to provide information on the presence of chemical warfare agents at less than lethal but still potentially harmful levels. Third, for biological detection equipment, the handheld assays first responders use do not provide accurate information because of this equipment's high level of false positives. In addition, BioWatch, the nationwide environmental monitoring system, does not enable first responders to obtain immediate real-time information about the effects of biological pathogens released in the atmosphere. While equipment first responders use for detecting radiological and nuclear materials can detect the presence of significant amounts of these materials, it cannot predict their dispersion in the atmosphere. In addition, current handheld, compact devices such as dosimeters and pagers are not able to detect low energy beta radiation from some isotopes and cannot withstand rugged and harsh environments. DHS's Domestic Nuclear Detection Office (DNDO) is responsible for acquiring and supporting the deployment of radiation detection equipment. However, this office has primarily emphasized developing and deploying radiation detection equipment to secure cargo container shipments at U.S. ports of entry to prevent the smuggling of radioactive material into the United States. DNDO's Chief of Staff told us that DNDO does not consider its mission to include the development of radiological detection equipment for local first responders to use in identifying the release of radiological materials in the atmosphere. It does not evaluate radiological detection equipment for first responder use in consequence management.
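The source term, meteorological, and population inputs enumerated above can be thought of as a single bundle of data that a dispersion model consumes. The following is a minimal, illustrative sketch, in Python, of how such inputs might be organized; the field names and units are assumptions made for illustration and do not represent any agency's modeling system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SourceTerm:
    """Characteristics of the released material (illustrative fields only)."""
    agent: str                    # e.g., "chlorine"
    mass_released_kg: float       # quantity released
    release_height_m: float       # height of release above ground
    release_duration_s: float     # instantaneous puff vs. continuous release
    decay_rate_per_s: float = 0.0 # environmental degradation or decay, if any

@dataclass
class Meteorology:
    """Weather inputs at or near the release point."""
    wind_speed_m_s: float
    wind_direction_deg: float     # direction the wind blows from
    temperature_c: float
    relative_humidity_pct: float
    stability_class: str          # e.g., Pasquill class "A" (unstable) to "F" (stable)

@dataclass
class PlumeModelInput:
    """Bundle of inputs a dispersion model needs before it can run."""
    source: SourceTerm
    weather: Meteorology
    terrain: str                  # e.g., "flat", "urban street canyons"
    population_at_risk: List[str] = field(default_factory=list)
```

Grouping the inputs this way also helps show why the source term dominates uncertainty in practice: most of the source-term fields are unknown or only roughly estimated in the first minutes of a response.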
We surveyed federal agencies involved with CBRN defense about their mission in relation to radiological detection equipment for first responders. DHS, DOD, DOE, EPA, NIST, and NOAA responded that they do not have specific missions to develop, independently test, and certify detection equipment for use by first responders in detecting radiological materials in the atmosphere. However, DOD and DOE program officials said that first responders can certainly use radiological detection equipment DOD and DOE develop for other missions. In addition, agencies such as DOE and EPA have some capability for tracking airborne radiological materials—a capability that first responders do not have. For example, we previously reported that DOE can deploy teams that use radiation monitoring equipment, including sensors mounted on aircraft and land vehicles, to detect and measure radiation contamination levels and provide information to state and local officials on what areas need to be evacuated. EPA also has its RadNet system for airborne radiation monitoring. According to DHS S&T's CB Division, significant investments have been made toward the detection of chemical agents, largely led by DOD investments, followed up by investments in the private sector to exploit the marketplace. As a result, a number of options are available for detecting these materials as vapors and liquids. However, according to DHS S&T, current detectors can be used for rapid warning of chemicals (warfare agents and TICs) as vapor but are considered generally inadequate to provide information on the presence of chemical warfare agents at less than lethal but still potentially quite harmful levels—that is, higher than permissible exposure levels. DHS S&T acknowledged that improvements are needed to meet sensitivities necessary for real-time protection of the population and for eliminating a tendency for high false-alarm rates. Improvements are also needed in the selectivity of most common chemical detector platforms. Anecdotal information led DHS S&T to make the following general observations with regard to currently available detectors and how they rank in performance for first responders' use: Mass spectrometer devices are the most sensitive chemical detectors but are costly and least frequently used by first responders. These devices are also significantly heavier and larger, so that they are typically bench-top, laboratory devices and not robust handheld detectors that are more suitable for field deployment. Ion mobility spectrometers (IMS) and surface acoustic wave (SAW) devices are next in selectivity but encounter frequent false positive responses and are susceptible to interference by common materials such as cleaners, pesticides, paint fumes, fire-fighting foams, and combustion products. Hazmat teams and other responders use both types, and they are used in protecting occupants of buildings, transit systems, and the like. However, DHS S&T has assessed the sensitivity of IMS and SAW for V and G nerve agents as being in the low parts per billion (ppb) range—approximately 2 ppb to 20 ppb—while the limit of detection is higher—at 200 ppb to 300 ppb—for blister agents such as mustard and lewisite. According to DHS S&T, these sensitivities would detect some agents at concentrations immediately dangerous to life and health but would not easily detect others, such as VX, at such concentrations.
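Because detector sensitivities are quoted in parts per billion while exposure benchmarks such as immediately-dangerous-to-life-and-health (IDLH) levels are typically expressed in milligrams per cubic meter, comparing the two requires a unit conversion that depends on the agent's molecular weight. The short sketch below illustrates the standard vapor-concentration conversion; the molecular weight and the guideline value used are approximate, illustrative assumptions rather than figures taken from DHS or this report.

```python
# A minimal sketch: converting a detector's sensitivity quoted in parts per
# billion (ppb) into mg/m^3 so it can be compared with an exposure guideline.
# The 24.45 L/mol molar volume assumes 25 degrees C and 1 atm; the molecular
# weight and guideline below are illustrative assumptions, not report data.

MOLAR_VOLUME_L = 24.45  # liters per mole of ideal gas at 25 C and 1 atm

def ppb_to_mg_per_m3(ppb: float, molecular_weight_g_mol: float) -> float:
    """Convert a vapor concentration in ppb to mg/m^3."""
    ppm = ppb / 1000.0
    return ppm * molecular_weight_g_mol / MOLAR_VOLUME_L

# Example: a detector with a 20 ppb limit of detection for a nerve agent with
# an assumed molecular weight of about 267 g/mol (approximate value for VX).
detector_lod_mg_m3 = ppb_to_mg_per_m3(20.0, 267.4)

# Assumed exposure guideline used only for comparison (placeholder value).
guideline_mg_m3 = 0.003

print(f"Detector limit of detection: {detector_lod_mg_m3:.3f} mg/m^3")
print(f"Detectable at the guideline level? {detector_lod_mg_m3 <= guideline_mg_m3}")
```

Under these assumed numbers, a 20 ppb limit of detection corresponds to roughly 0.2 mg/m3, well above the assumed guideline, which is the kind of gap DHS S&T's observation about VX describes.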
DHS S&T stated that first responders could use IMS, SAW, and similar devices to monitor a condition that is changing from dangerous to tolerable and to provide guidance on the use of personal protective equipment, but that these devices cannot be used for rapid warning of dangerous conditions. Photo-ionization, flame-ionization, and flame photometric detectors—according to DHS S&T, prone to false positive alarms—can be improved if chromatographic separation techniques are incorporated before analyte streams are presented. However, DHS S&T officials state that few current detectors first responders use have this technology. DHS S&T officials stated that the limitations noted for detectors of chemical warfare agents (cost and size; propensity for false positive alarms) also apply to TICs, many of which can be detected by IMS and SAW devices commonly in use. DHS S&T stated that electrochemical cells (and a variety of slower responding detector tubes) are used to fill the gaps in detection presented by IMS and SAW devices and expand the number of TICs that can be detected. Detection sensitivity of the electrochemical cells can range from ppb to low parts per million (ppm) concentration ranges. In general terms, TICs can be detected at concentrations considerably less than immediately dangerous, ranging in times from seconds to a few minutes, depending on the detector. DHS officials stated that these observations are based on an examination of manufacturers' claims that in some cases have been independently tested and evaluated. During the emergency response phase of a suspected exposure to a biological threat agent, the only tool most likely available to first responders would be HHAs. HHAs are small test strips that contain an antibody to a specific biological agent. The assays require a suspension of the suspect sample in a liquid supplied with the test assay. Applying the liquid suspension to the strip yields a result in approximately 15 minutes. A quality control test is built into all the strips to indicate whether the assay materials are working properly. In 2002, the Office of Science and Technology Policy (OSTP) issued guidance recommending against reliance on these assays, stating: "Recent scientific evaluation of these commercially available detection systems concludes that this equipment does not pass acceptable standards for effectiveness. Specifically, Bacillus anthracis detection thresholds for these devices are well above the minimum level that can infect personnel, and are not suitable for determining biological determinants of personnel, rooms, or pieces of equipment. Many devices have been shown to give a significant number of false positives, which could cause unnecessary medical interventions with its own risk." OSTP's recommendation was based on a joint evaluation study by the Centers for Disease Control and Prevention (CDC) and the Federal Bureau of Investigation (FBI). Manufacturers of HHAs have expressed concern regarding the study's methods, objectivity, and overall quality. According to DHS S&T officials, since the 2002 OSTP guidance, DHS has sponsored the development of standards for HHA detection of Bacillus anthracis through AOAC International, AOAC testing of a number of HHAs, and the development and propagation of ASTM International (originally known as the American Society for Testing and Materials) standards for sampling of white powders. ASTM International developed standard E2458, Standard Practices for Bulk Sample Collection and Swab Sample Collection of Visible Powders Suspected of Being Biological Agents from Nonporous Surfaces, published in 2006.
This standard was developed by CDC, DHS, EPA, the FBI, and state and local hazmat specialists. DHS S&T officials noted that a biological attack is likely to be covert, and since no visible signatures or odors are associated with a release and people do not immediately fall ill, there will be no indicators telling a first responder that an attack has occurred. First responders for biological events are not likely to appear on the scene until well after the primary release cloud has dispersed. Therefore, all characterization is likely to be after the atmospheric release cloud has passed. The hazards first responders will encounter are surface contamination and any possible reaerosolization. In that case, S&T officials stated, the information to characterize the affected region is likely to come from environmental sampling (for example, BioWatch, surface sampling, or native air collectors) coupled with plume modeling and, as disease progresses, epidemiological information. BioWatch is a nationwide environmental monitoring system for selected biological pathogens but does not provide first responders with real-time detection of them. Under the current BioWatch system, a threat agent is not identified until several hours to more than a day after the release of the agent, and the system does not determine how much material was released. DHS BioWatch officials said that the system gives a qualitative rather than quantitative assessment of the release of biological material. BioWatch is funded and managed by DHS and coordinated with CDC and EPA. LANL and LLNL provide technical support. BioWatch was designed to detect the release of biological pathogens in the air through aerosol collector units installed in several major U.S. cities. The units collect airborne particles on filters, which are transported to laboratories for analysis. According to DHS BioWatch Program officials, the system was set up very quickly in early 2003, and more than 30 jurisdictions now participate in BioWatch. DHS spending for the BioWatch program during fiscal years 2005 to 2007 was about $236 million. The BioWatch network of sampling units collects aerosol samples daily (fig. 1). Each aerosol collector has a single filter that traps aerosol particles. Couriers collect the air filters every 24 hours and deliver them to state or local public health laboratories, where they are tested for the presence of the genetic material of six specific biothreat pathogens. The BioWatch laboratory assay, however, cannot differentiate between infectious and noninfectious agents (that is, live or dead germs). First responders cannot use BioWatch to immediately determine an adequate response. While BioWatch is a detect-to-treat system designed to detect a biological attack in advance of symptoms arising within a population, it cannot help first responders make immediate medical intervention decisions. BioWatch is not intended to detect a release while it is in progress. It is intended to detect a release as soon after an event as practical and before the onset of symptoms so as to speed the delivery of medical countermeasures. DHS officials stated that BioWatch was not intended as a tool for first responders. A confirmed laboratory test result from a BioWatch sample, known as a "BioWatch Actionable Result," is a data point used by the local Director of Public Health and BioWatch Advisory Committee to determine if the result has public health significance and, if it does, what actions are necessary to address a potential problem.
If a response is necessary, the local jurisdiction's Incident Management System is used to determine the nature and logistics of the response. First responders may or may not be deployed. The current BioWatch system can detect an aerosol attack with specific threat agents within several hours to more than 1 day after the release of the agents. This period of time includes the sample collection cycle of 24 hours, transportation to public health laboratories, and laboratory analysis to identify and confirm the agents used. According to DHS BioWatch officials, in general, symptoms would not develop until days to weeks after an attack. However, experts have emphasized the importance of "real-time detection" of biological agents as an element of an effective biological detection system. The system should rapidly recognize the release of likely biological agents before the onset of clinical illness. Without the benefit of real-time biological detection, a terrorist biological attack cannot be detected until clinical analysis of the initial outbreak of patients demonstrating symptoms and of early fatalities. This delayed detection will allow disease to progress rapidly within the population and grow to potentially epidemic proportions. Real-time detection enables first responders to take action to limit the number of people exposed to the agent, allowing time to warn others before they are exposed and reducing the number of infections. Real time has been defined as 30 seconds or less from the time potential material reaches the device until an alarm is triggered. DHS officials stated that public health officials in the jurisdictions where BioWatch collectors are located can and plan to use BioWatch information immediately to make decisions about responses. They noted that a wide range of decisions is possible and that a specific course of action depends on factors such as current intelligence about threats, the type of agents detected, the amount detected, the number of BioWatch collectors affected, and information from medical surveillance systems. BioWatch is moving toward next-generation technology, which will provide autonomous collection and detection and better time resolution than current BioWatch collector units. First responders are hampered by the slow development of CBRN detection equipment standards. The CBRN detection equipment that first responders and other DHS grantees buy with DHS grant funds must comply with equipment performance standards adopted by DHS. However, DHS has adopted very few standards for this equipment, and the adoption of accepted standards has lagged behind the pace at which new products enter the market. In addition, according to our survey of federal agencies, DHS has the primary mission to develop, independently test, and certify CB detection equipment for first responders' use. However, DHS does not independently test and validate whether commercially available CBRN detection equipment can detect specific agents at specific target sensitivities claimed by the manufacturers. DHS's grant funding to states allows first responders to purchase commercially available CBRN detection equipment. First responders may use DHS's major grant funding under the State Homeland Security Program (SHSP) and Urban Areas Security Initiative (UASI) to buy equipment from the 21 categories on DHS's authorized equipment list. Category 7 of that list, detection equipment, covers CBRN detection.
For biological detection, for example, this includes field assay kits, protein test kits, DNA and RNA tools, and biological sampling kits, but descriptions and features, models and manufacturers, and operating considerations are not identified. In the states we visited, we obtained information on detection equipment bought with DHS grant funds in 2003–2005. For example, in Seattle and the state of Washington, state agencies, hazmat teams, and local fire departments in 11 counties acquired CBRN detection equipment with about $3.2 million of SHSP and UASI grant funds in 2004–2005. Seattle alone purchased CBRN detection equipment, mostly chemical detection equipment, at a cost of about $500,000, primarily with UASI grants. According to the Assistant Chief of the Seattle Fire Department, about 20 to 26 hazmat teams served nine counties; the teams varied widely in composition and equipment, and teams in sparsely populated and rural areas did not have the capabilities of those in urban areas. Connecticut spent about $1.8 million in DHS grants for CBRN detection equipment in 2003–2005. The purpose of standards for equipment is to ensure that equipment meets a minimum level of performance, functionality, adequacy, durability, sustainability, and interoperability. Adopting uniform standards for equipment helps first responders in procuring and using equipment that is safe, effective, and compatible. DHS works with a number of federal agencies and private organizations, including NIST and IAB, in developing standards for CBRN detection equipment. DHS's Standards Subject Area Working Groups and these organizations work, in turn, with standards development organizations such as ASTM and the National Fire Protection Association. DHS's S&T directorate is the focal point for adopting CBRN detection equipment standards. According to a 2006 DHS Office of Inspector General report on DHS's adoption of equipment standards, S&T can adopt standards that apply to equipment first responders purchase with DHS grant funds, but it cannot develop mandatory standards for equipment because it has no authority to regulate the first responder community. In addition, DHS S&T has no regulatory authority to compel first responders to purchase standards-conforming equipment when the equipment is not bought with federal funds, or to order manufacturers not to sell equipment that does not meet these standards. NIST's OLES identifies needed performance standards and obtains input from others, such as IAB. As of October 30, 2007, DHS had adopted 39 total standards, but only 4 of them were for CBRN detection equipment. In February 2004, it adopted 4 standards for radiation and nuclear detection equipment. These standards address first responders' priorities for personal radiation detection and devices for detecting, interdicting, and preventing the transport of radioactive material rather than the detection of the atmospheric spread of radiological materials. Table 2 shows standards DHS adopted for radiation and nuclear detection equipment. However, DHS has not adopted any standards for CB detection equipment. The remaining standards address personal protective equipment such as respirators and protective clothing. NIST officials told us that it generally takes 3 to 5 years for an equipment standard to achieve full consensus from the network of users, manufacturers, and standards development organizations before final publication. DHS, however, noted that standards for radiation detection equipment and powder sampling were developed in 12 to 18 months.
We surveyed major federal agencies involved with CBRN defense about their missions to develop, independently test, and certify CBR detection equipment for first responders' use. To certify CBR detection equipment is to attest that a piece of equipment meets, and will continue to meet, a standard or performance criterion. Certification must be based on testing against standards. According to DHS, certification is the attestation that equipment has been tested against standards using approved testing protocols by an accredited test facility. Table 3 shows agency responses to our survey, in which we found that only DHS indicated it has the missions to develop, independently test, and certify CB detection equipment for first responders' use. According to DHS, DHS's components, principally the Federal Emergency Management Agency (FEMA) and the Office of Health Affairs, in conjunction with IAB, identify first responders' needs for CB detection equipment. However, DHS officials stated that their mission to test and certify CB detection equipment is limited to equipment that DHS is developing for first responders; it does not extend to detection equipment that first responders purchase from commercial manufacturers. DHS does not independently test and validate whether commercially available CBRN detection equipment can detect specific agents at specific target sensitivities claimed by the manufacturers. Although manufacturers may test equipment in a controlled laboratory environment using simulants, live agent testing and field testing by independent authorities provide the best indication of performance and reliability. DHS S&T acknowledged that it does not have a testing program to independently test the performance, reliability, and accuracy of commercial CBRN detection equipment and determine whether specific, currently available detectors can detect at specific target sensitivities. No organized DHS evaluation and qualification program now guides and informs first responders on their purchases of chemical, biological, and radiological detection equipment. DHS relies on manufacturers' claims and anecdotal information in the open literature; it has not routinely tested or verified manufacturers' claims regarding equipment's ability to detect hazardous material at specific sensitivities. DHS stated that test data may be found for some systems examined under its earlier Domestic Preparedness Program or other agency programs such as EPA's Environmental Technologies Verification Program. However, we did not independently assess which, if any, CBRN technologies these programs have evaluated. Moreover, the testing is often at the anecdotal level since few copies of a given detector model are tested in these programs. DHS further stated that because the manufacturers' claims and, where available, limited testing data for different models of the detector systems are quite varied, compiling data at a reasonable confidence level would require a substantial current market survey. DHS S&T officials said that manufacturers have asked DHS to establish a process for validating biodetection equipment. One official said that first responders are purchasing biodetection equipment that is "junk" because there are no standards or testing programs. Local and state first responders we interviewed also said that they often test and validate manufacturers' claims on their own.
For example, Washington State Radiation Protection officials said that in one instance they tested one brand of new digital dosimeters they were planning to purchase against those they already used. They found that the brand tested consistently read only 40 percent of what their current dosimeters and instruments read. DHS has two programs in place to provide first responders with information about CBRN detection equipment. One program, DHS's System Assessment and Validation for Emergency Responders (SAVER) program, assesses various commercial systems that emergency responders and DHS identify as instrumental in their ability to perform their jobs. The assessments are performed through focus groups of first responders who are asked for their views on the effectiveness of a given technology based on a set of criteria. The criteria address the equipment's capability, usability, affordability, maintainability, and deployability. However, DHS officials acknowledged that SAVER neither conducts independent scientific testing to determine the extent to which the equipment can detect actual chemical warfare agents nor tests or verifies manufacturers' claims regarding the equipment's ability to detect given hazardous material at specific sensitivities. As of October 2007, SAVER had conducted assessments of IMS chemical detectors, multisensor meter chemical detectors, photo-ionization and flame-ionization detectors, radiation pagers, and radiation survey meters, but it had not tested or verified manufacturers' claims regarding commercial off-the-shelf CBRN detection equipment's ability to detect given hazardous material at specific sensitivities. We have not independently evaluated the SAVER assessments. The other information source for first responders is DHS's RKB, a Web-based information service for the emergency responder community. RKB is a one-stop resource that links equipment-related information such as product descriptions, standards, operational suitability testing, and third-party certifications. As of October 2007, it included 1,127 certifications for equipment on DHS's authorized equipment list and 268 reports of operational suitability testing of CBRN equipment by such organizations as the U.S. Army's Edgewood Chemical Biological Center (ECBC). Information available to first responders on CBRN detection equipment sensitivities comes largely from vendors' claims, obtained either directly from a vendor or through vendor-maintained specification sheets on the RKB, and from reference guides that NIST and ECBC have developed. The information in the guides is based on literature searches and market surveys and includes manufacturers' statements on product capabilities. However, the guides do not contain any testing data that would validate the manufacturers' claims. The guides, recently incorporated on DHS's SAVER Web site, also have not kept pace with emerging technology. They include the 2007 ECBC biological detector market survey, the 2005 NIST biological agent detection equipment guide, and the 2005 NIST chemical agent detection equipment selection guide. Federal agencies such as DHS, DOD, DOE, and EPA have developed several nonurban plume models for tracking the atmospheric release of CBRN materials. Interagency studies, however, have concluded that these models have major limitations for accurately predicting the path of plumes and the extent of contamination in urban environments.
Current models commonly used in emergency response do not have the resolution to model complex urban environments, where buildings and other structures affect wind flow and the structure and intensity of atmospheric turbulence. DHS's national TOPOFF exercises have also demonstrated that the use of several competing models with different meteorological data, compounded by exercise artificiality, can produce contradictory results, causing confusion among first responders. Evaluations and field testing show that urban plume models federal agencies have developed specifically for tracking the release of CBRN materials in urban areas have some of the same limitations as the older models used for emergency response. The new models show much variability in their predictions, and obtaining accurate source term data on the release of TICs is also a problem. When using information from nonurban plume models in CBRN events, first responders may have to choose from the multiple models that various agencies support for tracking the release of CBRN materials. Several federal agencies operate modeling systems, including DHS, DOD, DOE, EPA, NOAA, and the Nuclear Regulatory Commission. U.S. interagency studies, however, have concluded that these models have major limitations. For example, according to OFCM, which is housed in the Department of Commerce, most of the more than 140 documented modeling systems used for regulatory, research and development, and emergency operations purposes, and for calculating the effects of harmful CBRN materials, are limited in their ability to accurately predict the path of a plume and the extent of contamination in urban environments. Table 4 shows examples of models that federal agencies and first responders have developed and used to predict the path of the plume for multiple CBRN materials. OFCM provides the coordinating structure for federal agencies involved in modeling and has established interagency forums and working groups that have developed studies evaluating models available to address homeland security threats. In an August 2002 study, OFCM and other agencies evaluated 29 modeling systems used operationally by either first responders or federal agencies. The study concluded that (1) few models had been tested or validated for homeland security applications; (2) their ability to predict the dispersal of chemical, biological, or radiological agents through urban buildings, street canyons, and complex terrain was not well developed; and (3) they could provide only a rudimentary description of the nocturnal boundary layer and not the more complex turbulence resulting from complex buildings, terrain, and shorelines. According to DOD officials, many of these models were not developed for emergency response. For example, DOD developed HPAC as a model for counterproliferation purposes, but first responders also use it. In addition, DOD officials said that some of the deficiencies OFCM noted have been somewhat addressed with the development of urban plume models. (We discuss urban plume models later in the report.) A 2003 National Research Council (NRC) study on modeling capabilities reached essentially the same conclusions, stating that plume models in operational use by various government agencies were not well designed for complex natural topographies or built-up urban environments and that, likewise, the effects of urban surfaces were not well accounted for in most models.
No one model had all the features deemed critical—(1) confidence estimates for the predicted dosages, (2) accommodation of urban and complex topography, (3) short execution time for the response phase, and (4) accurate if slower times for preparedness and recovery. Both fast execution response models and slower, more accurate models needed further development and evaluation for operational use in urban settings, according to NRC. In urban areas, buildings and the street canyons separating them often cause winds that are almost random, making it exceedingly difficult for models to predict or even describe how CBRN materials are dispersed when released. Buildings create complex wind and turbulence patterns in urban areas, including updrafts and downdrafts; channeling of winds down street canyons; and calm winds or "wake" regions, where toxic materials may be trapped and retained between buildings. Since most existing models have little or no building awareness, they could be misapplied in urban settings with fatal consequences. According to LLNL modeling experts, misinterpretation of modeling results is a key issue facing first responders. Many users assume that models are more accurate than is warranted because of the impression left by model predictions that show individual buildings; such models may actually not be accurately predicting fine-scale features, like the location of hot spots and plume arrival and departure times. Obtaining information on the source term, or the characteristics of CBRN materials released, is also a problem with current models, especially in complex urban environments. When modeling is used in an emergency, characterizing the source term and local transport is typically the greatest source of uncertainty. First responders' key questions are, What was released, when, where, and how much? Locating the source and determining its strength based on downwind concentration measurements is complicated by the presence of buildings that can divert flow in unexpected directions. Answers may not be available or may be based on uncertain and incomplete data that cannot be confirmed. For example, evidence of the release of a biological agent may not be known for days or weeks, when the population begins to show symptoms of exposure, becomes ill, and is hospitalized. Information from four basic categories of models is available to first responders today: 1. Gaussian plume or puff models, widely used since the 1940s, can be run quickly and easily by nonspecialists. They typically use only a single constant wind velocity and stability class to characterize turbulence diffusion. They can be reasonably reliable over short ranges in situations involving homogeneous conditions and simple flows, such as unidirectional steady-state flow over relatively flat terrain. The CAMEO/ALOHA model is a Gaussian plume model that has been widely distributed to first responders. 2. Lagrangian models (puff and particle) provide more detailed resolution of boundary layer processes and dispersion. Puff models represent plumes by a sequence of puffs, each of which is transported at a wind speed and direction determined by the winds at its center of mass. Lagrangian particle models use Monte Carlo methods to simulate the dispersion of fluid marker particles. These models can capture plume arrival and departure times and peak concentrations. Examples of models in this category include HPAC (a puff model) and HYSPLIT and LODI (particle models). 3.
Computational fluid dynamics (CFD) models are first-principles physics models that simulate the complex flow patterns created in urban areas by large buildings and street canyons. CFD models provide the highest fidelity transport and diffusion simulations but are computationally expensive compared to Gaussian or Lagrangian models. They can take hours or days to run on a large computer. However, CFD models can capture plume arrival and departure times and peak concentrations. 4. Empirical urban models are derived from wind tunnel and field experiment data. These models incorporate urban effects by explicitly resolving buildings. Such models are not considered as accurate as CFD models because of their empirical basis, particularly for the highest temporal and spatial resolutions and near-source regions. They need to be carefully validated. Examples include the Urban Dispersion Model and the Quick Urban and Industrial Complex dispersion modeling system. For example, EPA and NOAA developed the CAMEO/ALOHA model specifically for first responders' use. Widely used by state and local first responders, it originated as an aid in modeling the release of TICs but has evolved over the years into a tool for a broad range of response and planning. CAMEO is a system of software applications used to plan for and respond to chemical emergencies and includes a database with specific emergency response information for over 6,000 chemicals. ALOHA can plot a gas plume's geographic spread on a map. It employs an air dispersion model that allows the user to estimate the downwind dispersion of a chemical cloud based on the toxicological and physical characteristics of the released chemical, atmospheric conditions, and specific circumstances of the release. (A simplified illustration of this type of Gaussian plume calculation appears at the end of this discussion.) However, like any model, CAMEO/ALOHA cannot be more accurate than the information given to it to work with. Even with the best possible input values, CAMEO/ALOHA can be unreliable in certain situations, such as at low wind speeds, very stable atmospheric conditions, wind shifts and terrain steering effects, and concentration patchiness, particularly near the source of a spill or release. CAMEO/ALOHA does not account for the effects of byproducts from fires, explosions, or chemical reactions; particulates; chemical mixtures; terrain; and hazardous fragments. It does not make predictions for distances greater than 6.2 miles (10 kilometers) from the release point or for more than an hour after a release begins, because wind frequently shifts direction and changes speed. DHS's TOPOFF 2003 and 2005 exercises highlighted that using several competing models supported by different agencies can produce contradictory results and confuse first responders. The TOPOFF exercises are biennial, congressionally mandated, national counterterrorism exercises designed to identify vulnerabilities in the nation's domestic incident management capability. They test the plans, policies, procedures, systems, and facilities of federal, state, and local response organizations and their ability to respond to and manage scenarios depicting fictitious foreign terrorist organizations detonating or releasing simulated CBRN agents at various locations in the United States. One important aim is to identify any seams, gaps, and redundancy in responsibilities and actions in responding to the simulated attacks. DHS's after-action reports for each exercise showed continuing problems in the coordination of federal, state, and local response and in information sharing and analysis.
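Before turning to the exercise results, the simplest of the four model categories above, the Gaussian plume, can be illustrated with a short calculation. The sketch below implements the textbook steady-state Gaussian plume equation for a continuous point release over flat terrain; it is not the ALOHA algorithm, and the dispersion coefficients are approximate, neutral-stability values assumed for illustration only.

```python
import math

def gaussian_plume_concentration(q_g_per_s: float, u_m_s: float,
                                 x_m: float, y_m: float, z_m: float,
                                 release_height_m: float) -> float:
    """Steady-state Gaussian plume concentration (g/m^3) at a downwind point.

    x is downwind distance, y is crosswind offset, z is height above ground.
    Uses approximate open-country, neutral-stability dispersion coefficients
    (an illustrative assumption) and includes simple ground reflection.
    """
    if x_m <= 0:
        return 0.0
    # Approximate Briggs-style coefficients for neutral (class D) conditions.
    sigma_y = 0.08 * x_m / math.sqrt(1 + 0.0001 * x_m)
    sigma_z = 0.06 * x_m / math.sqrt(1 + 0.0015 * x_m)

    crosswind = math.exp(-y_m**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z_m - release_height_m)**2 / (2 * sigma_z**2)) +
                math.exp(-(z_m + release_height_m)**2 / (2 * sigma_z**2)))
    return q_g_per_s / (2 * math.pi * u_m_s * sigma_y * sigma_z) * crosswind * vertical

# Example: 100 g/s continuous ground-level release, 3 m/s wind,
# concentration 500 m directly downwind at breathing height.
print(gaussian_plume_concentration(100.0, 3.0, 500.0, 0.0, 1.5, 0.0))
```

The single wind speed and smooth dispersion coefficients in this formulation are precisely what limit such models in cities: they have no way to represent the channeling, wake regions, and building-induced turbulence described earlier in this section.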
The four TOPOFF exercises conducted 2000–07 are summarized in table 5. TOPOFF 2, 3, and 4 used plume models. In TOPOFF 2, on May 12–16, 2003, federal, state, local, and Canadian responders, leaders, and other authorities reacted to a fictitious foreign terrorist organization's detonation of a simulated radiological dispersal device, or dirty bomb, in Seattle. It showed the federal government's inability to coordinate and properly use atmospheric transport and dispersion models. According to DHS internal reports, critical data collection and coordination challenges significantly affected the response to the attack in Seattle and the ability to get timely, consistent, and valid information to top officials. During the exercise, different federal, state, and local agencies and jurisdictions used different plume models to generate predictions, which led to confusion and frustration among the top officials. Seattle and Washington state officials told us that federal agencies provided modeling results not based on the preplanned series of scenario events exercise planners had established. They said that some of the data used to create the differing models had been made up in order to drive a federal agency's objectives for the exercise and bore no relationship to data that responders gathered at the scene. For example, Seattle City Emergency Management officials from the fire and police departments said that the city was operating on readings it received from the Federal Radiological Monitoring and Assessment Center (FRMAC) while the state modeled a larger area for the plume. Washington state officials also said that the deposition data received from field teams were not consistent with the National Atmospheric Release Advisory Center's (NARAC) plume modeling predictions. NARAC modeling experts, however, stated that NARAC provided plume model predictions and worked with FRMAC to update model predictions as data became available. NARAC plumes were later found to be consistent with the ground truth used in the exercise. They attributed the disparity between data from the field and plume modeling predictions to exercise artificiality and the improper generation and interpretation of simulated exercise data for state-deployed field teams. Washington State Emergency Management officials stated that the "canned" weather patterns factored into the model conflicted with real-time weather reports. Running counter to typical norms, the canned patterns went almost directly against the prevailing winds and "straight as an arrow" where the terrain would certainly have diverted their path. Confusion resulted from models being generated using different meteorological inputs. The resulting plume models were contradictory. NARAC/IMAAC modeling experts stated that the exercise called for the ground truth scenario to be based on the canned winds and that contradictory results were obtained by exercise players who did not use the ground truth scenario canned weather. However, NOAA modeling experts said that the ability of the TOPOFF exercises to identify gaps in plume modeling was limited by the use of canned weather patterns. In a real situation, the models would be run with current weather data. Further, in TOPOFF 2, coordination was lacking among state, local, and federal plume modeling efforts. For example, the Seattle Emergency Operations Center contacted NARAC after the explosion, as called for in the exercise scenario, to have it generate a prediction of where the plume would travel. NARAC's product (shown in fig.
2) was provided to the Seattle, King County, and Washington State emergency operations centers, as well as to FEMA and other federal agencies. However, the Washington State Department of Health also generated a plume prediction with a HOTSPOT modeling program, adding to the confusion. In addition, several federal agencies developed their own plume predictions to make internal assessments concerning assets that might be required. As a result, while Seattle, King County, Washington State, and federal officials all had access to NARAC plume modeling results, state and federal agencies still chose to use other available models for information from which to make their preliminary decisions. The confusion over the use of multiple modeling tools in TOPOFF 2 led DHS to establish IMAAC in 2004 as an interagency center responsible for producing, coordinating, and disseminating predictions for airborne hazardous materials. NARAC is the designated interim provider of IMAAC products. According to NARAC and IMAAC program officials, IMAAC’s goals are to provide one point of contact for decision makers, eliminate confusing and conflicting hazard predictions, and distribute “common operating picture” predictions to federal, state, and local agencies with key information such as plume hazard areas, expected health effects, protective action recommendations (such as for sheltering or evacuation), and the affected population. NARAC and IMAAC staffs are available 24 hours a day, 7 days a week, to provide support and detailed analyses to emergency responders. IMAAC does not replace or supplant the atmospheric transport and dispersion modeling activities of other agencies whose modeling activities support their missions. However, IMAAC provides a single point for the coordination and dissemination of federal dispersion modeling and hazard prediction products that represent the federal position during actual or potential incidents requiring federal coordination. IMAAC aims to draw on and coordinate the best available capabilities of participating agencies. It entered into a memorandum of understanding with several agencies in December 2004, including DOD, DOE, EPA, and NOAA, on their roles and responsibilities for supporting and using IMAAC’s analyses and products. According to NARAC and IMAAC operations staff, NARAC and IMAAC can provide an automated prediction for CBRN events within 5 to 15 minutes. TOPOFF 3, conducted April 4 to April 8, 2005, simulated the release of mustard gas and a high-yield explosive in New London, Connecticut. Despite the creation of IMAAC and its mission to coordinate the best available modeling capabilities of federal agencies, TOPOFF 3 revealed continuing problems in coordinating the results of competing modeling outputs. Exercise results from DHS internal reports indicated that IMAAC did not appear to have adequate procedures for dealing with discrepancies or contradictions in inputs or modeling requests from various agencies. Although numerous modeling analyses and predictions were continually refined and confirmed as evidence and field measurements were collected, conflicting and misleading data other agencies submitted on the source of attack and hazard areas resulted in confusion. According to NARAC and IMAAC operations officials, however, IMAAC was continuously in contact with state and local responders to resolve discrepancies in modeling inputs and requests and to correct misinformation. 
IMAAC provided its first modeling analysis 49 minutes after it was notified of a truck bomb explosion near a large public gathering in New London, Connecticut. The modeling prediction had estimated that a 55-gallon drum of mustard agent could be released in a small explosion involving a small truck and that the public could suffer serious health effects. Connecticut officials said that initial modeling was done when the hazmat teams arrived at the explosion site; NARAC and IMAAC were contacted after 30 minutes, and the hazmat team gave NARAC input. The NARAC modeling analysis was reviewed, but information received from the FBI resulted in tweaks to the model. A second IMAAC modeling analysis more than 2 hours after the explosion determined that the truck explosion had not caused the observed blister agent effects. Instead, reports of a small aircraft flying over the New London City Pier area had led IMAAC to develop another analysis that concluded that only an airplane’s release could have caused the casualties. In fact, about 2 hours before the truck explosion, a small aircraft had flown over the New London City Pier, releasing mustard in a gaseous form over the area. IMAAC operations officials stated that they determined that the bomb could not have caused the mustard gas casualties based on (1) information that exposure victims were reporting at the time of the explosion and (2) its own analysis that the size of the truck bomb explosion would have destroyed virtually all chemicals that might have been associated with the bomb. Five hours after the explosion, IMAAC developed a third modeling analysis, based on the small aircraft’s dumping the mustard agent, estimating that the public gathering at the pier would develop significant skin blistering, consistent with the casualty reports. IMAAC refined this prediction, based on field data received from state and local responders, and a fourth modeling analysis 10 hours after the explosion predicted significant skin exposures and some inhalation effects. NARAC and IMAAC officials stated that IMAAC continuously informed users that its analyses showed that the plane, and not the bomb, was the only source of contamination consistent with available data but was unable to correct other agencies’ misperceptions. Several other agencies insisted that the source of the blister agent was the truck bomb. IMAAC continued during the next day to receive contradictory requests for products that did not incorporate dispersion from an airplane. The Connecticut Department of Environmental Protection requested an updated model run, based on a ground release, and DHS’s S&T instructed IMAAC to produce model runs that did not include the airplane. The Connecticut Joint Field Office also sought plume products that assumed either an air or a ground release but not both. In addition, considerable misleading information came from the field, according to IMAAC operations, as additional field measurements were collected. This misinformation resulted from state officials’ claim that the FBI had determined that the plane contained no chemicals. However, with additional field data, IMAAC conducted another modeling analysis that confirmed that a release from the aircraft was the only plausible source. On the third day, IMAAC, with the full set of 158 field measurements, again confirmed that the airplane’s release was the source. 
According to Connecticut officials, contradictory data and analysis caused confusion regarding the hazard area and whether to shelter the population in place or evacuate. They stated that they received definitive analyses from IMAAC that would allow people to evacuate their premises. While weather forecasts indicated that rainfall would wash away any mustard gas on the ground, EPA disagreed, interpreting its own data as showing more contamination on the ground. EPA could not, however, explain the origin of these data, and NARAC and IMAAC had no knowledge of them. The issue was finally resolved by deciding not to use the EPA data. Exercise results from DHS internal reports concluded that IMAAC did not appear to have adequate procedures for dealing with discrepancies or contradictions in inputs or modeling requests from various agencies. Among the recommendations made were that IMAAC (1) clarify processes for receiving and reviewing other modeling products, (2) establish a protocol for other modeling agencies to distribute to their consumers on the purpose of IMAAC's product and guidelines for redistribution, and (3) develop procedures on how IMAAC should handle discrepancies in data inputs or product requests. IMAAC officials do not concur with the exercise findings and conclusions regarding the effectiveness of IMAAC's federal plume modeling coordination during the exercise. They state that significant progress was demonstrated during TOPOFF 3 in coordinating federal plume modeling despite the fact that TOPOFF 3 was conducted in April 2005, less than a year after IMAAC's creation and the interagency agreement on its roles. They further state that IMAAC successfully coordinated federal plume modeling and provided it to federal, state, and local agencies. There were no "dueling federal plume models with inconsistent results," as had been observed during TOPOFF 2. However, the exercise did demonstrate a need for procedures for dealing with conflicting modeling requests from various agencies. IMAAC officials state that IMAAC's procedures now call for an IMAAC Operations Coordinator to coordinate modeling requests and tasking. IMAAC officials said that they were unable to obtain a copy of the internal DHS report on exercise results from TOPOFF 3 and were not given an opportunity to provide input and review and correct the contents of the report. An official in FEMA's National Exercise Division said that TOPOFF 3 had an established process for obtaining comments from each of the participating agencies and from participants within DHS. However, the official could not explain why IMAAC was not given a copy of the report and a chance to provide comments. An IMAAC Technical Working Group developed the first version of its standard operating procedures in December 2005. However, it described a generalized concept of operations that does not specify procedures for coordinating modeling inputs from other agencies or procedures for CBRN incidents. The initial procedures identified as a key issue the need to clarify the type and scale of what would constitute a major CBRN incident that qualifies for IMAAC assistance. The procedures described the various levels of engagement and notification for activation of IMAAC but did not define the type and scale of what constitutes an incident qualifying for IMAAC assistance. IMAAC's director said that the use of plume modeling during TOPOFF 2 and 3 primarily showed the lack of coordination among the participants on how to use technology.
State and local responders are not required to use IMAAC plots, and IMAAC does not become the single federal point for coordinating and disseminating federal dispersion modeling and hazard prediction products until a significant CBRN event is declared. Agreement must be obtained from all federal agencies before a coordinated response can be implemented. Although officials from DHS’s S&T stated that the concept of operations and specific procedures for CBRN incidents were to be completed by the end of 2006, IMAAC’s standard operating procedures have not yet been revised to (1) develop common/joint IMAAC emergency response practices with federal, state, and local agencies for dealing with contradictory plume modeling information from other agencies during a CBRN event; (2) refine the concept of operations for chemical, biological, and radiological releases; and (3) delineate the type and scale of major CBRN incidents that would qualify for IMAAC assistance. The issue of how a significant CBRN incident is to be defined was clarified in the 2006 National Response Plan Notice of Change, and the new IMAAC activation language has been changed to support “incidents requiring federal coordination.” NARAC and IMAAC officials noted that while these procedures are important, they would not have affected the confusing field information in TOPOFF 3. In addition, operating procedures were meant to cover only the interim period, until the permanent configuration of IMAAC has been determined. TOPOFF 4 was conducted October 15–19, 2007, and used a radiological dispersal device scenario that included coordinated attacks in Guam, Portland, Oregon, and Phoenix, Arizona. On April 10, 2008, FEMA released its initial analysis and impressions of the exercise in an “After Action Quick Look Report.” Regarding plume modeling conducted during the exercise, the report stated that IMAAC provided consequence predictions and that there were no “dueling plume models,” as was observed during TOPOFF 2. According to the report, the processes established after TOPOFF 2 to minimize differences in plume modeling outputs and provide one source for consequence predictions appeared to be effective. IMAAC models were requested and used in all venues and decision makers appeared to understand that the model was only a prediction and would be periodically upgraded as actual data were collected and analyzed. However, the report noted that while most federal, state, and local agencies were familiar with IMAAC and its responsibility for producing consequence predictions, they had difficulty interpreting the plume and consequence models predicting radiation dispersal. Local decision makers had to rely on state and local subject matter experts during the first 24 to 48 hours of the response for immediate protective action recommendations. The report stated that it proved to be a challenge to get that expertise to key state and local decision makers during the exercise. The Chief of the Exercise Division at DHS stated that a better format was needed for decision makers, such as governors and mayors without scientific backgrounds, to use to interpret model predictions and communicate these predictions to the public. Model evaluations and field testing show that plume models federal agencies have developed specifically for tracking the release of CBRN materials in urban areas have some of the same limitations as the older models used for emergency response. 
Few models have been sufficiently validated against meaningful urban tests, and these models are not yet used regularly in emergency response applications. The urban models show much variability in their predictions, and obtaining accurate source term data is also a problem. Three such models are the Urban Dispersion Model (UDM), Quick Urban and Industrial Complex (QUIC) dispersion modeling system, and CT-Analyst. UDM, a component of the DTRA HPAC modeling suite shown in table 4, is a Gaussian puff model designed to calculate the flow of dispersion around obstacles in an urban environment. According to modeling experts, Gaussian models are fast (less than a second), but their precision is poor. DTRA entered into a cooperative agreement in fiscal year 2000 with the United Kingdom’s Defence Science and Technology Laboratory and Defence Research and Development Canada to develop UDM. The program’s objective was to enhance HPAC models in an urban domain. In fiscal year 2000, the UDM program’s first year, it developed an initial urban modeling capability; it implemented a special version of HPAC in fiscal year 2001, added three new urban modeling components and conducted two dispersion experiments in fiscal year 2002, conducted the largest urban dispersion experiment in history in collaboration with DHS and performed independent verification and validation of the urban modules in fiscal year 2003, and included operational urban capabilities in fiscal year 2004. UDM combines the standard HPAC developed for rural environments with urban canopy wind and turbulence profiles, urban dispersion models, and an urban flow model. It was used at the 2001 U.S. presidential inauguration, 2002 Salt Lake Winter Olympics, 2004 Democratic and Republican conventions in Boston and New York City, and other high-profile events. UDM was subjected to a validation and verification program that compared model predictions against a comprehensive selection of measurements drawn from a database of field experiment trials. It was compared with three different field trials covering ranges from tens of meters to kilometers. Model predictions showed a typical error of greater than 50 percent of the mean, and more than 54 percent of the predictions were within a factor of 2. However, the field trials also showed a trend toward underprediction at close-in distances and overprediction at greater distances from the source. The model was found to overestimate plume width with increasing distance and, as a result, to underestimate plume concentration. The QUIC dispersion modeling system produces a three-dimensional wind field around buildings, accounts for building-induced turbulence, and contains a graphic user interface for setup, running, and visualization. QUIC has been applied to neighborhood problems in Chicago, New York City, Salt Lake City, and Washington, D.C. QUIC has medium speed (1 to 10 minutes) and fair accuracy, according to modeling experts. The Naval Research Laboratory and other groups have developed models, like CT-Analyst, that use CFD for fast-response applications. According to LLNL modeling experts, CFD models provide the highest fidelity simulations of the transport and diffusion of hazardous materials but are computationally more expensive and slow to operate. They can capture transient phenomena, such as plume arrival and departure times and peak concentrations. 
Accurate knowledge of such peak concentrations is critical for determining the effect of many chemical releases, for which the health effects depend on instantaneous or short-term peak exposures rather than time-integrated dose. CFD models can predict the variation of concentrations over small (1-second) time scales and over small grid volumes (about 1 cubic meter). Evaluations and field testing have shown an unpredictable range of uncertainty in urban dispersion models' analyses. A series of urban field experiments has been sponsored by a number of agencies since 2000. In October 2000, DOE sponsored a meteorological and tracer field study of the urban environment and its effect on atmospheric dispersion. Called Urban 2000, the study included seven intensive nightlong operation periods in which extensive meteorological measurements were made and tracer gases of sulfur hexafluoride and perfluorocarbon were released and tracked across Salt Lake City. Led by DOE and several DOE National Laboratories, the study covered distances from the source ranging from 10 meters to 6 kilometers. DTRA, U.S. Army Dugway Proving Ground, and NOAA also participated. In one evaluation of six urban dispersion models using the Salt Lake City field data, it was found that while the six models did a good job of determining the observed concentrations and source term, there were indications of slight underpredictions or overpredictions for some models and some distances. The urban HPAC model slightly overpredicted at most distances; another evaluation of HPAC found consistent mean overpredictions of about 50 percent. For HPAC model predictions of the lateral distance scale of concentration distribution, HPAC predicted within a factor of 2 only about 50 percent of the time. In another 2003 evaluation, conducted by the Institute for Defense Analyses (IDA), it was found that, in general, urban HPAC overpredicted the observed concentrations and dosages of URBAN 2000. Of 20 model configurations examined (four model types each considered with five weather input options), 19 led to overpredictions of the total observed concentration or dosage. The IDA study concluded that the general overprediction of the URBAN 2000 observations by the Urban HPAC suite is a relatively robust conclusion. HPAC predictions of 30-minute average concentrations or the 2-hour dosage were plagued, in general, by substantial overpredictions. Model predictive performance was also degraded at the longer downwind distances. An evaluation of QUIC found that the model predicted concentrations within a factor of 2 of the measurements 50 percent of the time. According to LANL modeling experts, QUIC performed reasonably well, slightly underestimating the decay of the concentrations at large distances from the source. However, it also significantly underpredicted lower concentrations at large distances downwind. A field study called Joint Urban 2003 and sponsored by DHS, DOE, and DTRA was conducted in Oklahoma City in July 2003. Its objectives were similar to those of URBAN 2000. The study included a series of experiments to determine how air flows through the urban area both day and night and to learn about the concentrations in the air of sulfur hexafluoride and perfluorocarbon. A 2006 IDA study that used the Joint Urban 2003 data to assess the Urban HPAC capabilities found significant differences in model performance, depending on time of day. Daytime performance was better than nighttime performance for the meteorology inputs used, with a large day-night discrepancy.
The urban subcomponents of the HPAC model (the urban canopy, urban dispersion model, and urban wind field module) all tended to underpredict during the day and overpredict at night. A 2007 IDA study confirmed that there was a substantial difference in the performance of Urban HPAC as a function of day and night. For all meteorology inputs IDA used, daytime releases tended to be underpredicted and nighttime releases tended to be overpredicted. LANL found that QUIC model predictions of Joint Urban 2003 tracer releases underestimated concentrations up to a factor of 10. An LLNL assessment of the performance of CFD models that also used data from Joint Urban 2003 found that CFD models did not capture the effects of turbulence and winds caused by nocturnal low-level jets—that is, winds during the night at altitudes of 400 meters above ground. Turbulence generated by these low-level jets can induce mixing that reaches the surface, thereby influencing the dispersion of hazardous materials. The New York City Urban Dispersion Program conducted field studies in March 2005 and August 2005 that evaluated seasonal variations in the New York City area. The aim was to learn about the movement of contaminants in and around the city and into and within buildings and to improve and validate computer models that simulate the atmospheric movement of contaminants in urban areas. Inert perfluorocarbon and sulfur hexafluoride were released to track air movement. More than 200 samplers collected tracer samples at more than 30 locations. Results from the New York City field experiments found that first responders should always use wind directions measured at the tops of tall buildings for making approach and evacuation decisions and that ready availability of building-top winds is essential. According to NOAA modeling experts, however, such data are not always routinely available. NARAC modeling experts also said that wind speeds will not necessarily reflect the complex flows that occur at ground and building levels, where the wind may be moving in completely different directions. In addition, the experiment found that (1) first responders should be aware that hazardous clouds may be encountered one to two blocks upwind from a known or suspected release site; (2) the roofs of nearby tall buildings should not be considered safe havens for street-level releases because of the rapid vertical dispersion around buildings; and (3) wind sensors should not be automatically collocated with CBRN detectors, and winds should not be measured adjacent to CBRN detectors in street canyons in order to interpret the direction or extent of a release location. According to modeling experts, urban modeling systems require additional field evaluation. NOAA's modeling experts have noted that even after several field studies and evaluations have been conducted, very limited data are available to evaluate models under varying urban and meteorological conditions and to lead to improved simulations of difficult situations, such as light winds and the interface with the environment of buildings, subways, and the like. They believe that additional tracer studies should be conducted to address these issues. LLNL modeling experts stated that funding is not sufficient to make use of all the data generated by field studies in order to improve understanding of key urban processes, evaluate model performance, and build improved urban models.
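A note on the evaluation statistics cited throughout these studies: the "within a factor of 2" measure (often abbreviated FAC2 in the dispersion-modeling literature) and an overall over- or underprediction bias are computed from paired model predictions and field observations. The sketch below, in Python with made-up numbers, illustrates the calculation; published HPAC and QUIC evaluations use these and related (often geometric) statistics.

```python
import numpy as np

def fac2_and_bias(predicted, observed):
    """Two conventional dispersion-model evaluation statistics.

    FAC2 is the fraction of paired predictions within a factor of 2 of the
    corresponding observation.  Bias here is the simple ratio of means
    (values above 1 indicate overprediction on average).
    """
    p = np.asarray(predicted, dtype=float)
    o = np.asarray(observed, dtype=float)
    ratio = p / o
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))
    bias = p.mean() / o.mean()
    return float(fac2), float(bias)

# Hypothetical paired tracer dosages (arbitrary units).
predictions = [1.2, 0.4, 3.0, 9.0, 0.9]
observations = [1.0, 1.0, 2.0, 3.0, 1.0]
print(fac2_and_bias(predictions, observations))
# -> (0.6, 1.8125): 60 percent of pairs within a factor of 2, with mean overprediction
```

A model can therefore score reasonably on FAC2 while still showing the systematic day-night or distance-dependent biases described above.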
According to unclassified assessments, the most likely type of toxic chemical attack on the United States would involve dual-use chemicals from industrial sources. The 13 highest-priority TICs are inhalation toxics that are shipped in large quantities; the most dangerous are those with low boiling points that are transported as pressurized liquids. According to modeling experts, the highest-priority TICs from the perspective of rail or truck transport are ammonia, chlorine, and sulfur dioxide. They are stored and shipped as pressurized liquefied gases, have low boiling points, and result in dense two-phase (gas and liquid) clouds. Recent rail accidents have shown that these chemicals, released as a dense, two-phase cloud of gas and small but visible aerosol drops, would spread initially in all directions and follow terrain slopes. Modeling experts believe that source emissions models for such releases need improvement. Source emissions formulas and models included in comprehensive, widely used models such as HPAC have been extensively reviewed. A study for the Defense Advanced Research Projects Agency, for example, indicated that while HPAC provides some source emissions algorithms for industrial chemical release scenarios, many emissions scenarios remain difficult to model. It is difficult to model emissions scenarios such as the quick release of pressurized liquid ammonia or chlorine from a rail car or tanker truck, the plume from a burning pool, the geometry and physical and chemical characteristics of a boiling liquid expanding vapor explosion or an intentional explosion, and any release in complex terrain. The 2007 version of HPAC does not consider two-phase releases. In addition, sufficient field data for most real scenarios do not exist because it is too dangerous to carry out a full-size experiment such as the release of the total contents of a rail car carrying chlorine or the explosion of a large propane storage tank. Available source emissions algorithms are based on theory and on small-scale field and laboratory experiments. LANL, the developer of QUIC, has been working to enhance QUIC's ability to address dense gas two-phase releases in the midst of buildings. LANL has also been enhancing QUIC's ability to deal with other issues that arise with chemical, biological, and radiological releases in cities: multiple-particle-size releases and their deposition characteristics on building surfaces, the buoyant rise of particles after an explosive release of material, and the influence of building-induced winds on buoyant rise and dispersion. DHS and DTRA are also investigating critical data and physics gaps that need to be resolved in order to develop appropriate chemical source term models. In addition, NARAC is improving the capability of its CFD urban model, FEM3MP, to combine complex source terms, dense gas effects, chemical reactions, and building-scale effects. DOD's development of the Joint Effects Model relies on the ability to extract and derive key information on the CBRN source term from available CBRN and meteorological sensors and to use this information to predict the CBRN downwind hazard. According to DTRA, the Joint Effects Model will provide the military with a single validated ability to predict and track CBRN and TIC effects, as well as estimates of the source location and source term and the ability to make refined dispersion calculations.
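To illustrate one piece of the source emissions problem described above: when a pressurized liquefied gas such as chlorine or ammonia is suddenly released, part of the superheated liquid flashes to vapor immediately, with the remainder forming aerosol droplets and liquid pools that evaporate later. A rough energy-balance estimate of the flashed fraction is shown below; the property values are approximate, and the calculation omits the aerosol, rainout, and pool evaporation behavior that real source term models must treat.

```python
def adiabatic_flash_fraction(t_storage_k, t_boil_k, cp_liquid, latent_heat):
    """Fraction of a superheated pressurized liquid that flashes to vapor
    on sudden depressurization (simple energy balance).

    cp_liquid and latent_heat are in kJ/(kg*K) and kJ/kg, respectively.
    """
    superheat = max(t_storage_k - t_boil_k, 0.0)
    return min(cp_liquid * superheat / latent_heat, 1.0)

# Chlorine released from a tank car at roughly 20 C, using approximate
# properties: boiling point ~239 K, liquid heat capacity ~0.95 kJ/(kg*K),
# latent heat of vaporization ~288 kJ/kg.
print(adiabatic_flash_fraction(293.0, 239.0, 0.95, 288.0))   # about 0.18
```

Even this idealized estimate shows why such releases behave so differently from a simple gas leak: most of the inventory initially remains liquid, and its subsequent behavior depends strongly on the release geometry that, as noted above, is difficult to characterize.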
JEM was scheduled for full operation by fiscal year 2009, and the second increment of JEM, scheduled to be operational by fiscal year 2011, will include the ability to predict hazard areas and effects for urban areas. Urban plume models rely, as we have shown, on a wide range of data, but the difficulty of modeling the transport and dispersion of CBRN materials in complex urban settings has exposed significant gaps in the data on how CBRN releases would affect urban populations. First, exposure rates the population would experience in an urban environment would be affected by the physical environment and where people work and live. Existing urban databases, however, have significant gaps in both quantity and quality of information on land use and complex urban terrain; knowledge as to where critical populations are located is also needed to focus predictions. Second, scientific research on the health effects of low-level exposure to CBRN material on civilian populations is lacking, especially for vulnerable populations at risk. Urban land use type—residential, commercial, industrial—is used in meteorological models to assign building structure and composition parameters and other surface characteristics to the underlying terrain. Mesoscale meteorological models and many atmospheric plume models do not have the spatial resolution to simulate the fluid dynamics near and around buildings and other urban land features. Urban canopy parameters have been developed to allow plume models to simulate the effects of buildings and urban land features on plume transport and dispersion, wind speed and direction, and turbulent mixing. Accurate urban land use definition is therefore an important component in modeling efforts. The ability to conduct modeling in urban areas, however, is typically limited to the use of a single or simplistic set of land use categories that do not provide explicit information on the effect of buildings and surfaces on the flow and transport of hazardous substances in the air. Efforts to determine the structure and composition of urban areas have resulted in the development of large datasets of high-resolution urban features for many of the nation's largest cities. The National Building Statistics Database, for example, contains data for 17 U.S. cities at a 250-meter grid cell resolution. This database contains mean building heights and other such statistics. It also contains high-rise district footprints for 46 of the most populous cities. In addition, the National Geospatial-Intelligence Agency and the U.S. Geological Survey have created a database of urban building footprints and heights in various cities. Several efforts have been made to improve urban databases for urban plume modeling, such as creating a database for day and night populations. Geographic information that includes population density data is essential for a fast, effective first response to disasters and is the common thread in all planning, response, and recovery activities. Using geographic information systems and remote sensing, ORNL developed LandScan, a global population distribution model, database, and tool from census and other spatial data. LandScan is a collection of the best available census counts for each U.S. county and four key indicators of population distribution—land cover, roads, slope, and nighttime lights. Census tracts are divided into 1-kilometer grid cells, and each cell is evaluated for the likelihood of its being populated on the basis of the four indicators.
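The allocation step that completes this approach, described at the start of the next paragraph, amounts to distributing each tract's population across its cells in proportion to those likelihood scores. The following sketch uses hypothetical scores and a simple proportional weighting; ORNL's actual indicator weighting is not described in this report.

```python
import numpy as np

def allocate_tract_population(tract_population, cell_scores):
    """Distribute one census tract's population over its 1-km grid cells.

    cell_scores holds a nonnegative "likelihood of being populated" score
    per cell, derived in LandScan from land cover, roads, slope, and
    nighttime lights; the scores and weighting here are hypothetical.
    """
    scores = np.asarray(cell_scores, dtype=float)
    if scores.sum() == 0:               # no indicator of habitation in any cell
        return np.zeros_like(scores)
    weights = scores / scores.sum()
    return tract_population * weights

# A tract of 12,000 people over five cells: open water, a highway corridor,
# a dense lit road network, a suburban area, and a steep unlit slope.
print(allocate_tract_population(12000, [0.0, 2.0, 6.0, 3.5, 0.5]))
# [   0. 2000. 6000. 3500.  500.]
```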
The total population for each tract is then allocated to each cell, weighted by the calculated likelihood of being populated. ORNL's LandScan 2006 developed a high-resolution daytime population database. According to DTRA, DOD efforts have increased the number and quality of city databases, which are now available for 75 cities in the continental United States, with new ones added periodically. DTRA officials stated that enhancements in the UDM suite of urban domain characterizers have significantly improved the overall urban transport and dispersion modeling capability. According to NOAA weather experts, the standard national meteorological observing network does not provide sufficient spatial resolution to resolve local conditions that influence urban plumes. While a number of "mesonets" provide meteorological observations with relatively high spatial resolution over a limited domain, the quality of data from them varies significantly, according to NOAA officials. They stated that to provide reliable data for plume predictions, mesonet design should be considered, the quality of data from relevant mesonets should be characterized, and appropriate data screening and transformation approaches should be developed. Research is required to determine how best to incorporate urban mesonet data into plume models. Establishing urban test beds has been proposed as a way to provide critical data to improve urban plume modeling. An urban test bed is a multifunctional infrastructure of atmospheric instruments that provides continuous, multiyear measurement and archiving of environmental data across a metropolitan area and through the atmospheric boundary layer. An urban test bed would be used to support improvements in a range of activities from scientific research to user applications. In a September 2004 study, OFCM and other agencies recommended the implementation of multiple urban test beds. Urban test beds would provide (1) long-term, continuous, high-resolution meteorological observations of the urban domain and (2) long-term measurement and archiving of data on atmospheric processes and modeling in urban environments. NOAA has implemented a dispersion measurement test bed called DCNet in Washington, D.C., to provide dispersion computations for planning and possible response. According to LLNL modeling experts, a major issue has been how to provide cost-effective access to building, land use, population, and other geographic databases as well as local meteorological data, establish common formats for databases, and enforce quality assurance standards. Significant gaps exist in first responders' information for determining the effects of exposure to CBRN materials on heterogeneous urban populations. Scientific research on the effects of low-level exposure to CBRN material on civilian populations is severely lacking, especially for vulnerable populations such as elderly people, children, and individuals with compromised immune systems. A dose that may not be lethal for a healthy young adult might be lethal for such persons. For example, in the 2001 anthrax attack, many postal workers exposed to high concentrations over a prolonged period did not develop anthrax disease, while an elderly woman in Connecticut with a compromised immune system died, presumably from inhaling very few spores. Data are needed on exposure and dose assessments to identify vulnerable populations and how to adjust individual and population postevent activities and behavior to reduce numbers of casualties.
Determining the health effects of exposure to chemical agents depends on a hierarchy of EPA-published chemical exposure limits and on the chemical dose-response relationships used in modeling. EPA has assigned three acute exposure guideline levels (AEGL) to TICs that could represent dangerous inhalation exposure from releases to air by accident or terrorist action. AEGLs are threshold exposure limits for the general public and apply to emergency exposure periods ranging from 10 minutes to 8 hours. They are intended to help protect most people in the general population, including those who might be particularly susceptible to the deleterious effects of chemical substances, and are expressed as an airborne concentration in parts per million or milligrams per cubic meter. However, dose-response parameters for the general population do not exist for most CB warfare agents believed to pose a threat to civilians. For radiological exposures, DHS and EPA provide Protective Action Guidelines that identify the radiation levels at which state and local officials should take various actions to protect human health during an accident. At AEGL-1, the general population, including susceptible individuals, could experience notable discomfort, irritation, or certain asymptomatic nonsensory effects. The effects are not disabling and are transient and reversible when exposure ceases. At AEGL-2, the general population could experience irreversible or other serious, long-lasting adverse health effects or an impaired ability to escape. At AEGL-3, exposure would be life-threatening or fatal. For chemicals for which AEGLs have not been established, the Emergency Response Planning Guidelines of the American Industrial Hygiene Association are used. If neither EPA nor the Association has established a value for a chemical, then DOE's temporary emergency exposure limits are used. AEGLs and other estimates attempt to describe the lower end of the dose-response curve for particular chemical agents. LLNL modeling experts stated that for chemical weapon and biological agents, they determine health effects levels from literature reviews. Toxicity estimates for the general population are required for hazard prediction models. Data are needed on exposure and dose assessments to identify populations at risk from primary or secondary contact and how to adjust individual and population postevent activities and behavior to reduce casualties. According to the Armed Forces Medical Intelligence Center, 50 percent lethal concentrations and dosages are unknown for most chemicals, and detailed information on high-volume chemicals and processes is not widely available. Little scientific research has been done on the effects of low-level exposure to CBRN material on civilian populations, especially vulnerable populations at risk. ECBC has the task of providing human chemical warfare agent toxicity estimates for the general population, together with supporting analyses. According to ECBC studies, most of the available toxicological data underlying human toxicity estimates for chemical warfare agents were generated in support of chemical weapons development for offensive battlefield deployment against military personnel, who at the time of the studies were nearly all male. Thus, the available human data represent a very limited segment of the population—relatively young, fit male soldiers.
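In hazard prediction, these guideline levels are applied by comparing a modeled or measured time-averaged concentration at a location with the published thresholds for the relevant exposure duration. The sketch below uses invented threshold numbers rather than any chemical's actual AEGLs, which are chemical- and duration-specific and published by EPA.

```python
# Hypothetical AEGL-style thresholds (ppm) for one chemical at a 30-minute
# exposure duration.  These numbers are illustrative only.
AEGL_30_MIN_PPM = {"AEGL-1": 0.5, "AEGL-2": 2.8, "AEGL-3": 20.0}

def classify_exposure(concentration_ppm, thresholds=AEGL_30_MIN_PPM):
    """Return the highest AEGL tier that a modeled 30-minute average
    concentration meets or exceeds (None if it is below AEGL-1)."""
    tier = None
    for name, limit in sorted(thresholds.items(), key=lambda item: item[1]):
        if concentration_ppm >= limit:
            tier = name
    return tier

for c in (0.2, 1.0, 35.0):
    print(c, "ppm ->", classify_exposure(c))
# 0.2 ppm -> None; 1.0 ppm -> AEGL-1; 35.0 ppm -> AEGL-3
```

Mapping model output to such tiers is straightforward; as the surrounding discussion notes, the harder problems are the reliability of the modeled concentrations themselves and the absence of general-population dose-response data for many agents.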
Because the available human data represent such a limited segment of the population, using military values for civilian scenarios would result in the underestimation of civilian casualties and the overall threat to civilian populations from potential or actual releases. ECBC has been developing mathematical models to estimate general population toxicity values from previously established military values. For example, figure 3 shows dose-response curves for the fraction of a healthy military population and of the general population that would be killed by a 2-minute exposure to sarin. Despite several initiatives and investments DHS and other agencies have undertaken since 2001, first responders do not have effective tools to respond to events involving the release of CBRN materials in urban areas. Detection systems are limited in their ability to provide the timely and accurate information first responders need about the release of CBRN materials in urban areas to make decisions on expected health effects and protective action—for example, sheltering and evacuation. Existing nonurban and urban plume models for emergency response to CBRN events have several limitations as a primary tool for tracking the release of CBRN materials in urban areas and for making decisions about handling them. National TOPOFF exercises have also shown the problems and confusion that disparate modeling inputs and results could cause for first responders during CBRN events. In addition, more data are needed about the effects of hazardous materials in built-up urban environments. Continued improvements are needed in urban building and population databases and in understanding the health effects of concentrations of hazardous substances, especially on vulnerable populations, so that first responders are properly prepared for addressing airborne releases of harmful materials in urban areas. Led by DHS, ongoing federal efforts have attempted to improve the capabilities of detection systems and models so that first responders can accurately identify CBRN materials released in urban environments, the extent of their dispersion, and their effect on urban populations. For detection equipment, one shortcoming that should be addressed is the lack of emphasis on the development of detection equipment that first responders can use to detect radiological materials in the atmosphere. DHS has recognized the threat of a terrorist attack involving the explosion of radiological dispersal devices—or dirty bombs—and has used this as a scenario in TOPOFF exercises. However, DHS's development of radiation detection equipment has largely focused on the interdiction of radioactive material rather than on detecting the release of radioactive material into the atmosphere in urban areas. We found that agencies such as DHS, DOD, EPA, NIST, and NOAA do not have missions to develop, independently test, and certify equipment for detecting radiological materials in the atmosphere. Another shortcoming is the lack of a formal DHS system to independently test and validate the performance, reliability, and accuracy of CBRN detection equipment that first responders acquire. While DHS indicated it has missions to develop, independently test, and certify CB detection equipment for first responders' use, its testing and certification are limited to equipment DHS is developing and do not extend to equipment developed by commercial manufacturers.
As we have noted, DHS has no evaluation and qualification program that guides and informs first responders on the veracity of manufacturers’ claims about the performance of their CBRN detection systems. DHS has no control over what manufacturers can sell to first responders and cannot order first responders not to purchase a certain piece of equipment, unless purchased with federal funds. A formalized process needs to be established for the evaluation and validation of manufacturers’ claims regarding commercial biodetection equipment. While existing urban plume models have several limitations as a primary tool for tracking the release of CBRN materials in urban areas, the TOPOFF exercises demonstrated the larger problem of confusion among first responders about the timing, value, and limitations of plume models and other analyses following a CBRN event. At best, models can give a close approximation and can help inform a decision maker on the probable plume. The TOPOFF exercises demonstrated that plume model results developed without the incorporation of field data are only estimates that should be used for guidance but are not an accurate rendition of the actual situation facing first responders. Plume models are most effectively used to provide early estimates of potentially contaminated areas in combination with data gathered from the field. These data, in turn, are used to update plume model predictions. The major weakness of these models is that any real source release is nearly always more complicated than the simple scenarios studied in the field and wind tunnel experiments they are based on. Real sources tend to vary in time and space and to occur when the atmosphere is variable or rapidly changing. A small change in wind direction or height of release can result in a different or a more or less populated area being affected. During the TOPOFF exercises, first responders and decision makers used plume model predictions as real-time information on which to base decisions. In addition, the TOPOFF 2 and 3 exercises demonstrated that while IMAAC is designated the focal point for coordinating and disseminating modeling products, it does not have adequate procedures to deal with discrepancies or contradictions from competing models from various agencies. DHS’s preliminary assessment of the TOPOFF 4 exercise found improvement in IMAAC’s coordination of federal plume modeling to minimize differences in model outputs and provide one source for consequence predictions. However, IMAAC Operations officials said the key to “deconflicting” plume modeling information is to have procedures that are coordinated and integrated with those of first responders and other local emergency response agencies. IMAAC also does not have a concept of operations or specific procedures for significant CBRN incidents. A key issue is the need to clarify the type and scale of what major incident could constitute a potentially significant CBRN event and qualify for IMAAC assistance. 
We recommend that the Secretary of Homeland Security reach agreement with DOD, DOE, EPA, and other agencies involved with developing, testing, and certifying CBRN detection equipment on which agency should have the missions and responsibilities to develop, independently test, and certify detection equipment that first responders can use to detect hazardous material releases in the atmosphere; ensure that manufacturers’ claims are independently tested and validated regarding whether their commercial off-the-shelf CBRN detection equipment can detect given hazardous material at specific sensitivities; refine IMAAC’s procedures by working with other federal, state, and local agencies to (1) develop common/joint IMAAC emergency response practices, including procedures for dealing with contradictory plume modeling information from other agencies during a CBRN event; (2) refine the concept of operations for chemical, biological, and radiological releases; and (3) delineate the type and scale of major CBRN incidents that would qualify for IMAAC assistance; and in conjunction with IMAAC, work with the federal plume modeling community to accelerate research and development to address plume model deficiencies in urban areas and improve federal modeling and assessment capabilities. Such efforts should include improvements to meteorological information, plume models, and data sets to evaluate plume models. We obtained written comments on a draft of this report from DHS and the Department of Commerce. DHS concurred with our recommendations but stated that GAO should consider other scenarios as alternative ways of looking at the present national capabilities for CBRN response and the current status of testing and certifying detection equipment. DHS stated that in one alternative scenario, first responders, in the event of a terrorist attack, will use a variety of prescreening tools, and they will be assisted immediately by state and federal agencies that will bring the best available state-of-the-art CBRN detection equipment. In our report, we have considered scenarios in which first responders are on the scene before federal assets arrive, not knowing what hazardous materials (including CBRN agents) have been released, either accidentally or by terrorist acts. In these situations, it is the first responder who has to first determine what was released and what tools to use to make that determination before receiving assistance from state and federal agencies. By DHS’s own assessments, these state-of-the-art CBRN detection tools have significant limitations. DHS acknowledged that first responders do not now have any equipment that can detect the dispersion of radiological and nuclear materials in the atmosphere. DHS’s S&T Directorate assessed that while current detectors can be used for rapid warning of chemicals in the vapor phase, they are generally considered inadequate to provide information on the presence of chemical threat agents at less than lethal but still potentially harmful levels. According to DHS’s S&T, HHAs, the tool that first responders would use to detect biological threat agents, do not have the sensitivity to detect the atmospheric concentrations of agents that pose health risks. Moreover, the detection of biological agent aerosols and particulates through the current BioWatch sample collection and laboratory analysis process is time-consuming and labor intensive, with final confirmation occurring long after initial exposure. 
With respect to testing and validation of commercial CBRN detection equipment available for first responder use, DHS stated that there is no legislative requirement that such equipment for homeland security applications meet performance standards. DHS also believes that it will never be feasible for the federal government to fund testing of all commercial detectors without first assessing their potential merits for detection of CBRN agents because of the very large number of hazardous CBRN agents and the expense of testing detectors against these agents. While there is no legislative requirement that CBRN detection equipment for homeland security meet performance requirements, we noted in our report that DHS does require that commercial detection equipment first responders purchase with DHS grant funds comply with equipment performance standards adopted by DHS. However, DHS has adopted few performance standards for CBRN detection equipment. Without such standards, first responders may purchase detection equipment that does not detect harmful levels or whose performance varies. Without standards, there would be no way to ensure the reliability of the equipment’s detection capabilities. As we indicated in our report, DHS had adopted only four standards for radiation and nuclear detection equipment as of October 30, 2007. DHS acknowledged that current testing is mainly limited to DHS and DOD CBRN detection systems under development, and it has no process to validate the performance of commercial CBRN detection equipment. However, we are not recommending that DHS test all available commercial detection equipment. We are recommending that DHS independently test and evaluate detection equipment first responders purchase using DHS grant funds. (DHS’s comments appear in appendix III.) In DOC’s general comments on our draft report, DOC stated that it believed that even with the implementation of our recommendations aimed at improving IMAAC operations, the plume models will still have several limitations as a primary tool for tracking the release of CBRN materials in urban areas. To improve information available for emergency managers, DOC suggested offering a recommendation that DHS work with the federal plume modeling community to accelerate research and development to address plume model deficiencies in urban areas. Such efforts should include improvements to meteorological information, plume models, and data sets to evaluate plume models. DOC acknowledged that these improvements would be likely to take several years, but work should be initiated while IMAAC is instituting improvements. We believe that DOC’s recommendation has merit and have included it in our final report for DHS’s consideration. DOC also stated that it believed that IMAAC should be working to improve federal modeling and assessment capabilities and to enhance the national scientific capability through cooperation among the federal agencies for incidents of national significance. IMAAC and the atmospheric transport and diffusion community should support OFCM in developing a joint model development and evaluation strategy. We also agree that IMAAC should continue to improve federal modeling and assessment capabilities with OFCM and other federal agencies involved with modeling terrorist-related or accidental releases of CBRN materials in urban areas. This is included in our recommendation. 
In technical comments on our draft report, IMAAC operations staff at LLNL stressed that improvements to plume modeling information and predictions are best achieved by establishing trusted working relationships with federal, state, and local agency operations centers and deployed assets. DOC also stated that the inference in our report that IMAAC will be providing a single dispersion solution is misleading. IMAAC, as a federal entity, provides a recommendation to the local incident commander, and the commander decides what information to use. This stems from the principle that all events are local in nature. DOC stated that it believed that the report should also highlight the need to promote an aggressive program of educating first responders and local incident commanders in the use of dispersion models. We clarified our discussion in the report about the role of IMAAC in order to remove any inference that it was expected to provide a single dispersion solution. We noted in our draft report that IMAAC does not replace or supplant the atmospheric transport and dispersion modeling activities of other agencies whose modeling activities support their missions. IMAAC provides a single point for the coordination and dissemination of federal dispersion modeling and hazard prediction products that represent the federal position during actual or potential incidents requiring federal coordination. We also noted in our conclusions that TOPOFF exercise results demonstrated the larger problem of confusion among first responders about the timing, value, and limitations of plume models and other analyses following a CBRN event. We agree that an aggressive program for educating first responders on the use of dispersion models is needed. DOC also commented on our discussion about the confusion from the models produced during the TOPOFF 2 exercise. DOC noted that the confusion resulted from models being generated using different meteorological inputs—real weather versus "canned" weather. We noted in our draft report that one major cause for the confusion was the use of different meteorological inputs in the modeling conducted during TOPOFF 2. (DOC's comments appear in app. IV.) We also received technical comments from DHS and DOC, from DOD, and from DOE (LLNL), and we made changes to the report where appropriate. Technical comments we received from LLNL, in particular, proposed broadening the recommendation related to revising IMAAC standard operating procedures to deal with contradictory modeling inputs. IMAAC operations staff at LLNL believed that integrating procedures with other emergency response agencies is the key to clarifying plume modeling information. They stated that their experience has shown that refining IMAAC's standard operating procedures is relatively ineffective unless this is coordinated with the development of joint operating procedures with other agencies, leading to the incorporation of IMAAC into these agencies' standard operations. We agreed and have revised our recommendation accordingly. We are sending copies of this report to the Secretaries of Commerce, Defense, Energy, and Homeland Security and others who are interested. We will also provide copies to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-2700.
Key contributors to this assignment were Sushil Sharma, Assistant Director, Jason Fong, Timothy Carr, and Penny Pickett. James J. Tuite III, a consultant to GAO during our engagement, provided technical expertise. To assess the capabilities and limitations of chemical, biological, radiological, and nuclear (CBRN) detection equipment, we interviewed federal program officials from the (1) Science and Technology directorate of the Department of Homeland Security (DHS) and its Homeland Security Advanced Research Projects Agency; (2) the Defense Threat Reduction Agency and the Joint Program Executive Office for Chemical and Biological Defense in the Department of Defense (DOD); and (3) the Department of Energy’s (DOE) Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, and Oak Ridge National Laboratory. We also met with program officials from DHS’s Responder Knowledge Base (RKB) and the Department of Commerce’s (DOC) National Institute of Standards and Technology’s Office of Law Enforcement Standards (OLES) to obtain information on equipment standards and the testing of CBRN detection equipment. We reviewed DHS, DOD, and DOE detection programs in place and being developed, as well as these agencies’ studies on CBRN detection systems. We attended conferences and workshops on CBRN detection technologies. To obtain information on detection equipment standards and the testing of CBRN detection equipment for first responders, we met with program officials from DHS’s RKB and OLES. We also interviewed local responders in Connecticut, New Jersey, and Washington on their acquisition of CBRN detection equipment. We chose these states because of their participation in DHS-sponsored Top Officials (TOPOFF) national counterterrorism exercises. In addition, we interviewed members of the InterAgency Board for Equipment Standardization and Interoperability (IAB). IAB, made up of local, state, and federal first responders, is designed to establish and coordinate local, state, and federal standardization; interoperability; compatibility; and responder health and safety to prepare for, train for and respond to, mitigate, and recover from any CBRN incident. To assess the limitations of plume models, we interviewed modeling experts from DHS, DOD, DOE’s national laboratories, DOC’s National Oceanic and Atmospheric Administration, and the Office of the Federal Coordinator for Meteorological Service and Supporting Research (OFCM) in the Department of Commerce. We also interviewed operations staff of the Interagency Modeling and Atmospheric Assessment Center (IMAAC) at LLNL. IMAAC consolidates and integrates federal efforts to model the behavior of various airborne releases and is the source of hazards predictions during response and recovery. We also interviewed local responders in Connecticut, New Jersey, and Washington regarding the use of plume models during the TOPOFF 2 and TOPOFF 3 exercises. We reviewed documentation on the various plume models and reports and studies evaluating models available for tracking CBRN releases in urban environments and studies identifying future needs and priorities for modeling homeland security threats. We attended several conferences and users’ workshops sponsored by the American Meteorological Society, DOD, OFCM, and George Mason University, where modeling capabilities were evaluated. We also reviewed DHS internal reports on lessons learned from the use of modeling during the TOPOFF national exercises. 
To determine what information first responders have for determining the effects of exposure to CBRN materials on heterogeneous civilian populations, we reviewed agency documentation and studies on urban land use and population density. We also reviewed documentation on acute exposure guideline levels published by the Environmental Protection Agency and other organizations. In addition, we reviewed studies on human toxicity estimates by the U.S. Army and DOE's national laboratories. We conducted our review from July 2004 to January 2008 in accordance with generally accepted government auditing standards.

First responders are responsible for responding to terrorist-related and accidental releases of CBRN materials in urban areas. Two primary tools for identifying agents released and their dispersion and effect are equipment to detect and identify CBRN agents in the environment and plume models to track the dispersion of airborne releases of these agents. GAO reports on the limitations of the CBRN detection equipment, its performance standards and capabilities testing, plume models available for tracking urban dispersion of CBRN materials, and information for determining how exposure to CBRN materials affects urban populations. To assess the limitations of CBRN detection equipment and urban plume modeling for first responders' use, GAO met with and obtained data from agency officials and first responders in three states. While the Department of Homeland Security (DHS) and other agencies have taken steps to improve homeland defense, local first responders still do not have tools to accurately identify right away what, when, where, and how much chemical, biological, radiological, or nuclear (CBRN) materials are released in U.S. urban areas, accidentally or by terrorists. Equipment local first responders use to detect radiological and nuclear material cannot predict the dispersion of these materials in the atmosphere. No agency has the mission to develop, certify, and test equipment first responders can use for detecting radiological materials in the atmosphere. According to DHS, chemical detectors are marginally able to detect an immediately dangerous concentration of chemical warfare agents. Handheld detection devices for biological agents are not reliable or effective. DHS's BioWatch program monitors air samples for biothreat agents in selected U.S. cities but does not provide first responders with real-time detection capability. Under the BioWatch system, a threat agent is identified within several hours to more than 1 day after it is released, and how much material is released cannot be determined. DHS has adopted few standards for CBRN detection equipment and has no independent testing program to validate whether it can detect CBRN agents at the specific sensitivities manufacturers claim. DHS has a mission to develop, test, and certify first responders' CB detection equipment, but its testing and certification cover equipment DHS develops, not what first responders buy. Interagency studies show that federal agencies' models to track the atmospheric release of CBRN materials have major limitations in urban areas. DHS's national TOPOFF exercises have demonstrated first responders' confusion over competing plume models' contradictory results.
The Interagency Modeling and Atmospheric Assessment Center (IMAAC), created to coordinate modeling predictions, lacks procedures to resolve contradictory predictions. Evaluations and field testing of plume models developed for urban areas show variable predictions in urban environments. These models are also limited in obtaining accurate data on the characteristics and rate of CBRN material released. Data on population density, land use, and complex terrain are critical to first responders, but data on the effects of exposure to CBRN materials on urban populations have significant gaps. Scientific research is lacking on how low-level exposure to CBRN material affects civilian populations, especially elderly persons, children, and people whose immune systems are compromised.
Titles XVIII and XIX of the Social Security Act establish minimum requirements that all nursing homes must meet to participate in the Medicare and Medicaid programs, respectively. With the passage of OBRA ‘87, Congress responded to growing concerns about the quality of care that nursing home residents received by requiring major reforms in the federal regulation of nursing homes. Among other things, these reforms revised care requirements that facilities must meet to participate in the Medicare or Medicaid programs, modified the survey process for certifying a home’s compliance with federal standards, and introduced additional sanctions and decertification procedures for homes that fail to meet federal standards. Following OBRA ‘87, CMS published a series of regulations and transmittals to implement the changes. Key implementation actions have included the following: In October 1990, CMS implemented new survey standards; in July 1995, it established enforcement actions for nursing homes found to be out of compliance; and it enhanced oversight through more rigorous federal monitoring surveys beginning in October 1998 and annual state performance reviews in fiscal year 2001. CMS has continued to revise and refine many of these actions since their initial implementation. Every nursing home receiving Medicare or Medicaid payment must undergo a standard survey not less than once every 15 months, and the statewide average interval for these surveys must not exceed 12 months. During a standard survey, separate teams of surveyors conduct a comprehensive assessment of federal quality-of-care and life safety requirements. In contrast, complaint investigations, also conducted by surveyors, generally focus on a specific allegation regarding resident care or safety. The quality-of-care component of a survey focuses on determining whether (1) the care and services provided meet the assessed needs of the residents and (2) the home is providing adequate quality care, including preventing avoidable pressure sores, weight loss, and accidents. Nursing homes that participate in Medicare and Medicaid are required to periodically assess residents’ care needs in 17 areas, such as mood and behavior, physical functioning, and skin conditions, in order to develop an appropriate plan of care. Such resident assessment data are known as the minimum data set (MDS). To assess the care provided by a nursing home, surveyors select a sample of residents and (1) review data derived from the residents’ MDS assessments and medical records; (2) interview nursing home staff, residents, and family members; and (3) observe care provided to residents during the course of the survey. CMS establishes specific investigative protocols for state survey teams—generally consisting of registered nurses, social workers, dieticians, and other specialists—to use in conducting surveys. These procedural instructions are intended to make the on-site surveys thorough and consistent across states. The life safety component of a survey focuses on a home’s compliance with federal fire safety requirements for health care facilities. The fire safety requirements cover 18 categories, ranging from building construction to furnishings. Most states use fire safety specialists within the same department as the state survey agency to conduct fire safety inspections, but some states contract with their state fire marshal’s office. 
Complaint investigations provide an opportunity for state surveyors to intervene promptly if problems arise between standard surveys. Complaints may be filed against a home by a resident, the resident’s family, or a nursing home employee either verbally, via a complaint hotline, or in writing. Surveyors generally follow state procedures when investigating complaints but must comply with certain federal guidelines and time frames. In cases involving resident abuse, such as pushing, slapping, beating, or otherwise assaulting a resident by individuals to whom their care has been entrusted, state survey agencies may notify state or local law enforcement agencies that can initiate criminal investigations. States must maintain a registry of qualified nurse aides, the primary caregivers in nursing homes, that includes any findings that an aide has been responsible for abuse, neglect, or theft of a resident’s property. The inclusion of such a finding constitutes a ban on nursing home employment. Effective July 1995, CMS established a classification system for deficiencies identified during either standard surveys or complaint investigations. Deficiencies are classified in 1 of 12 categories according to their scope (i.e., the number of residents potentially or actually affected) and their severity. An A-level deficiency is the least serious and is isolated in scope, while an L-level deficiency is the most serious and is considered to be widespread in the nursing home (see table 1). States are required to enter information about surveys and complaint investigations, including the scope and severity of deficiencies identified, in CMS’s OSCAR database. In an effort to better ensure that nursing homes achieve and maintain compliance with the new survey standards, OBRA ‘87 expanded the range of enforcement sanctions. Prior to OBRA ‘87, the only sanctions available were terminations from Medicare or Medicaid or, under certain circumstances, DPNAs. OBRA ‘87 added several new alternative sanctions, such as civil money penalties (CMP) and requiring training for staff providing care to residents, and expanded the types of deficiencies that could result in DPNAs. To implement OBRA ‘87, CMS published enforcement regulations, effective July 1995. According to these regulations, the scope and severity of a deficiency determine the applicable sanctions. CMS imposes sanctions on homes with Medicare or dual Medicare and Medicaid certification on the basis of state referrals. CMS normally accepts a state’s recommendation for sanctions but can modify it. Effective January 2000, CMS required states to refer for immediate sanction homes found to have harmed one or a small number of residents or to have a pattern of harming or exposing residents to actual harm or potential death or serious injury (G-level or higher deficiencies on the agency’s scope and severity grid) on successive surveys. This is known as the double G immediate sanctions policy. Additionally, in January 1999, CMS launched the Special Focus Facility program. This initiative was intended to increase the oversight of homes with a history of providing poor care. When CMS established this program, it instructed each state to select two homes for enhanced monitoring. For these homes, states are to conduct surveys at 6-month intervals rather than annually. 
In December 2004, CMS expanded this program to require immediate sanctions for those homes that fail to significantly improve their performance from one survey to the next and termination for homes with no significant improvement after three surveys over an 18-month period. Unlike other sanctions, CMPs do not require a notification period before they go into effect. However, if a nursing home appeals the deficiency, by statute, payment of the CMP—whether received directly from the home or withheld from the home’s Medicare and Medicaid payments—is deferred until the appeal is resolved. In contrast to CMPs, other sanctions, including DPNAs, cannot go into effect until homes have been provided a notice period of at least 15 days, according to CMS regulations; the notice period is shortened to 2 days in the case of immediate jeopardy. Although nursing homes can be terminated involuntarily from participation in Medicare and Medicaid, which can result in a home’s closure, termination is used infrequently. CMS is responsible for overseeing each state survey agency’s performance in ensuring quality of care in nursing homes participating in Medicare or Medicaid. Its primary oversight tools are (1) statutorily required federal monitoring surveys and (2) annual state performance reviews. Pursuant to OBRA ‘87, CMS is required to conduct annual monitoring surveys in at least 5 percent of the state-surveyed Medicare and Medicaid nursing homes in each state, with a minimum of five facilities in each state. These federal monitoring surveys can be either comparative or observational. A comparative survey involves a federal survey team conducting a complete, independent survey of a home within 2 months of the completion of a state’s survey in order to compare and contrast the findings. In an observational survey, one or more federal surveyors accompany a state survey team to a nursing home to observe the team’s performance. State performance reviews measure state survey agency compliance with seven standards: timeliness of the survey, documentation of survey results, quality of state agency investigations and decision making, timeliness of enforcement actions, budget analysis, timeliness and quality of complaint investigations, and timeliness and accuracy of data entry. These reviews replaced state self-reporting of their compliance with federal requirements. A small but significant proportion of nursing homes nationwide continue to experience quality-of-care problems—as evidenced by the almost 1 in 5 nursing homes nationwide that were cited for serious deficiencies in 2006—despite the reforms of OBRA ‘87 and subsequent efforts by CMS and the nursing home industry to improve the quality of nursing home care. Although there has been an overall decline in the numbers of nursing homes found to have serious deficiencies since fiscal year 2000, variation among states in the proportion of homes with serious deficiencies indicates state survey agencies are not consistently conducting surveys. Challenges associated with the recruitment and retention of state surveyors, combined with increased surveyor workloads, can affect survey consistency. In addition, federal comparative surveys conducted after state surveys found more serious quality-of-care problems than were cited by state surveyors. Although understatement of serious deficiencies identified by federal surveyors in five states has declined since 2004, understatement continues at varying levels across these states. 
CMS data indicate an overall decline in reported serious deficiencies from fiscal year 2000 through 2006. The proportion of nursing homes nationwide cited with serious deficiencies declined from 28 percent in fiscal year 2000 to a low of 16 percent in 2004, and then increased to 19 percent in fiscal year 2006 (see fig. 1). Despite this national trend, significant interstate variation in the proportion of homes with serious deficiencies indicates that states conduct surveys inconsistently. (App. II shows the percentage of homes, by state, cited for serious deficiencies in standard surveys across a 7-year period.) In fiscal year 2006, 6 states identified serious deficiencies in 30 percent or more of homes surveyed, 16 states found such deficiencies in 20 to 30 percent of homes, 22 found these deficiencies in 10 to 19 percent of homes, and 7 found these deficiencies in less than 10 percent of homes. For example, in fiscal year 2006, the percentage of nursing homes cited for serious deficiencies ranged from a low of approximately 2 percent in one state to a high of almost 51 percent in another state. The inconsistency of state survey findings may reflect challenges in recruiting and retaining state surveyors and increasing state surveyor workloads. We reported in 2005 that, according to state survey agency officials, it is difficult to retain surveyors and fill vacancies because state survey agency salaries are rarely competitive with the private sector. Moreover, the first year for a new surveyor is essentially a training period with low productivity. It can take as long as 3 years for a surveyor to gain sufficient knowledge, experience, and confidence to perform the job well. We also reported that limited experience levels of state surveyors resulting from high turnover rates were a contributing factor to (1) variability in citing actual harm or higher-level deficiencies and (2) understatement of such deficiencies. In addition, the implementation of CMS's nursing home initiatives has increased state survey agencies' workload. States are now required to conduct on-site revisits to ensure serious deficiencies have been corrected, promptly investigate complaints alleging actual harm on-site, and initiate off-hour standard surveys in addition to quality-of-care surveys. As a result, surveyor presence in nursing homes has increased and surveyor work hours have effectively been expanded to weekends, evenings, and early mornings. In addition, data from federal comparative surveys indicate that quality-of-care problems remain for a significant proportion of nursing homes. In fiscal year 2006, 28 percent of federal comparative surveys found more serious deficiencies than did state quality-of-care surveys. Since 2002, federal surveyors have found serious deficiencies in 21 percent or more of comparative surveys that were not cited in corresponding state quality-of-care surveys (see fig. 2). However, some serious deficiencies found by federal, but not state, surveyors may not have existed at the time the state survey occurred. In December 2005, we reported on understatement of serious deficiencies in five states—California, Florida, New York, Ohio, and Texas—from March 2002 through December 2004. We selected these states for our analysis because the percentage of their state surveys that cited serious deficiencies decreased significantly from January 1999 through January 2005.
Our analysis of more recent data from these states showed that understatement of serious deficiencies continues at varying levels. Altogether, we examined 139 federal comparative surveys conducted from March 2002 through March 2007 in the five states. Understatement of serious deficiencies decreased from 18 percent for federal comparative surveys during the original time period to 11 percent for federal comparative surveys during the period January 2005 through March 2007. Federal comparative surveys for Florida and Ohio for this most recent time period found that state surveys had not missed any serious deficiencies; however, since 2004 all five states experienced increases in the percentage of homes cited with serious deficiencies on state surveys (see app. II). Understatement of serious deficiencies varied across these five states, as the percentage of missed serious deficiencies ranged from a low of 4 percent in Ohio to a high of 26 percent in New York during the 5-year period March 2002 to March 2007. Figure 3 summarizes our analysis by state, from March 2002 through March 2007. CMS has strengthened its enforcement capabilities since OBRA '87 by, for example, implementing additional sanctions and an immediate sanctions policy for nursing homes found to repeatedly harm residents and developing a new enforcement management data system; however, several key initiatives require refinement. The immediate sanctions policy is complex and appears to have induced only temporary compliance in certain nursing homes with histories of repeated noncompliance. The term "immediate sanctions" is misleading because the policy requires only that homes be notified immediately of CMS's intent to implement sanctions, not that sanctions must be implemented immediately. Furthermore, when a sanction is implemented, there is a lag time between when the deficiency citation occurs and the sanction's effective date. In addition to the immediate sanctions policy, CMS has taken other steps that are intended to address enforcement weaknesses, but their effectiveness remains unclear. Finally, although CMS has developed a new data system, the system's components are not integrated and the national reporting capabilities are incomplete, hampering the agency's ability to track and monitor enforcement. CMS's efforts to strengthen federal enforcement policy have not deterred some homes from repeatedly harming residents. Effective January 2000, CMS implemented its double G immediate sanctions policy. The policy is complex and does not always appear to deter noncompliance, nor are the sanctions always implemented immediately. We recently reported that the immediate sanctions policy's complex rules, and the exceptions they include, allowed homes to escape immediate sanctions even if they repeatedly harmed residents. CMS acknowledged that the complexity of the policy may be an inherent limitation and indicated that it intends to either strengthen the policy or replace it with a policy that achieves similar goals through alternative methods. In addition to being complex, the policy does not always appear to deter noncompliance. We recently reported that our review of 63 homes with prior serious quality problems in four states indicated that sanctions may have induced only temporary compliance in these homes because surveyors found that many of the homes with implemented sanctions were again out of compliance on subsequent surveys.
From fiscal year 2000 through 2005, 31 of these 63 homes cycled in and out of compliance more than once, harming residents even after sanctions had been implemented, including 8 homes that did so seven times or more. During this same time period, 27 of the 63 homes were cited 69 times for deficiencies that warranted immediate sanctions, but 15 of these cases did not result in immediate sanctions. We also recently reported that the term "immediate sanctions" is misleading because the policy is silent on how quickly sanctions should be implemented and there is a lag time between the state's identification of deficiencies during the survey and when the sanction (i.e., a CMP or DPNA) is implemented (i.e., when it goes into effect). The immediate sanctions policy requires that sanctions be imposed, not implemented, immediately: a sanction is considered imposed when a home is notified of CMS's intent to implement it, and implementation generally does not occur until 15 days from the date of the notice. If during the 15-day notice period the nursing home corrects the deficiencies, no sanction is implemented. Thus, nursing homes have a de facto grace period. In addition, there is a lag time between the state's identification of deficiencies and the implementation of a sanction. Among the homes we reviewed, about 68 percent of the DPNAs implemented for double Gs during fiscal years 2000 through 2005 went into effect more than 30 days after the survey. In contrast, CMPs can go into effect as early as the first day the home was out of compliance, even if that date is prior to the survey date, because, unlike DPNAs, CMPs do not require a notice period. About 98 percent of CMPs imposed for double Gs took effect on or before the survey date. However, the deterrent effect of CMPs was diluted because CMS imposed CMPs at the lower end of the allowable range for the homes we reviewed. For example, the median per-day CMP amount imposed for deficiencies that do not cause immediate jeopardy to residents was $500 in fiscal years 2000 through 2002 and $350 in fiscal years 2003 through 2005; the allowable range is $50 to $3,000 per day. Although CMPs can be implemented closer to the date of the survey than DPNAs, the immediacy and the effect of CMPs may be diminished by (1) the significant time that can pass between the citation of deficiencies on a survey and the home's payment of the CMP and (2) the low amounts imposed, as described earlier. By statute, payment of CMPs is delayed until appeals are exhausted. For example, one home we reviewed did not pay its CMP of $21,600 until more than 2 years after a February 2003 survey had cited a G-level deficiency. This citation was a repeat deficiency: less than a month earlier, the home had received another G-level deficiency in the same quality-of-care area. This finding is consistent with a 2005 report from the Department of Health and Human Services' (HHS) Office of Inspector General that found that the collection of CMPs in appealed cases takes an average of 420 days—a 110 percent increase in time over nonappealed cases—and "consequently, nursing homes are insulated from the repercussions of enforcement by well over a year." CMS has taken additional steps intended to improve enforcement of nursing home quality requirements; however, the extent to which—or when—these initiatives will address enforcement weaknesses remains unclear. First, to ensure greater consistency in CMP amounts proposed by states and imposed by regions, CMS, in conjunction with state survey agencies, developed a grid that provides guidance for states and regions.
The CMP grid lists ranges for minimum CMP amounts while allowing for flexibility to adjust the penalties for factors such as the deficiency’s scope and severity, the care areas where the deficiency was cited, and a home’s past history of noncompliance. In August 2006, CMS completed the regional office pilot of its CMP grid but had not completed its analysis of the pilot as of April 2007. CMS plans to disseminate the final grid to states soon. Second, in December 2004, CMS expanded the Special Focus Facility program from about 100 homes to include about 135 homes. CMS also modified the program by requiring immediate sanctions for those homes that failed to significantly improve their performance from one survey to the next and by requiring termination for homes with no significant improvement after three surveys over an 18-month period. According to CMS, 11 Special Focus Facilities were terminated in fiscal year 2005 and 7 were terminated in fiscal year 2006. Despite the expansion of the program, many homes that could benefit from enhanced oversight and enforcement are still excluded from the program. For example, of the 63 homes with prior serious quality problems that we recently reviewed, only 2 were designated Special Focus Facilities in 2005, and the number increased to 4 in 2006. In March 1999, we reported that CMS lacked a system for effectively integrating enforcement data nationwide and that the lack of such a system weakened oversight. Since 1999, CMS has made progress developing such a system—ASPEN Enforcement Manager (AEM)—and, since October 1, 2004, CMS has used AEM to collect state and regional data on sanctions and improve communications between state survey agencies and CMS regional offices. CMS expects that the data collected in AEM will enable states, CMS regional offices, and the CMS central office to more easily track and evaluate sanctions against nursing homes as well as respond to emerging issues. Developed by CMS’s central office primarily for use by states and regions, AEM is one of many modules of a broader data collection system called ASPEN. However, the ASPEN modules—and other data systems related to enforcement such as the financial management system for tracking CMP collections—are fragmented and lack automated interfaces with each other. As a result, enforcement officials must pull discrete bits of data from the various systems and manually combine the data to develop a full enforcement picture. Furthermore, CMS has not defined a plan for using the AEM data to inform the tracking and monitoring of enforcement through national enforcement reports. While CMS is developing a few such reports, it has not developed a concrete plan and timeline for producing a full set of reports that use the AEM data to help assess the effectiveness of sanctions and its enforcement policies. In addition, while the full complement of enforcement data being recorded by the states and regional offices in AEM is now being uploaded to CMS’s national system, CMS does not intend to upload any historical data, which could greatly enhance enforcement monitoring efforts. Finally, AEM has quality control weaknesses, such as the lack of systematic quality control mechanisms to ensure accuracy of data entry. CMS officials told us they will continue to develop and implement enhancements to AEM to expand its capabilities over the next several years. 
However, until CMS develops a plan for integrating the fragmented systems and for using AEM data—along with other data the agency collects—efficient and effective tracking and monitoring of enforcement will continue to be hampered. As a result, CMS will have difficulty assessing the effectiveness of sanctions and its enforcement policies. CMS oversight of nursing home quality and state surveys has increased significantly through several efforts, but CMS initiatives for nursing home quality oversight continue to compete with each other, as well as with other CMS programs, for staff and financial resources. Since OBRA ‘87 required CMS to annually conduct federal monitoring surveys for a sample of nursing homes to test the adequacy of state surveys, CMS has developed a number of initiatives to strengthen its oversight. These initiatives have increased federal surveyors’ workload and the demand for resources. Greater demand on limited resources has led to queues and delays in certain key initiatives. In particular, the implementation of three key initiatives—the new Quality Indicator Survey (QIS), investigative protocols for quality-of-care problems, and an increase in the number of federal quality-of-care comparative surveys—was delayed because they compete for priority with other CMS projects. CMS has used both federal monitoring surveys and annual state performance reviews to increase its oversight of quality of care in nursing homes. Through these two mechanisms it has focused its resources and attention on (1) prompt investigation of complaints and allegations of abuse, (2) more frequent and timely federal comparative surveys, (3) stronger fire safety standards, and (4) upgrades to data systems. To ensure that complaints and allegations of abuse are investigated and addressed in accordance with OBRA ‘87, CMS has issued guidance and taken other steps. CMS guidance issued since 1999 has helped strengthen state procedures for investigating complaints. For example, CMS instructed states to investigate complaints alleging harm to a resident within 10 workdays; previously states could establish their own time frames for complaints at this level of severity. In addition, CMS guidance to states in 2002 and 2004 clarified policies on reporting abuse, including requiring notification of local law enforcement and Medicaid Fraud Control Units, establishing time frames, and citing abuse on surveys. CMS has taken three additional steps to improve its oversight of state complaint investigations, including allegations of abuse. First, in its annual state performance reviews implemented in 2002, it required that federal surveyors review a sample of complaints in each state. These reviews were done to determine whether states (1) properly categorized complaints in terms of how quickly they should be investigated, (2) investigated complaints within the time specified, and (3) properly included the results of the investigations in CMS’s database. Second, in January 2004, CMS implemented a new national automated complaint tracking system, the ASPEN Complaints and Incidents Tracking System. The lack of a national complaint reporting system had hindered CMS’s and states’ ability to adequately track the status of complaint investigations and CMS’s ability to maintain a full compliance history on each nursing home. Third, in November 2004, CMS requested state survey agency directors to self-assess their states’ compliance with federal requirements for maintaining and operating nurse aide registries. 
CMS has not issued a formal report of findings from these assessments, but in 2005 we reported that CMS officials noted that resource constraints have impeded states' compliance with certain federal requirements. As a part of this effort, CMS is also conducting a Background Check Pilot Program. The pilot program will test the effectiveness of state and national fingerprint-based background checks on employees of long-term care facilities, including nursing homes. CMS has increased the number of federal comparative surveys for both quality of care and fire safety and decreased the time between the end of the state survey and the start of the federal comparative survey. These improvements allow CMS to better distinguish between serious problems missed by state surveyors and changes in the home that occurred after the state survey. The number of comparative quality-of-care surveys nationwide increased from about 10 a year during the 24-month period prior to October 1998 to about 160 per year for fiscal years 2005 and 2006. The number of fire safety comparative surveys also increased, from 40 in fiscal year 2003 to 536 in fiscal year 2006. In addition, the average elapsed time between state and comparative quality-of-care surveys has decreased from 33 calendar days for the 64 comparative surveys we reviewed in 1999 to 26 days for all federal comparative surveys completed through fiscal year 2006. In addition to conducting more frequent federal comparative surveys for fire safety, CMS has strengthened fire safety standards. In response to a recommendation in our July 2004 report to strengthen fire safety standards, CMS issued a final rule in September 2006 requiring nonsprinklered nursing homes to install battery-powered smoke detectors in resident rooms and common areas. In addition, CMS has issued a proposed rule that would require all nursing homes to be equipped with sprinkler systems and, after reviewing public comment, intends to publish a final version of the rule and stipulate an effective date for all homes to comply. CMS has pursued important upgrades to data systems, expanded dissemination of data and information, and addressed accuracy issues in the MDS, in addition to implementing complaint and enforcement systems. One such upgrade increased state and federal surveyors' access to OSCAR data. CMS now uses OSCAR data to produce periodic reports to monitor both state and federal survey performance. Some reports, such as those on survey timeliness, are used during state performance reviews, while others are intended to help identify problems or inconsistencies in state survey activities and the need for intervention. In addition, CMS created a Web-accessible software program called Providing Data Quickly (PDQ) that allows regional offices and state survey agencies easier access to standard OSCAR reports, including one that identifies the homes that have repeatedly harmed residents and meet the criteria for imposition of immediate sanctions. Since launching its Nursing Home Compare Web site in 1998, CMS has expanded its dissemination of information to the public on individual nursing homes participating in Medicare or Medicaid. In addition to data on any deficiencies identified during standard surveys, the Web site now includes data on the results of complaint investigations, information on nursing home staffing levels, and quality measures, such as the percentage of residents with pressure sores.
On the basis of our recommendations, CMS is now reporting fire safety deficiencies on the Web site, including information on whether a home has automatic sprinklers to suppress a fire, and may include information on impending sanctions in the future. However, CMS continues to address ongoing problems with the accuracy and reliability of some of the underlying data. For example, CMS has evaluated the validity of quality measures and staffing information it makes available on the Web, and it has removed or excluded questionable data. In addition to being the basis for the quality measures reported on Nursing Home Compare, MDS data are used to develop patient care plans, to adjust Medicare nursing home payments as well as Medicaid payments in some states, and to assist with quality oversight. Thus the accuracy of the MDS has implications for both the identification of quality problems and the level of nursing home payments. OBRA '87 required nursing homes that participate in the Medicare and Medicaid programs to perform periodic resident assessments; these resident assessments are known as the MDS. In February 2002, we assessed federal government efforts to ensure the accuracy of the MDS data. We reported that on-site reviews of MDS data that compared the MDS to supporting documentation were a very effective method of assessing the accuracy of the data. However, CMS's efforts to ensure the accuracy of the underlying MDS data were too reliant on off-site reviews, which were limited to documentation reviews or data analysis. To ensure the accuracy of the MDS, CMS signed a new contract for on-site reviews in September 2005; these reviews are ongoing. CMS initiatives for nursing home quality oversight continue to compete with each other, as well as with other CMS programs, for staff and financial resources. Greater nursing home oversight and growth in the number of Medicare and Medicaid providers have created increased demand for staff and financial resources. Greater demand on limited resources has led to queues and delays in key initiatives. Three key initiatives—the new Quality Indicator Survey (QIS), investigative protocols for quality-of-care problems, and an increase in the number of federal quality-of-care comparative surveys—were delayed because they compete for priority with other CMS projects. The implementation of the QIS, in process for over 8 years, continues to encounter delays because of a lack of resources. The QIS is a two-stage, data-driven, structured survey process intended to systematically target potential problems at nursing homes by using an expanded sample and structured interviews to help surveyors better assess the scope of any identified deficiencies. CMS is currently concluding a five-state demonstration of the QIS. A preliminary evaluation by CMS indicates that surveyors have spent less time in homes that are performing well, deficiency citations were linked to more defensible documentation, and serious deficiencies were more frequently cited in some demonstration states. However, CMS officials recently reported that resource constraints in fiscal year 2007 threaten the planned expansion of this process beyond the five demonstration states. Although 13 states applied to transition to the QIS, resource limitations may prevent this expansion. In addition, at least $2 million is needed over 2 years to develop a production-quality software package for the QIS.
Since hiring a contractor in 2001 to facilitate convening expert panels for the development and review of new investigative protocols, CMS has implemented eight sets of investigative protocols. In December 2005, we reported that these investigative protocols provided surveyors with detailed interpretive guidance and ensured greater rigor in on-site investigations of specific quality-of-care areas, such as pressure sores, incontinence, and medical director qualifications. However, the issuance of additional protocols was slowed because of lengthy consultation with experts and prolonged delays related to internal disagreement over the structure of the process. CMS has since returned to the traditional revision process, even though agency staff believe that the expert panel process produced a high-quality product. After issuing several protocols in 2006, CMS plans to issue two additional protocols. Although CMS hired a contractor in 2003 to further increase the number of federal quality-of-care comparative surveys, it stopped funding this initiative in fiscal year 2006. The agency reallocated the funds to help state survey agencies meet the increased workload resulting from growth in the number of other Medicare providers. About 20 years ago, significant attention from the Special Committee on Aging, the Institute of Medicine, and others served as a catalyst to focus national attention on nursing home quality issues, culminating in the nursing home reform provisions of OBRA '87. Beginning in 1998, the Committee again served as a catalyst to focus national attention on the fact that the task was not complete; through a series of hearings, it held the various stakeholders publicly accountable for the substandard care reported in a small but significant share of nursing homes nationwide. Since then, in response to many GAO recommendations and on its own initiative, CMS has taken many important steps and invested resources to respond in a timelier, more rigorous, and more consistent manner to identified problems and to improve its oversight process for the care of vulnerable nursing home residents. This is admittedly no small undertaking, given the large number and diversity of stakeholders and caregivers involved at the federal, state, and provider levels. Nevertheless, despite the passage of time and the level of investment and effort, the work begun after OBRA '87 is still not complete. It is important to continue to focus national attention on, and ensure public accountability for, homes that harm residents. With these ongoing efforts, the momentum of earlier initiatives can be sustained and perhaps even enhanced, and the quality of care for nursing home residents can be secured, as intended by Congress when it passed this legislation. Mr. Chairman, this concludes my prepared remarks. I would be pleased to respond to any questions that you or other Members of the Committee may have. For future contacts regarding this testimony, please contact Kathryn G. Allen at (202) 512-7118 or at allenk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Walter Ochinko, Assistant Director; Kaycee M. Glavich; Leslie V. Gordon; K. Nicole Haeberle; Daniel Lee; and Elizabeth T. Morrison made key contributions to this statement.
Table 2 summarizes our recommendations from 11 reports on nursing home quality and safety, issued from July 1998 through March 2007; CMS’s actions to address weaknesses we identified; and the implementation status of CMS’s initiatives as of April 2007. The recommendations are grouped into four categories—surveys, complaints, enforcement, and oversight. If a report contained recommendations related to more than one category, the report appears more than once in the table. For each report, the first two numbers identify the fiscal year in which the report was issued. For example, HEHS-98-202 was released in 1998. The Related GAO Products section at the end of this statement contains the full citation for each report. Of our 42 recommendations, CMS has fully implemented 18, implemented only parts of 7, is taking steps to implement 10, and declined to implement 7. In order to identify trends in the percentage of nursing homes cited with actual harm or immediate jeopardy deficiencies, we analyzed data from CMS’s OSCAR database for fiscal years 2000 through 2006 (see table 3). Because surveys are conducted at least every 15 months (with a required 12-month statewide average), it is possible that a home was surveyed twice in any time period. To avoid double counting of homes, we included only homes’ most recent survey from each period. Nursing Homes: Efforts to Strengthen Federal Enforcement Have Not Deterred Some Homes from Repeatedly Harming Residents. GAO-07-241. Washington, D.C.: March 26, 2007. Nursing Homes: Despite Increased Oversight, Challenges Remain in Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, D.C.: December 28, 2005. Nursing Home Deaths: Arkansas Coroner Referrals Confirm Weaknesses in State and Federal Oversight of Quality of Care. GAO-05-78. Washington, D.C.: November 12, 2004. Nursing Home Fire Safety: Recent Fires Highlight Weaknesses in Federal Standards and Oversight. GAO-04-660. Washington D.C.: July 16, 2004. Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003. Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002. Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002. Nursing Homes: More Can Be Done to Protect Residents from Abuse. GAO-02-312. Washington, D.C.: March 1, 2002. Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002. Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. 
California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998. | With the Omnibus Budget Reconciliation Act of 1987 (OBRA '87), Congress responded to growing concerns about the quality of care that nursing home residents received by requiring reforms in the federal certification and oversight of nursing homes. These reforms included revising care requirements that homes must meet to participate in the Medicare or Medicaid programs, modifying the survey process for certifying a home's compliance with federal standards, and introducing additional sanctions and decertification procedures for noncompliant homes. GAO's testimony addresses its work in evaluating the quality of nursing home care and the enforcement and oversight functions intended to ensure high-quality care, the progress made in each of these areas since the passage of OBRA '87, and the challenges that remain. GAO's testimony is based on its prior work; analysis of data from the Centers for Medicare & Medicaid Services' (CMS) On-Line Survey, Certification, and Reporting system (OSCAR), which compiles the results of state nursing home surveys; and evaluation of federal comparative surveys for selected states (2005-2007). Federal comparative surveys are conducted at nursing homes recently surveyed by each state to assess the adequacy of the state's surveys. The reforms of OBRA '87 and subsequent efforts by CMS and the nursing home industry to improve the quality of nursing home care have focused on resident outcomes, yet a small but significant share of nursing homes nationwide continue to experience quality-of-care problems. In fiscal year 2006, almost one in five nursing homes was cited for serious deficiencies, those that caused actual harm or placed residents in immediate jeopardy. While this rate has fluctuated over the last 7 years, GAO has found persistent variation in the proportion of homes with serious deficiencies across states. In addition, although the understatement of serious deficiencies--that is, when federal surveyors identified deficiencies that were missed by state surveyors--has declined since 2004 in states GAO reviewed, it has continued at varying levels. CMS has strengthened its enforcement capabilities since OBRA '87 in order to better ensure that nursing homes achieve and maintain high-quality care, but several key initiatives require refinement. CMS has implemented additional sanctions authorized in the legislation, established an immediate sanctions policy for homes found to repeatedly harm residents, and developed a new enforcement management data system. However, the immediate sanctions policy is complex and appears to have induced only temporary compliance in some homes with a history of repeated noncompliance. Furthermore, CMS's new data system's components are not integrated and national reporting capabilities are incomplete, which hamper CMS's ability to track and monitor enforcement. CMS oversight of nursing home quality has increased significantly, but CMS initiatives continue to compete for staff and financial resources.
Attention to oversight has led to greater demand on limited resources, and to queues and delays in certain key initiatives. For example, a new survey methodology has been in development for over 8 years and resource constraints threaten the planned expansion of this methodology beyond the initial demonstration states. Significant attention from the Special Committee on Aging, the Institute of Medicine, and others served as a catalyst to focus national attention on nursing home quality issues, culminating in the nursing home reform provisions of OBRA '87. In response to many GAO recommendations and at its own initiative, CMS has taken many important steps; however, the task of ensuring high-quality nursing home care for all residents is not complete. In order to guarantee that all nursing home residents receive high-quality care, it is important to maintain the momentum begun by the reforms of OBRA '87 and continue to focus national attention on those homes that cause actual harm to vulnerable residents. |
SFA manages and administers student financial assistance programs authorized under title IV of the Higher Education Act of 1965, as amended (HEA). These postsecondary programs include the William D. Ford Federal Direct Loan Program (FDLP--often referred to as the "Direct Loan"), the Federal Family Education Loan Program (FFELP--often referred to as the "Guaranteed Loan"), the Federal Pell Grant Program, and campus-based programs. Annually, these programs together provide about $50 billion in student aid to approximately 8 million students and their families. As a consequence, the student financial aid data exchange environment is large and complex. It includes about 5,300 schools authorized to participate in the title IV program, 4,100 lenders, 36 guaranty agencies, as well as other federal agencies. Currently, SFA oversees or directly manages approximately $220 billion in outstanding loans representing about 100 million borrowers. Figure 1 provides an overview of this environment. During the past three decades, the Department of Education has created many nonintegrated information systems to support its growing number of student financial aid programs. In many cases, these systems—maintained and operated by a host of different contractors, on multiple platforms— are unable to easily exchange timely, accurate, and useful information needed to ensure the proper management and oversight of various student aid programs. Table 1 lists SFA’s current inventory of major systems. Beginning in 1992, title IV student financial aid systems integration was the subject of heightened congressional concern. The 1992 HEA amendments required the department to centralize data on student loan indebtedness by integrating databases containing student financial aid program information. In response to this mandate, in January 1993 Education awarded a 5-year, $39-million contract for development and maintenance of NSLDS. The system was to provide information on students across programmatic boundaries, yet problems persisted. Since 1995, because of concerns over Education’s vulnerabilities to losses due to fraud, waste, abuse, and mismanagement, student financial aid has been included on our high-risk list. Studies had shown that Education had used inadequate management information systems containing unreliable data, and that inaccurate loan data were being loaded into NSLDS. In 1997, 4 years after the initiation of the NSLDS contract, data inconsistencies and errors across systems, such as a student’s enrollment status or the amount of loan indebtedness, continued to have a negative impact on the student’s ability to receive aid. Education still lacked an accurate, integrated system for student financial aid data; the nonintegrated systems would sometimes provide conflicting information to the department’s financial aid partners (schools, lenders, guaranty agencies). The department had opted to establish NSLDS as a data repository rather than an integrated database; this meant that while the system could receive and store information from other title IV systems, the lack of uniformity in how the individual systems stored their information— no common student or institutional identifiers or data standards— complicated data-matching among systems. Hence, NSLDS could not be effectively updated (or update other systems) without expensive data conversion programs. 
As a result, data contained in other systems, operated by a variety of contractors, were often in conflict with data stored in NSLDS due to differences in the timing of updates among the multiple data providers. As also reported in 1997, large amounts of redundant student financial aid data generated by schools, lenders, guaranty agencies, and several internal department systems, were being stored in standalone databases, thereby increasing the cost to administer the various title IV programs. We concluded that these data exchange and storage problems, as well as other program operation and monitoring difficulties, were partly related to the lack of a fully functional integrated database covering all title IV student financial aid programs. In 1998, in part to address these and other longstanding management weaknesses, Congress amended HEA and established SFA as the federal government’s first performance-based organization (PBO). Under the PBO concept, SFA is a discrete organizational unit within the Department of Education, and focuses solely on programmatic—rather than policy— issues, which remain the responsibility of the Secretary of Education. Thus, upon being designated a PBO, SFA was expected to shift from a focus on adherence to required processes to a focus on customers and program results. Moreover, in establishing SFA as a PBO, Congress gave SFA specific personnel hiring authority, including the ability to appoint up to 25 technical and professional employees without regard to provisions governing appointments to the competitive service. Also in conjunction with its PBO status, SFA can seek waivers from governmentwide regulations, policies, and procedures (e.g., acquisition, human capital, and procurement). This flexibility is intended to give SFA greater freedom in achieving their performance goals while maintaining accountability for operational aspects of federal student aid programs. In September 1999, under this PBO procurement authority, SFA hired Accenture (formerly Andersen Consulting) as its "modernization partner," to help it carry out its Modernization Blueprint. Accenture’s role is to provide leadership of critical planning activities essential to the success of SFA’s modernization. As a result of these and other events between 1992 and 1999, the management structure of SFA’s postsecondary education activities was completely reorganized. Under the partnership between the PBO and Accenture, a new systems integration strategy emerged, focusing on the use of middleware software technology to achieve database integration and improve access to and use of SFA’s information. Table 2 lists key events and milestones during the past decade affecting Education’s student financial assistance programs and the systems that support them. Hundreds of organizations around the world have found successful technology integration solutions through the use of middleware, sharing data across different information systems and databases. Middleware is a type of software that enables programs and databases located on different systems to work together as if they all resided in a single database. Often organizations use middleware together with Web-based applications to present users with an integrated view of relevant data over the Internet, without having to develop new systems or database software. 
The middleware acts as an intermediary that mines data from existing databases and performs any necessary data transformation so that the existing information can be quickly compiled and presented to the user. For instance, middleware is used heavily in the banking industry, particularly for those institutions involved in numerous mergers and acquisitions, as it allows both banks to keep their existing systems, programs, and databases essentially unchanged, while providing users such as branch personnel with a composite view of both customer databases. We contacted three major financial institutions that use the same middleware product adopted by SFA: IBM's MQSeries. According to these companies, as with SFA, the driving force behind the acquisition of the middleware technology was multiple, incompatible platforms. Overall, banking industry information technology officials with whom we spoke were pleased with the technical capabilities of middleware, but said that the major issue in successfully implementing and maintaining a middleware-based systems environment was retaining skilled employees—whether in-house or via an external contract. According to SFA's chief operating officer, by using the banking industry as a benchmark for establishing the viability of the middleware approach, SFA was better able to identify the strengths and weaknesses of that approach. He saw the banking industry as analogous to SFA in that it had to successfully address systems interoperability problems and provide users with an integrated data view following mergers. Similarly, we previously noted a gap between the services available to bank customers and those available to students and their families—such as the ability to view complete account data and make account changes worldwide, across systems, through automated teller machines. SFA's initiative is in its early stages, and as of July 1, 2001, SFA had made the initial system modifications necessary to use the middleware technology on five systems. In addition, SFA's contract programmers have been developing software using extensible markup language (XML)—now becoming an industry standard—that will eventually standardize student grant and loan origination and disbursement requests into a single common record format for all aid programs. Moreover, the enterprise application integration architecture plans and documents are in place that are conducive to the IBM MQSeries middleware product line being used to facilitate data integration across SFA's different computing platforms. The first use of middleware and XML together for loan originations and disbursements is expected in March 2002, when a single process for delivering Direct Loan and Pell Grant aid to students, called Common Origination and Disbursement (COD), is scheduled for implementation in time for the 2002-2003 school year. In March, SFA plans to give at least 50 schools that participated in COD testing the option of submitting data via its new common record format for Direct Loans and Pell Grants. The COD is designed to provide a consistent process—via the common record—for requesting, reporting, and reconciling Pell Grants and Direct Loans. Currently, schools must enter, submit, and reconcile data—including the name, address, and other pertinent information for the same student—separately and in different formats for each program, a redundant process that can be quite time-consuming. XML is a meta-markup language that provides a format for describing structured data; it is designed to enable the exchange of information between different applications and data sources on the World Wide Web and has been standardized by the World Wide Web Consortium, an organization that develops common protocols to promote the evolution and interoperability of the Web.
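To make the common record concept concrete, the brief sketch below builds a single XML record carrying both Pell Grant and Direct Loan data for one student; the element and field names are illustrative assumptions, not the actual COD record specification.

```python
# Minimal sketch of an XML "common record" (illustrative element and field
# names only; this is not the actual COD record specification). One record
# carries both Pell Grant and Direct Loan data for a student, replacing the
# separate, differently formatted submissions schools make for each program.
import xml.etree.ElementTree as ET

def build_common_record(student):
    record = ET.Element("CommonRecord", attrib={"schoolId": student["school_id"]})
    person = ET.SubElement(record, "Student", attrib={"id": student["student_id"]})
    ET.SubElement(person, "Name").text = student["name"]

    for award in student["awards"]:  # one Award block per aid program
        elem = ET.SubElement(record, "Award", attrib={"program": award["program"]})
        ET.SubElement(elem, "Amount").text = str(award["amount"])
        ET.SubElement(elem, "DisbursementDate").text = award["disbursement_date"]
    return ET.tostring(record, encoding="unicode")

sample = {
    "school_id": "00123400",
    "student_id": "S001",
    "name": "Jane Q. Student",
    "awards": [
        {"program": "PellGrant", "amount": 2000, "disbursement_date": "2002-09-01"},
        {"program": "DirectLoan", "amount": 3500, "disbursement_date": "2002-09-01"},
    ],
}

print(build_common_record(sample))
```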
For schools that do not use the new common record format, SFA expects the middleware to convert incoming records into the common record format and outgoing records back to the schools in their current record format. Thus, if SFA's middleware approach is operationally successful, it will allow schools to use either method; those schools that do not use the new common record format could migrate to the common record on timetables that are more feasible for their individual circumstances. Figure 2 illustrates the first planned implementation of COD for Direct Loan and Pell Grant originations and disbursements using either the common record format or middleware. According to the Modernization Blueprint, COD will ultimately provide the 5,300 schools that participate in the title IV student financial aid programs with a single process for all aid origination and disbursement. This is expected to create a system that facilitates close to real-time sharing of data across all of SFA's partners and to establish a platform that supports integrated technical and functional customer service for schools across all programs. SFA's Modernization Blueprint also outlines key projects that are scheduled for implementation over the course of several years. Table 3 lists some of them. In adopting this approach to better integration and utilization of its existing data on student loans and grants, SFA may be able to address, at least in part, long-standing database integration problems. Such problems have contributed to slow and inconvenient loan servicing and management, as well as weak internal controls. SFA fully expects that this solution will provide improved customer service by permitting its eleven major systems to operate more cohesively in the near future and help reduce the total number of needed systems over the long term. Among the problems SFA hopes to eliminate is the cumbersome process for gaining access to the various SFA system databases. This process sometimes requires users, such as an educational institution's financial aid or accounting staff, to continually log in and out of different systems for related aid information on students for each program. These individuals must sometimes use a different school identifier and password to gain access to student information for each SFA program, and often do not have the ability to retrieve necessary information when they do gain access. As we noted in 1995, this internal control problem of not having access to current, accurate information sometimes led to loans and grants being improperly awarded. SFA expects that its middleware product will enable entities to gradually upgrade or migrate to new systems and databases while maintaining a consistent view for the user. That is, middleware can enable SFA to realize short-term, user-level integration, while enabling it to gradually improve its older systems over time. In short, by adopting a middleware-based strategy, SFA expects that it can continue operating some of its existing systems, applications, and databases, but in a more homogeneous fashion. Moreover, according to SFA's chief operating officer, the alternative of developing a new, large, central database or student financial aid system was less suitable because of the cost and time involved in database redesign and data format conversion.
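As a rough illustration of the user-level integration described above (with hypothetical table and field names rather than SFA's actual systems), the sketch below shows a thin integration layer that pulls related records from two independent data stores and returns a single composite view per student, leaving both underlying databases unchanged.

```python
# Sketch of middleware-style, user-level integration (hypothetical tables and
# field names; not SFA's actual systems). Two independent databases stand in
# for separate loan and grant systems; a thin integration layer pulls related
# records from each and merges them into one composite view per student.
import sqlite3

loan_db = sqlite3.connect(":memory:")
loan_db.execute("CREATE TABLE loans (student_id TEXT, program TEXT, balance REAL)")
loan_db.execute("INSERT INTO loans VALUES ('S001', 'DirectLoan', 5500.0)")

grant_db = sqlite3.connect(":memory:")
grant_db.execute("CREATE TABLE grants (borrower TEXT, award REAL)")  # different schema
grant_db.execute("INSERT INTO grants VALUES ('S001', 2000.0)")

def composite_view(student_id):
    """Query each system separately and merge the results for the user."""
    loans = loan_db.execute(
        "SELECT program, balance FROM loans WHERE student_id = ?", (student_id,)
    ).fetchall()
    grants = grant_db.execute(
        "SELECT award FROM grants WHERE borrower = ?", (student_id,)
    ).fetchall()
    return {
        "student_id": student_id,
        "loans": [{"program": p, "balance": b} for p, b in loans],
        "grants": [{"award": a} for (a,) in grants],
    }

print(composite_view("S001"))
```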
Further, he expects middleware to be part of SFA's long-term solution for integrating databases under its Modernization Blueprint and, through 2004, to allow the eventual retirement of several existing systems. Finally, he expects this approach to allow SFA to be more responsive to customer needs. Figure 3 shows how the two alternative approaches differ in providing data to users. The experiences of other organizations have demonstrated that critical skill shortages must continually be addressed when using middleware as an integration solution. According to middleware users, the technology requires experienced, highly skilled programmers with a broad knowledge of the entire environment in order to maintain seamless data exchanges. Industry officials cite the lack of sufficient numbers of programmers with the needed technical skills. According to IBM representatives, extensive technical training is needed before an experienced programmer can become effective using its middleware product. Banking officials confirm that finding people who are highly skilled in the use of this technology is difficult. For example, according to a senior official at a major bank, an experienced, certified middleware systems programmer can command over $100,000 annually, making retention of this type of talent challenging even for this bank in today's competitive information technology marketplace. SFA management recognizes that it will face the same inherent human capital issues as these organizations and has tried to address them by leveraging experiences from the banking industry and by acquiring contracted expertise. In addressing the human capital skills issue associated with successful middleware implementation, SFA will count on the help of its modernization partner, which has substantial experience in implementing middleware solutions in the banking industry, and on programmers from the middleware product's vendor (IBM). According to officials of another federal agency using MQSeries, when they originally tried to develop similar capabilities in-house, they were later forced to switch to the commercial product because of technical difficulties in maintaining the system on their own. An MQSeries users group also exists; other federal agencies using the MQSeries include the Customs Service, the Department of Veterans Affairs, and the Air Force, from which SFA may be able to borrow knowledge and technical expertise. As has been the case with several other organizations, a middleware integration strategy is likely a viable technology alternative for SFA in addressing its long-standing systems integration problems. SFA recognizes the human capital issues that middleware presents, and is preparing to meet them. Although the effort is still in its early stages, if implemented properly, middleware appears to be a reasonable approach that could result in improved user-level systems integration, while enabling SFA to gradually retire many of its remaining systems over time. In commenting on our draft report, the Deputy Secretary sought clarification on whether our analysis of SFA's actions to use a middleware integration strategy addressed the full range of issues that we and the Education Inspector General had raised in past reports regarding SFA's systems integration problems and the rationale for SFA's programs being included on our high-risk list.
Specifically, he suggested that we clarify whether the middleware strategy adequately addressed our earlier concerns about SFA’s lack of an architecture, the costs associated with maintaining nine or more separate information systems, and the need for a long-term integrated SFA database. Additionally, the Deputy Secretary wanted us to clarify whether the new strategy introduced any new problems related to costs, increased risk of system breakdown, or introduction of errors into the current systems environment. While these are important issues, the focus of our review was to provide information on the use of the middleware technology and its viability as a means of integrating student financial aid information. As we note in the report, SFA’s middleware integration work is still in development and is moving into very early stages of implementation. Although preliminary testing and pilot efforts involving the middleware data integration capability have been positive, the actual versus expected benefits will not be known or measured until planned projects and activities become operational. We have concluded that SFA’s middleware strategy itself appears to be a reasonable technical approach for improving data integration. The Deputy Secretary also asked whether we took into account several previous reports in which we stated that the department needed a sound systems architecture before embarking on systems integration. We note that SFA has devised an enterprise-wide systems architecture in response to our 1997 recommendation and that SFA provided us with requisite technical documents that explained the guiding architecture on which it is building its middleware strategy. However, the scope of our review did not include an assessment of the adequacy of departmentwide architecture implementation and usage. Finally, the Deputy Secretary raised several technical questions related to the report’s graphics, terminology, and descriptions. We have clarified or modified these points where appropriate. Education’s written comments, along with our responses, are reproduced in appendix II. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We are sending copies of this report to the Secretary of Education, Education’s Office of Student Financial Assistance’s chief operating officer, the Director of the Office of Management and Budget, and appropriate congressional committees. Copies will also be available to other interested parties upon request. This report will be available on our Web site at www.gao.gov. If you or your offices have questions regarding this report, please call me at (202) 512-6257 or David B. Alston, Assistant Director, at (202) 512-6369. We can also be reached by e-mail at mcclured@gao.gov and alstond@gao.gov, respectively. Other individuals making key contributions to this report included Nabajyoti Barkakati, Michael P. Fruitman, and Glenn R. Nichols. Our objectives were to provide information on the use of middleware technology, and to evaluate the viability of SFA’s approach to using it to integrate student financial aid information. To achieve these objectives, we examined SFA documents, including the Modernization Blueprint and updates and information technology target architectures. 
We assessed how critical information technology integration issues are being addressed at SFA, including the merits and risks of the blueprint, and assessed agency documentation to determine whether SFA’s systems environment lends itself to a technically feasible middleware solution. In addition, we analyzed several technical documents on the general function and use of middleware, and interviewed officials from SFA and Accenture, its modernization partner. We also interviewed officials from the Advisory Committee on Student Financial Assistance to obtain their perspective on SFA’s use of middleware. Further, we spoke with officials from IBM, the developer of the middleware product (MQSeries) being implemented by SFA. We analyzed technical documents describing the operation of the MQSeries in general, as well as design documents addressing the implementation of this middleware product at SFA. To independently document the success of this middleware product in the public and private sectors, we consulted with users from the U.S. Customs Service, Bank of America, Chase Manhattan Bank, and First Union Bank. We analyzed documents relating to the implementation of this middleware product in two of these organizations. We performed this work at SFA headquarters and Accenture offices in Washington, D.C.; IBM’s office in McLean, Virginia, the U.S. Customs Service office in Springfield, Virginia; and Bank of America offices in Charlotte, North Carolina. We also conducted telephone interviews with officials from Chase Manhattan Bank and First Union Bank. Our work was performed from February through August 2001, in accordance with generally accepted government auditing standards. 1. We changed the report title to clarify that the scope of our review was focused on determining if SFA’s middleware systems integration strategy was a viable approach. 2. Given the scope of our review, we believe the report adequately addresses this concern. In drafting our report, we took into consideration previous GAO, Education’s Inspector General reports, and the department’s internal reports, particularly those relating to Education’s lack of a guiding enterprise architecture and the department’s pressing need to integrate its student financial aid systems and databases. For instance, we have already credited the department with defining a departmentwide systems architecture in response to our 1997 report on this topic. Further, in reviewing SFA’s middleware strategy, we confirmed that enterprise application integration architecture plans and documentation existed and was conducive to the IBM MQSeries middleware product line being used to facilitate data integration across SFA’s current computing platforms. However, the scope of our work did not permit us to assess the adequacy of departmentwide architecture implementation and usage issues. 3. The commitment indicated by the Secretary and SFA to resolve longstanding problems mentioned in our previous reports can go a long way towards providing the catalyst to solve many of Education’s data integrity problems. These problems have contributed to the inclusion of SFA’s programs on GAO’s high-risk list. However, neither the department’s or SFA’s efforts to address critical data quality and internal control issues related to its high-risk designation were included in the scope of this review. We do note, however, that SFA expects to reduce the number of total systems needed in the long-term in conjunction with its middleware implementation. 4. 
Past problems as well as rationale for better integrated program and financial data across SFA’s existing databases are explained in the background section of our report rather than in this brief opening paragraph. As such, we made no changes to the report. While middleware provides a means for better user integration, sound business practices and disciplined internal management controls will be needed for any organization to achieve mission improvements and financial benefits from its information systems investments. 5. Our discussion of integration in this context is based upon SFA’s most recently released Modernization Blueprint, which states that SFA’s task is to create: " . . . an integrated enterprise that meets our PBO goals of improved customer and employee satisfaction, and reduced unit costs. Part of that task is to modernize key systems and processes to create an enterprise that meets our customers needs. We can view some of these key processes and systems as major pieces of an overall integrated solution.” Given SFA’s description of the goals they wish to achieve through integration, we did not modify our report. 6. We agree with the descriptions of the additional shortcomings of NSLDS, but timeliness of updates remains a major issue. The objective of our review also was not to review and critique the problems of NSLDS as we did previously, rather, to focus on looking forward and assessing middleware as a suitable technology solution in the future for integrating SFA’s systems. Thus, we only provided one example of the negative consequences stemming from the lack of integration but provided numerous references to previous reports that describe these and other problems in greater detail. Accordingly, we did not modify our report. 7. SFA’s Chief Operating Officer (COO) clearly considers the use of middleware part of a long-term systems integration solution. Therefore, we did not modify our report. 8. We concur that the scope of our review was to determine whether middleware technology is viable and feasible in SFA’s system environment and that other issues were not covered in our work. Accordingly, we did not modify our report. 9. We updated our report to reflect that SFA now plans to have about 14 systems connected to middleware by December 2001. 10. We updated our report to clarify that some legacy systems, according to SFA’s Modernization Blueprint, will not have to be modified for middleware because some will be retired in the future. 11. We modified our report to reflect that the first planned use of middleware for Direct Loan and Pell Grant originations and disbursement would occur next year. 12. We updated our report to reflect the change in the implementation date. 13. We did not modify our report. Education’s Central Automated Processing System (EDCAPS) included in figure 1 and table 1 in our report is not the same as the Central Data System (CDS), which has been retired. EDCAPS is the primary accounting system for the department. The department’s Management Improvement Team Accomplishments document, dated October 30, 2001, describes EDCAPS as Education’s “financial records and accounting system.” CDS is not discussed in our report. 14. We modified our report to reflect that the PBO was established in 1998. 15. We modified our report to clarify responsibilities of SFA under the PBO legislation. 16. We modified our report to indicate SFA’s participation. 17. We believe the appointment of SFA’s COO was a relevant milestone. 
Several sections of the HEA Amendments of 1998 creating the PBO address the functions of the COO, including the requirement to have a PBO performance plan. Likewise, the selection of the SFA modernization partner was a relevant milestone, especially in light of the important role that the partner plays in SFA's systems modernization, which is described in the report. Therefore, we did not modify our report. 18. We modified our report to clarify that there are additional users of the MQSeries product. 19. We did not modify our report. The figure displays only the COD process, which will initially include only Direct Loan and Pell Grant origination and disbursements. All Direct Loan and Pell Grant funds are federal and ultimately come from the U.S. Treasury. We purposely omitted the Federal Reserve and other intermediary systems for simplicity. Schools will follow the COD process described in the figure when originating loan and grant applications on behalf of students. 20. We modified our report to better reflect that middleware will also convert disbursement records back to each school's current record format. 21. According to SFA's most recently released Modernization Blueprint (page 13), the COD is expected to be able to provide all schools that participate in title IV financial aid programs with a single process for aid origination and disbursement. SFA staff confirmed the accuracy of this statement. Therefore, we did not modify our report. 22. According to SFA's Modernization Blueprint, COD is expected to be capable of handling all aid distribution. The first use will be for Direct Loan and Pell Grant origination and distribution. Therefore, we did not modify our report. 23. We did not modify our report. The internal control problems identified in the 1995 report focused on the need to have timely, accurate student eligibility data. Also, see comment 6. 24. As noted in the report, we are attributing the choice between two options -- developing a large central database or maintaining several integrated databases using middleware -- to SFA's COO. By 2004, SFA does expect to retire several existing systems that should result in fewer databases than currently exist. 25. We modified our report to note that SFA plans to retire systems through 2004. 26. We did not assess the capabilities of Accenture, as this was not included in the scope of our work. Therefore, we did not modify our report. 27. As noted, we point out that SFA is attempting to address its human capital challenges associated with the use of the new middleware technology by leveraging the experiences from the banking industry as well as acquiring recognized contractor expertise. These are prudent steps, but the adequacy of specific measures being taken both by SFA management and its modernization partner in addressing workforce management and planning needs goes beyond the scope of this review; therefore, we did not modify the report. 28. We modified the report to delete any reference to the availability of 24-hour customer support. Financial Management: Internal Control Weaknesses Leave Department of Education Vulnerable to Improper Payments (GAO-01-585T, April 3, 2001). High-Risk Series: An Update (GAO-01-263, January 2001). High-Risk Series: An Update (GAO/HR-99-1, January 1999). Student Financial Aid Information: Systems Architecture Needed to Improve Programs' Efficiency (GAO/AIMD-97-122, July 29, 1997). High-Risk Program: Information on Selected High-Risk Areas (GAO/HR-97-30, May 1997).
Department of Education: Multiple, Nonintegrated Systems Hamper Management of Student Financial Aid Programs (GAO/T-HEHS/AIMD-97-132, May 15, 1997). High-Risk Series: Student Financial Aid (GAO/HR-97-11, February 1997). Reporting of Student Loan Enrollment Status (GAO/HEHS-97-44R, February 6, 1997). Department of Education: Status of Actions to Improve the Management of Student Financial Aid (GAO/HEHS-96-143, July 12, 1996). Student Financial Aid: Data Not Fully Utilized to Identify Inappropriately Awarded Loans and Grants (GAO/T-HEHS-95-199, July 12, 1995). Student Financial Aid: Data Not Fully Utilized to Identify Inappropriately Awarded Loans and Grants (GAO/HEHS-95-89, July 11, 1995). Federal Family Education Loan Information System: Weak Computer Controls Increase Risk of Unauthorized Access to Sensitive Data (GAO/AIMD-95-117, June 12, 1995). High-Risk Series: Student Financial Aid (GAO/HR-95-10, February 1995). Financial Audit: Federal Family Education Loan Program's Financial Statements for Fiscal Years 1993 and 1992 (GAO/AIMD-94-131, June 30, 1994). Financial Management: Education's Student Loan Program Controls Over Lenders Need Improvement (GAO/AIMD-93-33, September 9, 1993). Financial Audit: Guaranteed Student Loan Program's Internal Controls and Structure Need Improvement (GAO/AFMD-93-20, March 16, 1993). Department of Education: Management Commitment Needed to Improve Information Resources Management (GAO/IMTEC-92-17, April 20, 1992). | Although the Department of Education spent millions of dollars to modernize and integrate its nonintegrated financial aid systems during the past 10 years, these efforts have met with limited success. Recently, Education's Office of Student Financial Assistance (SFA) began using a software approach known as middleware to provide users with a more complete and integrated view of information in its many databases. In selecting middleware, SFA has adopted a viable, industry-accepted means for integrating and utilizing its existing data on student loans and grants.
To meet its human capital needs, SFA has solicited the help of a private sector "modernization partner" with experience in implementing and managing middleware solutions--particularly in the financial industry--and has also chosen to use a leading middleware software product. |
After about 30 years of relatively steady growth, USPS's expenses began consistently exceeding revenues in fiscal year 2007 (see fig. 1). As a result, USPS has lost a total of $56.8 billion since fiscal year 2007. The continued deterioration in USPS's financial condition is due primarily to two factors. 1. Declining mail volumes: USPS continues to face decreases in mail volume, its primary revenue source, as online communication and e-commerce expand. First-Class Mail volume in particular has declined significantly in recent years, even though First-Class Mail remains USPS's most profitable product. For example, while total mail volume declined 27 percent from its peak in fiscal year 2006 (including a 1 percent decline in fiscal year 2015), First-Class Mail volume has declined to a greater extent—40 percent since its peak in fiscal year 2001 (with a 2 percent decline in fiscal year 2015). USPS reported that the most significant factor contributing to the decline in First-Class Mail volume is the continued migration toward electronic communication and transaction alternatives—a migration USPS expects to continue for the foreseeable future. USPS added that the decline in First-Class Mail was exacerbated by the Great Recession, which the National Bureau of Economic Research reported as lasting from December 2007 to June 2009. In the long run, USPS faces the risk of increasing diversion of mail to electronic alternatives and the possibility of future economic downturns that could negatively affect mail volumes. USPS has reported that although increased shipping and package volume has offset some of the declines in mail volume, this volume has a smaller profit margin than First-Class Mail. USPS will need to be efficient in its processing and delivery of packages to capitalize on growth in that market. 2. Growing expenses: While mail volume has declined, USPS's operating expenses have been rising. USPS reported that its key operating expenses grew in fiscal year 2015—notably salary increases for unionized employees, as well as additional work hours, in part due to a 14.1 percent growth in shipping and packages, which are more labor intensive to process. Despite efficiency initiatives such as consolidation of 36 mail-processing facilities in 2015, total employee work hours increased, and the size of USPS's career workforce increased slightly in fiscal year 2015—the first increase in the size of the career workforce since fiscal year 1999. Compensation and benefits comprise close to 80 percent of total USPS expenses. Thus, expenses will grow further if increases in salaries and work hours continue. According to USPS, increases in compensation and benefits costs (primarily from increased wages) will add $1.1 billion in additional costs in fiscal year 2016. As previously discussed, USPS's unfunded liabilities and debt have become a large and growing financial burden, increasing from 99 percent of USPS revenues at the end of fiscal year 2007 to 182 percent of revenues at the end of fiscal year 2015 (see table 2 in app. I for more detail). At the end of fiscal year 2015, USPS's $125 billion in unfunded liabilities and outstanding debt represented a $7.4 billion increase from the previous year. In addition, reduced mail volumes and growing expenses have contributed to USPS's inability to fully meet its requirement to prefund retiree health benefits.
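To make the liabilities-to-revenue comparison above concrete, the short Python sketch below works through the arithmetic; the revenue figure is an illustrative assumption inferred from the reported 182 percent ratio, not an official USPS number.

    # Minimal sketch of the liabilities-to-revenue arithmetic described above.
    # The revenue figure is an illustrative assumption, roughly implied by the
    # reported ratio (about $125 billion equaling 182 percent of revenue).

    unfunded_liabilities_and_debt_fy2015 = 125.0  # billions of dollars, per the report
    assumed_revenue_fy2015 = 68.7                 # billions of dollars, assumed for illustration

    ratio = unfunded_liabilities_and_debt_fy2015 / assumed_revenue_fy2015
    print(f"Liabilities and debt as a share of revenue: {ratio:.0%}")  # about 182 percent

    # The report states this total rose $7.4 billion from the prior year,
    # implying roughly the following end-of-fiscal-year-2014 total.
    implied_prior_year_total = unfunded_liabilities_and_debt_fy2015 - 7.4
    print(f"Implied end of fiscal year 2014 total: about ${implied_prior_year_total:.0f} billion")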
The Postal Accountability and Enhancement Act (PAEA) established the Postal Service Retiree Health Benefits Fund and required USPS to begin prefunding health benefits for its current and future postal retirees, with annual payments of $5.4 billion to $5.8 billion from fiscal years 2007 through 2016, followed by actuarially determined prefunding payments beginning in 2017 and every year thereafter. As of the end of fiscal year 2015, USPS's liability for retiree health benefits was about $105.2 billion and the Postal Service Retiree Health Benefits Fund balance was $50.3 billion, with a resulting unfunded liability of $54.8 billion. USPS has not made a prefunding payment since fiscal year 2011, with a total of $28.1 billion in missed payments. These missed payments represent about half of USPS's total losses since fiscal year 2007. Even without the annual prefunding requirement, however, USPS would still have lost $10.8 billion during this time period. USPS has stated that it expects to miss its required prefunding payment of $5.8 billion due at the end of fiscal year 2016. USPS is unlikely to be able to make its required retiree health and pension payments in full in the near future. Beginning in fiscal year 2017, USPS's payments will be restructured: it will no longer be required to make fixed prefunding payments but will instead be required to make annual payments based on actuarial determinations of the following component costs: a 40-year amortization schedule to address the unfunded liabilities for postal retiree health benefits, the "normal costs" of retiree health benefits for current employees, and a 27-year amortization schedule to address the unfunded liabilities for postal pension benefits under the Civil Service Retirement System (CSRS). These payments are in addition to annual payments USPS is already required to make to finance its pension benefits under the Federal Employees Retirement System (FERS), which consist of a 30-year amortization schedule to address any unfunded liabilities and the normal costs of FERS benefits for current employees. USPS will find it very difficult to make all of these required payments given its financial condition and outlook. As table 1 below shows, in fiscal year 2017, USPS will be required to make an estimated total of $11.3 billion in payments for retiree health and pension benefits under CSRS and FERS—about $4.6 billion more than what USPS paid in fiscal year 2015 for these benefit programs. In addition to declining mail volumes and increased expenses, USPS's ability to make its required payments for these retirement programs will be further challenged due to: Expiration of a temporary rate surcharge: USPS reported that it generated $2.1 billion in additional revenue during fiscal year 2015 and $1.4 billion in additional revenue in fiscal year 2014 as a result of a 4.3 percent "exigent" surcharge that began in January 2014. However, USPS expects this surcharge to be discontinued around April 2016, when the surcharge is expected to have contributed $4.6 billion in total additional revenue. USPS expects its additional revenue from the exigent surcharge to be about $1.1 billion in fiscal year 2016, with no additional revenue in fiscal year 2017, as the surcharge will have expired.
USPS recently reported that the expiration of the exigent surcharge will have an adverse impact on its future operating revenue and liquidity, and that its actions to increase efficiency, reduce costs, and generate additional revenue may be insufficient to meet all of its financial obligations or to carry out its strategy. No new major cost-savings initiatives planned: USPS has no current plans for major new initiatives to achieve cost savings in its operations. USPS officials recently told us that it is not yet known whether USPS will have sufficient financial resources to make all or a portion of its legally required payments for retiree health and pension benefits at the time that they become due. USPS further reported that without structural change to its business model, absent legislative change, it expects continuing losses and liquidity challenges for the foreseeable future. Large unfunded liabilities for postal retiree health and pension benefits—which were $78.9 billion at the end of fiscal year 2015—may ultimately place taxpayers, USPS employees, retirees and their beneficiaries, and USPS itself at risk. As we have previously reported, funded benefits protect the future viability of an enterprise such as USPS by not saddling it with bills later after employees have retired. Further, since USPS retirees participate in the same health and pension benefit programs as other federal retirees, if USPS ultimately does not adequately fund these benefits and if Congress wants these benefits to be maintained at current levels, funding from the U.S. Treasury, and hence the taxpayer, would be needed to continue the benefit levels. Alternatively, unfunded benefits could lead to pressure for reductions in benefits or in pay. Thus, the timely funding of benefits protects USPS employees, retirees, beneficiaries, taxpayers, and the USPS enterprise. USPS's financial situation leaves Congress with difficult choices and trade-offs to achieve the broad-based restructuring that will be necessary for USPS to become financially sustainable. Making its required retiree health and pension payments will require USPS to decrease expenses, increase revenues, or both. As we have previously reported, USPS's actions alone under its existing authority will be insufficient to achieve sustainable financial viability; comprehensive legislation will be needed. Congressional decisions about how to address the following issues will shape USPS's future role, services, operations, networks, and ability to adapt to changes in mail volume. In making these decisions, Congress could consider, among other things, the following factors. The level of postal services and the affordability of those services: USPS's growing financial difficulties combined with vast changes in how people communicate provide Congress with an opportunity to consider what postal services will be needed in the 21st century. Specifically, Congress could consider what postal services should be provided on a universal basis to meet customer needs and how these services should be provided. Congress also could consider trade-offs in reducing the level of postal services, such as providing USPS with the authority to reduce the frequency of letter mail delivery, to enable USPS to reduce its expenses. A key consideration in any reduction of postal services would be the potential effects on postal customers, mail volumes, and employees.
In particular, Congress could weigh the quality of postal service—such as the frequency and speed of mail delivery and the accessibility and scope of retail postal services—in considering any service reduction. In January 2015, for example, USPS revised its standards for on-time mail delivery by increasing the number of days for some mail to be delivered and still be considered on time. However, under the revised delivery standards, the percentage of mail delivered on time declined for many types of mail, such as First-Class Mail and Periodicals. USPS attributed declines in delivery performance to operational changes it implemented in January 2015 coupled with adverse winter weather. Compensation and benefits in an environment of revenue pressures: Key compensation and benefits costs have increased and continue to increase for USPS employees, while demand for USPS's main revenue source, mail and First-Class Mail in particular, has declined and continues to decline. Further, the exigent rate increase mentioned above is expected to expire in April 2016. To put USPS's situation into context, many private sector companies (such as automobile companies, airlines, mail preparation and printing companies, and major newspapers) took far-reaching measures to cut costs (such as reducing or stabilizing workforce, salaries, and benefits) when demand for their central products and services declined. Although USPS also has taken a range of cost-cutting measures, it has stated that its strategies to increase efficiency and reduce costs by adjusting its network, infrastructure, and workforce and to retain and grow revenue are currently constrained by statutory, contractual, regulatory, and political restrictions. For example, USPS does not administer its employees' pension, health, and workers' compensation benefits programs, and postal rates are regulated by the Postal Regulatory Commission, with rate increases for most mail limited by an inflation-based price cap. Most USPS employees are covered by collective bargaining agreements with four major labor unions, which have established salary increases, cost-of-living adjustments, and the share of health insurance premiums paid by employees and USPS. When USPS and its unions are unable to agree, the parties are required to enter into binding arbitration by a third-party panel. There is no statutory requirement for USPS's financial condition to be considered in arbitration. Considering USPS's poor and deteriorating financial condition and the competitive environment, we continue to believe—as we reported in 2010—that Congress should consider revising the statutory framework for collective bargaining to ensure that USPS's financial condition be considered in binding arbitration. USPS's dual role of providing affordable universal service while remaining self-financing: As an independent establishment of the executive branch, USPS has long been expected to provide affordable, quality, universal delivery service to all parts of the country while remaining self-financing. USPS and other stakeholders have considered a range of different business models to address USPS's financial difficulties. For example, USPS's 2002 Transformation Plan included a range of alternatives, from a publicly supported model to a business model with a corporate structure supported by shareholders.
An alternative business model, if any, would need to address the level of any costs that would be transferred from USPS, which is financed by postal ratepayers, to the federal government, which is funded by taxpayers. In addition, if Congress requires eligible postal retirees to participate in Medicare, as USPS has previously proposed, it should consider the trade-offs for the federal budget deficit and Medicare's financial condition, as well as the implications for affected employees. Finally, a fully functioning USPS Board of Governors is needed to support USPS's ability to carry out its critical responsibilities. USPS's 11-seat Board of Governors is required by law to have a quorum of six members in order to take certain actions. Because two Governors left the Board in December 2015 due to term limits, the Board currently consists of only one Governor (who will not be able to serve past December 2016), the Postmaster General, and the Deputy Postmaster General. Certain powers are reserved to the Governors. USPS has reported that although the inability of the Board to constitute a quorum does not inhibit or affect the authority of the Governors in office to exercise those powers, it is not apparent how those powers could be exercised if there were no Governors. According to USPS, the critical responsibilities reserved to the Governors are setting postal prices, approving new products, and appointing or removing the Postmaster General and the Deputy Postmaster General. USPS has stated that, in the event no Governors are in place, these critical duties might not be carried out, potentially leaving USPS without the ability to adjust its prices as needed, introduce new products, or appoint or replace its two most senior executive officers. In conclusion, USPS management, unions, the public, community leaders, and Members of Congress need to take a hard look at what level of postal services residents and businesses need and can afford. The status quo is not sustainable. Chairman Johnson, Ranking Member Carper, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information about this statement, please contact Lori Rectanus, Director, Physical Infrastructure Issues, at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. In addition to the contact named above, Frank Todisco, Chief Actuary, FSA, MAAA, EA, Applied Research and Methods; Teresa Anderson; Samer Abbas; Kenneth John; Faye Morrison; and Crystal Wesco made important contributions to this statement. Mr. Todisco meets the qualification standards of the American Academy of Actuaries to render the actuarial opinions contained in this testimony.
Appendix I presents selected USPS liabilities, debt, and unfunded obligations for fiscal years 2007 through 2015, in billions of dollars, distinguishing selected USPS liabilities included on USPS's balance sheet from retiree health and pension fund amounts not fully included on the balance sheet. Table 2 shows the funded status (unfunded) of retiree health benefits, the Civil Service Retirement System (CSRS), and the Federal Employees Retirement System (FERS); USPS liabilities and outstanding debt; and unfunded obligations, liabilities, and debt as a percentage of revenue, with total USPS liabilities, debt, and unfunded obligations reaching $125.2 billion at the end of fiscal year 2015. A second table shows the net funded status of the CSRS and FERS pension programs and of retiree health benefits, with a combined pension and retiree health benefit funded status of $(78.9) billion at the end of fiscal year 2015. Unfunded obligations, liabilities, and debt are the sum of the unfunded actuarial liabilities (obligations), USPS liabilities, and debt shown in the table. Total USPS revenue consists of total USPS operating revenue plus interest and investment income for each fiscal year. Total assets consist of current assets, including cash, and noncurrent assets largely comprising property and equipment measured at historic purchase value after depreciation; this does not include assets funding the retiree health and pension benefits. | USPS is a critical part of the nation's communication and commerce, delivering 154 billion pieces of mail in fiscal year 2015 to 155 million delivery points. However, USPS's mission of providing prompt, reliable, and efficient universal services to the public is at risk due to its poor financial condition. USPS's net loss was $5.1 billion in fiscal year 2015, which was its ninth consecutive year of net losses. At the end of fiscal year 2015, USPS had $125 billion in unfunded liabilities, mostly for retiree health and pensions, and debt—an amount equal to 182 percent of USPS's revenues. In July 2009, GAO added USPS's financial condition to its list of high-risk areas needing attention by Congress and the executive branch. USPS's financial condition remains on GAO's high-risk list. In previous reports, GAO has included strategies and options for USPS to generate revenue, reduce costs, increase the efficiency of its delivery operations, and restructure the funding of USPS pension and retiree health benefits. GAO has also previously reported that Congress and USPS need to reach agreement on a comprehensive package of actions to improve USPS's financial viability. This testimony discusses (1) factors affecting USPS's deteriorating financial condition, (2) USPS's ability to make required retiree health and pension payments, and (3) choices Congress faces to address USPS's financial challenges.
This testimony is based primarily on GAO's work over the past 5 years that examined USPS's financial condition—including its liabilities—and updated USPS financial information for fiscal year 2015. The U.S. Postal Service's (USPS) financial condition continues to deteriorate as a result of trends including: Declining mail volume: First-Class Mail—USPS's most profitable product—continues to decline in volume as communications and payments migrate to electronic alternatives. USPS expects this decline to continue for the foreseeable future. Growing expenses: Key USPS expenses continue to grow, including salaries and work hours, due in part to growth in shipping and packages, which are more labor-intensive. Compensation and benefits comprise close to 80 percent of USPS's expenses. USPS's financial condition makes it unlikely that USPS will be able to make its required retiree health and pension payments in full in the near future. In fiscal year 2015, USPS was required to make $12.6 billion in retiree health and pension payments, but it made only $6.7 billion in payments because it did not make a required retiree health payment of $5.7 billion. USPS's required payments will be restructured in fiscal year 2017, with estimated payments totaling $11.3 billion—$4.6 billion more than what USPS paid in fiscal year 2015. USPS's ability to make these required payments will be further challenged due to: Expiration of a temporary rate surcharge: This surcharge on most postal rates, in effect since January 2014, has generated $3.6 billion in additional revenue through September 2015 and is expected to expire in April 2016. No new major cost-savings initiatives are planned. Large unfunded liabilities for postal retiree health and pension benefits—which were $78.9 billion at the end of fiscal year 2015—may ultimately place taxpayers, USPS employees, retirees, and their beneficiaries, and USPS itself at risk. As we have previously reported, funded benefits protect the future viability of an enterprise such as USPS by not saddling it with bills later after employees have retired. Further, since USPS retirees participate in the same health and pension benefit programs as other federal retirees, if USPS ultimately does not adequately fund these benefits, and if Congress wants these benefits to be maintained at current levels, funding from the U.S. Treasury, and hence the taxpayer, would be needed to continue the benefit levels. Alternatively, unfunded benefits could lead to pressure for reductions in benefits or in pay. Congress faces difficult choices and trade-offs to address USPS's financial challenges. The status quo is not sustainable. Considerations for Congress include the (1) level of postal services provided to the public and the affordability of those services, (2) compensation and benefits for USPS employees and retirees in an environment of revenue pressures, and (3) tension between USPS's dual roles as an independent establishment of the executive branch required to provide universal delivery service and as a self-financing entity operating in a businesslike manner. |
During the 1990s, the primary means for residential users to access the Internet was a dial-up connection, in which a standard telephone line is used to make an Internet connection at data transmission speeds of up to 56 kilobits per second (kbps). Broadband access to the Internet became available to residential customers by the late 1990s. Broadband connections offer a higher-speed Internet connection than dial-up. For example, some broadband connections in the United States offered over telephone lines can provide speeds exceeding 1 megabit per second (Mbps) both upstream (data transferred from the consumer to the Internet service provider, also known as upload) and downstream (data transferred from the Internet service provider to the consumer, also known as download). These higher speeds enable consumers to receive information much faster and thus access certain applications and content that might be inaccessible with a dial-up connection. Also, broadband typically provides an "always on" connection to the Internet, so users do not need to establish a connection to the Internet service provider each time they want to go online. The higher transmission speeds that broadband offers generally cost more than dial-up, and some broadband users pay a premium to obtain very-high-speed service. Consumers can receive a broadband connection to the Internet through a variety of technologies, including, but not limited to, the following: Cable modem. Cable television companies began providing broadband cable modem service in the late 1990s. This service, which is primarily available in residential areas, enables cable operators to deliver broadband service through the same coaxial cables that deliver pictures and sound to television sets. Although the speed of service varies with many factors, download speeds of up to 6 Mbps are typical. Some cable providers are offering even higher download speeds, up to 100 Mbps. Digital subscriber line (DSL). Local telephone companies provide DSL service, another form of broadband service, over their telephone networks on spectrum unused by traditional voice service. To provide DSL service, telephone companies must install equipment in their facilities as well as install or provide DSL modems and other equipment at customers' premises; they may also have to remove devices on phone lines that may cause interference. Most residential customers receive older, asymmetric DSL (ADSL) service with download speeds of 1.5 Mbps to 3 Mbps. ADSL technology can achieve speeds of up to 8 Mbps over short distances. Newer DSL technologies can support services with speeds of over 8 Mbps up to 50 Mbps in some areas. Satellite. Satellites transmit data to and from subscribers from a fixed position above the equator, eliminating the need for a telephone or cable connection. Typically, a consumer can expect to download data at a speed of about 1 Mbps and upload data at a speed of about 200 kbps. Transmission of data via satellite involves a slight lag, typically one-half to three-fourths of a second, thus rendering this service less suitable for certain Internet applications, such as videoconferencing. While satellite broadband service may be available throughout the country, its use requires a clear line of sight between the customer's antenna and the southern sky.
The equipment necessary for service, the recurring monthly fees, and the installation costs are generally higher for satellite broadband service than for most other broadband transmission modes. Wireless. Land-based, or terrestrial, wireless broadband service connects a home or business to the Internet using a radio link. Some companies are offering fixed wireless broadband service throughout cities. Also, mobile telephone carriers have begun offering broadband mobile wireless Internet service, allowing subscribers to access the Internet with their mobile phones or laptops in areas throughout cities where their provider supports the service. Wireless fidelity (Wi-Fi) networks—which provide broadband service in so-called "hot spots," or areas within a radius of up to 300 feet—can be found in cafes, hotels, airports, and offices. Hot spots generally use a short-range technology that provides speeds up to 54 Mbps. In addition, fourth-generation (4G) wireless technology, now in the early stages of deployment, is expected to achieve broadband speeds as fast as 50 to 100 Mbps for a few users over an extended period of time or for short periods of time for many users. Some 4G technologies, such as Worldwide Interoperability for Microwave Access (known as WiMAX), can provide broadband service up to approximately 30 miles, but at that distance, data transmission rates would be low. Fiber optic. Fiber optic technology converts electrical signals carrying data to light and sends the light through transparent glass fibers about the diameter of a human hair. In countries such as Japan and Korea, the governments are encouraging providers to offer, in the next 3-5 years, data transmission speeds exceeding current DSL or cable modem speeds, typically by tens or even hundreds of megabits per second, up to 1 gigabit per second (Gbps) in some areas. Fiber may be provided in several ways, including direct connection to a customer's home or business, or to a location somewhere between the provider's facilities and the customer. In the latter case, the last part of the connection to the customer's premises may be provided over coaxial cable, copper loop, or radio technology. Such hybrid arrangements may be less costly than providing fiber all the way to the customer's premises, but they generally cannot achieve the high transmission speed of a full fiber-to-the-premises connection. In the United States, FCC is the federal agency principally responsible for broadband, but the scope of its authority has not been resolved. In a series of decisions starting in 2002, FCC classified broadband Internet services as "information services" under the Communications Act. "Information services" are not subject to Title II of the Communications Act, which addresses telecommunications services, such as phone service, and gives FCC authority to regulate those services. However, FCC asserted that it had authority to regulate broadband Internet service using its "ancillary authority" under Title I of the Communications Act. A recent decision of the U.S. Court of Appeals for the District of Columbia Circuit called that authority into question. In this case, Comcast Corp. v. FCC, the court reviewed an FCC decision that relied on ancillary authority to address an Internet service provider's network management practices. The court held that the use of ancillary authority must be tied to a specific statutory mandate in the Communications Act and that FCC had not done that in its order.
Since that time, FCC has released a Notice of Inquiry (NOI) to seek public comment on its legal framework for regulating broadband Internet services. The NOI suggests that there are at least three legal options for FCC as follows: 1. Maintain the current “information service” framework for broadband Internet service based on the Title I ancillary authority questioned in Comcast. 2. Identify the connectivity portion of broadband Internet service as a “telecommunications service” to which all requirements of Title II of the Communications Act would apply. 3. Following the framework Congress established for cell phone services in 1993, identify the connectivity portion of broadband Internet service as a telecommunications service and simultaneously forbear from applying all but the minimum number of provisions of Title II needed to implement fundamental universal service, competition and market entry, and consumer protections. The NOI seeks comment on the three legal options and any other approaches that will restore a solid legal foundation for FCC’s broadband policies. Public comments were due on July 15, 2010, and replies on August 12, 2010. Three other federal agencies also have responsibilities for broadband in the United States: The Office of Science and Technology Policy (OSTP) within the Executive Office of the President has a broad mandate to advise the President and the federal government on the effects of science and technology on domestic and international affairs and has led interagency efforts to develop science and technology policies and budgets. Within the Department of Commerce, the National Telecommunications and Information Administration (NTIA) serves as the President’s principal telecommunications and information adviser and works with other executive branch agencies to develop the administration’s telecommunications policies. Within the Department of Agriculture, the Rural Utilities Service (RUS) provides financial resources for broadband deployment. Under the American Recovery and Reinvestment Act of 2009 (Recovery Act), enacted on February 17, 2009, NTIA and RUS have responsibility for distributing federal moneys to expand broadband. The act provided $7.2 billion to extend access to broadband throughout the United States, including $4.7 billion for NTIA and $2.5 billion for RUS. Specifically, the Recovery Act authorized NTIA, in consultation with FCC, to create the Broadband Technology Opportunities Program (BTOP) to manage competitive grants to a variety of entities for broadband infrastructure, public computer centers, and innovative projects to stimulate demand for, and adoption of, broadband. The Recovery Act made up to $350 million of the $4.7 billion available for developing and maintaining a nationwide map featuring the availability of broadband service, as provided in the Broadband Data Improvement Act. In addition, the Recovery Act made some of NTIA’s appropriation available for transfer to FCC for the development of a national broadband plan to help ensure that all people in the United States have access to broadband. The Recovery Act also authorized RUS to establish the Broadband Initiatives Program (BIP) to make loans and to award grants and loan-grant combinations for broadband infrastructure projects in rural areas. Pursuant to the Recovery Act, all BTOP and BIP funds must be awarded by September 30, 2010. 
In May 2009, we reported on the broadband deployment policy of the past administration, the principal federal programs that helped fund broadband infrastructure, and stakeholders' views on the usefulness of those programs. We also compared the policies of some OECD countries, which had higher broadband adoption rates than the United States, and recommended that those agencies responsible for overseeing federal efforts to increase broadband deployment and adoption—FCC, NTIA, and RUS—work together to specify performance goals and measures for broadband deployment and coordinate their efforts in carrying out the plan. In 27 of the 30 OECD countries, including the United States, broadband has been deployed to 90 percent or more of households regardless of demographic or geographic differences. High rates of broadband deployment have been achieved despite geographic and financial differences among the OECD countries. However, not all OECD countries have overcome the same challenges in deploying broadband infrastructure. For example, in Denmark, which is one of the smallest and most densely populated OECD countries, with an average of 128 people per square kilometer, broadband has been deployed to 99 percent of households. Yet in the United States, which is 228 times larger geographically and 56 times more populous, and which has an average of 32 people per square kilometer, broadband has been deployed to more than 95 percent of households. See figures 1 and 2. Across the 30 OECD countries, average broadband download speeds range from 1.352 Mbps in Mexico to 11.717 Mbps in South Korea, and the majority of countries have average broadband speeds of 3 Mbps to 8 Mbps, according to Akamai Technologies, a global Internet content provider that issues reports assessing broadband download speeds in approximately 71 countries. The United States, with assessed average speeds of 3.808 Mbps, ranks 14th among the OECD countries. However, broadband speeds can exceed averages under certain conditions. For example, in the United States, three localities—Berkeley, California (18.730 Mbps); Chapel Hill, North Carolina (17.483 Mbps); and Stanford, California (16.956 Mbps)—offer the highest average broadband speeds in the world. In addition, 21 of the 100 top cities Akamai evaluated are in the United States. The quality of broadband infrastructure is often characterized by the speed it is capable of providing to users. Greater broadband speeds enable the use of more services over the Internet. For example, the United States and Japan lead the world in demand for high-definition television (HDTV), which can consume up to 18 Mbps if broadcast over the Internet. Current Internet-based video requires 1-4 Mbps; if these speeds grow over time and demand for Internet-based HDTV is combined with demand for other broadband-based services, such as Web browsing and online gaming, a household's demand for broadband speeds could exceed 20 Mbps (see table 1 and the short illustration below). However, since most HDTV today is carried on dedicated infrastructure, the impact on demand for Internet broadband speeds is small. A number of demographic factors, such as population, cost, and computer ownership, affect broadband adoption rates. Seventeen OECD countries, including the United States (at 26.4 subscriber lines), have broadband adoption rates that exceed the average of 23.3 subscriber lines per 100 inhabitants.
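To illustrate the bandwidth arithmetic behind the household demand and connection speeds discussed above, the following minimal Python sketch uses figures drawn from the ranges cited in the text; the mix of concurrent applications and the file size are illustrative assumptions, not measurements.

    # Minimal sketch of household bandwidth demand and download times, using
    # illustrative figures from the ranges cited above (assumptions, not data).

    concurrent_applications_mbps = {
        "Internet-based HDTV stream": 18.0,  # "up to 18 Mbps" per the text
        "Standard Internet video": 2.0,      # within the cited 1-4 Mbps range
        "Web browsing": 1.0,                 # assumed
        "Online gaming": 1.0,                # assumed
    }
    total_demand = sum(concurrent_applications_mbps.values())
    print(f"Estimated concurrent household demand: {total_demand:.0f} Mbps")  # about 22 Mbps

    # Rough download-time comparison for a 10-megabyte (80-megabit) file,
    # ignoring protocol overhead, at dial-up versus a typical cable modem speed.
    file_megabits = 80.0
    for label, speed_mbps in [("56 kbps dial-up", 0.056), ("6 Mbps cable modem", 6.0)]:
        print(f"{label}: about {file_megabits / speed_mbps:.0f} seconds")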
Furthermore, the United States has more subscribers than any other OECD country—81 million, or more than twice as many as Japan, which has 31 million, the second-highest number of subscribers. Population is an important factor to consider when analyzing broadband adoption rates. For example, 7 of the 10 countries with the highest adoption rates are also among the 10 countries with the smallest populations. Because the population of the United States is significantly larger than that of the other OECD countries, a 1-unit increase in the broadband adoption rate in the United States requires more than 3 million new U.S. broadband subscriber lines. By contrast, the Netherlands would need another 160,000 subscriber lines to achieve a 1-unit increase. Assuming all other factors are equal, the cost of a 1-unit increase in broadband subscriber lines per 100 inhabitants is considerably higher in the United States than in any other country. See figures 3 and 4. Cost—including a monthly broadband subscription price and the price of a computer or other device for accessing the Internet—is another key factor affecting broadband adoption rates. A 2009 survey conducted for FCC determined that 65 percent of American adults use broadband at home, although another 12 percent use the Internet, either through a dial-up connection or at a place other than their home. However, this study also surveyed nonadopters in the United States and determined that more than one-third identified cost as the main factor affecting their decision not to subscribe to broadband service. Additionally, subscription costs generally increase with speed, typically making higher-speed services more challenging for users to adopt. As a result, cost concerns may limit the level of Internet-based applications consumers can access. Broadband service prices can be assessed along several criteria, such as the average price per megabit or the price for a given "speed tier." As table 2 shows, prices for broadband service in the United States are below OECD averages except for very-high-speed service. In the United States, several demographic factors, including education and income, are also thought to affect broadband adoption. For example, having a child in school and having a higher income are associated with higher broadband adoption levels. According to FCC's 2009 survey, 75 percent of parents with a minor child have broadband services at home, as do 91 percent of households with annual incomes of more than $75,000. By contrast, 40 percent of households with annual incomes of less than $20,000 have broadband services at home. See table 3. Personal computer ownership has also been linked to broadband adoption, since computers enable users to access Internet-based services. Of the 30 OECD countries, the United States ranks fifth in personal computer ownership, with 80.6 per 100 inhabitants—a rate considerably above the average of 52.3 per 100 inhabitants. Yet despite this high personal computer ownership rate, FCC's 2010 survey indicates that 10 percent of U.S. individual nonadopters surveyed cited the cost of computer ownership as one of the main reasons for nonadoption. According to our analysis of OECD and World Bank data, income is a factor that drives broadband adoption across the OECD countries.
For example, Turkey, which has the lowest adoption rate (9.0 subscribers per 100 inhabitants), also has the lowest gross national income (GNI) per capita ($9,020) of the OECD countries, while the United States, which ranks 15th in adoption (with 26.4 subscribers per 100 inhabitants), ranks eighth in GNI per capita ($47,930). Norway, which ranks third in adoption (with 33.9 subscribers per 100 inhabitants), has the highest GNI per capita ($87,340). As figure 5 shows, broadband adoption generally declines as income declines, although outliers do exist. The seven countries we selected as case studies, all of which had achieved higher levels of either broadband deployment or broadband adoption than the United States as of the fourth quarter of 2009, have taken similar actions to increase deployment and adoption—actions that stakeholders in these countries told us they considered effective. Through our case studies, we identified five overall categories of actions: (1) establish plans and policies to guide deployment and provide leadership support, (2) provide government funding through public/private partnerships, (3) promote competition, (4) implement strategies to make broadband services more available and useful to consumers, and (5) provide digital literacy training and consumer subsidies. All seven selected countries have instituted broadband plans. Generally, these plans include some mix of short- and long-term goals, action plans, and performance metrics. Such attributes align with the framework set forth by the Government Performance and Results Act of 1993 (GPRA), which stresses the importance of having clearly stated objectives, performance plans, goals, and measures to improve a program’s effectiveness. Some stakeholders told us that the adoption of such plans, with their accompanying goals and action items, helped focus national efforts to increase the deployment and adoption of broadband. The following are examples: Japan adopted a plan in 2001 with the goal of providing speeds of up to 30 Mbps to at least 30 million households and speeds of up to 100 Mbps to at least 10 million households by 2005 and achieved this goal by 2003. In 2009, Japan adopted the e-Japan Strategy 2015 and set new target speeds of 1 Gbps for fixed networks, more than 100 Mbps for mobile networks, and 100 percent adoption of broadband services by approximately 2015. In 1997, Canada started the Government On-Line program to organize service and information around the needs of its people and businesses. Since 2002, through such programs as Broadband for Rural and Northern Development (BRAND) and Connecting Canadians, Canada has brought connectivity to rural and remote areas and achieved the goal of connecting public institutions, including schools and libraries, in all of Canada’s 4,000 communities. In 2009, Canada adopted Broadband Canada, a program that will provide $225 million over 3 years to deploy broadband infrastructure to residents in unserved rural and remote areas. In 2009, the United Kingdom issued the Digital Britain plan, which calls for 100 percent availability of a connection capable of download speeds of at least 2 Mbps by 2012. In Sweden, from 2001 to 2007, the government adopted a policy of deploying broadband to rural areas lacking access, and, in 2008, 99 percent of households had access to some form of broadband. 
In 2009, the Swedish government adopted the Broadband Strategy for Sweden with the goal of ensuring that 90 percent of households have access to broadband speeds of at least 100 Mbps by 2020. In addition to goals, leadership is recognized as important in helping to increase broadband deployment and adoption. In Korea, government officials cited their President’s constant emphasis on broadband initiatives as a factor that has helped to increase broadband adoption. In addition, the country’s ministries emphasize e-government services and often compete with each other to develop new Internet applications. In France, the government created the Office of the Digital Development Minister in March 2008 and made it responsible for crafting a national broadband strategy known as Digital France 2012. The goal of this strategy is to achieve 100 percent broadband access by 2012 and to facilitate coordination among the various ministries with authority over information technology. Case study governments, at the national or regional levels or both, have used public/private partnerships to help fund broadband deployment in unserved and underserved areas. Whereas private enterprises have deployed broadband infrastructure in high-density urban areas where there is a strong business case for such investment, they have independently invested less in low-density rural areas or isolated communities, where deployment costs more per household and offers less opportunity for profit. Officials in both the public and private sectors of several of the countries we visited acknowledged that some areas are unprofitable to serve and some incentive, usually in the form of government funding, is required to motivate private investment and achieve universal access. The public/private partnerships in our case-study countries range from local authorities and private companies that have shared the cost of building a network to municipalities that own broadband networks and contract with private companies to operate and maintain them. The following are examples: Japan’s Ministry of Information and Communications told us that, although 98.6 percent of households have broadband access, the government has instituted a public/private partnership program to support the establishment of broadband infrastructure in rural and remote areas where broadband service is not available and hopes to eliminate all areas without broadband access by the end of March 2011. Under this arrangement, the national government provides one-third of the total cost of installing broadband networks, requiring that the local government formulate plans in collaboration with the private sector and help create demand for broadband. Local governments in Japan maintain ownership of the network and attract the private sector by selecting one company to provide service for the area. From 2001 to 2007, Sweden initiated a broadband funding initiative to expand broadband to rural and remote areas using a public/private partnership model. Financing was provided through state funds, local authorities, and broadband operators, and, in order to participate, a local authority had to provide at least 5 percent of the funding. A government evaluation of the funding program determined that broadband had been deployed to more remote areas than would have received broadband without the funding. 
In 2006, in order to stimulate economic growth, a large suburb of Paris, France, Hauts-de-Seine, issued a request for proposal; in 2007, Hauts-de-Seine hired a private company to deploy a fiber network to all its residents, enterprises, and public sites within 6 years and to operate the network as a shared fiber network, one open to all competitors. Regional officials told us they entered into this arrangement to prevent the creation of a digital divide, which would have occurred without the involvement of the municipality because no commercial provider was expected to deploy infrastructure equally to all areas, both rich and poor, urban and suburban. Public officials of Hauts-de-Seine told us that the public/private partnership arrangement would optimize the implementation of the network by reducing the cost of deployment of a fully open infrastructure and allowing service providers to increase their customer base. In addition, after 25 years, ownership of the network will revert to Hauts-de-Seine. In Canada, in 2001, the City of Ottawa was amalgamated with several of its surrounding municipalities, and, within the new boundaries, 90 percent of the city's landmass and 10 percent of its residents were rural. At that time in the rural areas, 2 percent of the residents had access to broadband. To bring broadband to the entire amalgamated area, in 2007, Ottawa entered into a partnership with a private broadband provider. Ottawa issued a request for proposal, set a goal of 100 percent availability, and selected a company that provided both fixed wireless and satellite service. A city official told us that some satellite coverage was necessary because Ottawa's uneven terrain would have made it too costly to erect enough towers to provide wireless connections for all residents. The city official told us that the private company had given the city more than it had asked for and its bid did not request the maximum contribution from the city. Currently, broadband service is available to 100 percent of the amalgamated area's residents, with 98 percent of rural areas served by terrestrial wireless and the last 2 percent served by satellite. Adoption rates range from 80 percent in the city to 50 percent in the rural areas. In Korea, officials told us they have used public/private partnerships to help reduce the digital divide between urban and rural areas. For example, rural villages with more than 50 households are receiving broadband service at speeds of up to 50 Mbps from Korea Telecom (KT) in partnership with the Korean government. When KT transitioned from government to private ownership in 2002, it had to commit to providing infrastructure to rural areas. However, since 2005, the government has shared the cost, with KT contributing 50 percent, the central government 25 percent, and the local government 25 percent. Private enterprise has been slow to deploy fiber directly to customers' premises in several countries. While fiber can provide the highest speeds, it is costly to deploy, and consumer demand for speeds above 50 Mbps is limited. Moreover, some existing DSL and cable networks can provide speeds in excess of 50 Mbps. Nevertheless, some municipalities have determined that fiber is necessary for their future well-being and have decided to deploy it despite private companies' unwillingness to bear the full investment costs.
To finance the deployment of fiber in their areas, some of these municipalities have established public/private partnerships. The following are examples: Stokab, a municipally owned fiber network in Stockholm, Sweden, was founded in 1994. Stokab officials told us that the municipality of Stockholm had determined that fiber appeared to be the most viable technology for the foreseeable future, although the local telephone provider did not express any interest in deploying fiber infrastructure at that time. In addition, city officials told us they knew that if, in the future, multiple companies chose to provide fiber to the city, the streets could be dug up several times, causing disruption and damaging Stockholm’s historic buildings and cobblestone streets. To avoid such a scenario, Stockholm officials set up Stokab, which deploys and maintains the physical infrastructure and leases dark fiber to multiple businesses, which may use the fiber for their own business or to provide service to others. Stokab is thus a wholesaler to other business entities. Stokab officials told us that many municipalities in Sweden have adopted models similar to Stokab. In Amsterdam, the Netherlands, in 2000, broadband service was widely available over cable and telephone lines, but there was no fiber to the home. Officials said they believed fiber would protect the city’s future competitiveness, although commercial companies did not want to invest in fiber at that time. Accordingly, in 2006, Amsterdam formed Glasvezel Amsterdam (GNA) to finance a fiber network in conjunction with private investors to provide broadband services throughout the city. The city is not a majority shareholder in GNA, and it is treated like any other private investor. GNA has deployed infrastructure to multiple dwelling units comprising 43,000 apartments and began a new rollout to another 100,000 homes in 2009. Although public/private partnerships have provided both public and private benefits, they have nevertheless raised some concerns. For example, some providers have expressed reservations about using public funds to support businesses in competition with private enterprise. Two providers told us that they think it is unfair to use public funds to finance wireline broadband to compete with a company providing broadband over a satellite or wireless network in rural areas because there is not enough business in such areas to support one unsubsidized company. In addition, officials at companies in Japan and Canada questioned the sustainability of government-funded projects and expressed concern about who would be responsible for maintaining government-funded infrastructure once the government funding is gone. The European Commission has placed some limitations on the use of public funds to establish businesses in competition with private enterprise. Public officials have also expressed concern about the interoperability of municipal networks and have identified a need to provide some guidance to municipal personnel. Public officials in Sweden, the United Kingdom, and the Netherlands have suggested that uniform standards or some form of guidance from the central or national government would be helpful when localities are forming public/private partnerships to deploy broadband infrastructure.
The following are examples: Officials in Sweden told us that although the national government’s provision of funds to municipalities from 2001 through 2007 helped to deploy broadband to rural and remote areas, it also led to a profusion of incompatible networks. If they were to support future efforts, the officials said, they would impose more requirements and draw up standards applicable to all municipal systems. Officials in the United Kingdom told us that the government recognized it would not achieve universal broadband deployment without the cooperation of municipalities, but that municipalities needed guidance on how to set up a municipal broadband network to receive state aid. The government has provided such guidance. Officials in the Netherlands told us that the ministry is publishing guidelines for municipalities that contain best practices to give towns ideas of how to set up and manage broadband networks. In all seven of our case-study countries, from 93.5 percent to 100 percent of households have access to broadband, and those in the urban areas have a choice of at least two broadband providers. In some of the countries we visited, such as Canada and the Netherlands, the two main providers of broadband service for the majority of urban and suburban populations are the telephone company and the cable company, both of which provide service over their own networks. However, in other countries, such as France and Sweden, wireline cable service has not been universally deployed, and there is no cable provider that is competing nationwide with the telephone company. To ensure a national competitive market for wireline broadband services, six of seven countries have increased the level of competition in the provision of wireline broadband service through laws, regulations, or both, which require the incumbent telephone carrier to open its copper networks (the legacy infrastructure used to provide telephone service) and provide access to competitors at wholesale prices. This activity is commonly referred to as “unbundling.” Unbundling has been credited with giving most urban residents in France, the United Kingdom, Sweden, the Netherlands, and Japan a choice of three or more providers. Government officials in some countries told us that requiring companies to unbundle has provided several consumer benefits, such as greater competition, higher speeds, more services, and lower prices. Examples from some of those countries are as follows: Swedish authorities credit network unbundling with relatively low consumer prices and good service quality. Officials in the Netherlands told us that unbundling the local loop has stimulated competition, resulting in the deployment of DSL to more than 99 percent of the country’s households. In the United Kingdom, officials of the Office of Communications (Ofcom), the telecommunications regulator, told us that, since unbundling, at least four additional operators have entered the British broadband market. In Korea, although unbundling has not increased competition, several companies are competing with incumbent providers by building their own networks. One company official attributed the limited success of unbundling in South Korea to difficulties in getting access to the incumbent’s network. Another company official said several competing telecommunications infrastructures had developed because competition for customers is based on speed. 
If a company is using another company’s network, it cannot provide faster service than the company whose network it is leasing. Consequently, in most urban areas of Korea, residents have a choice of four providers, each of which offers service over its own infrastructure. To further encourage competition and ensure that incumbents do not stifle competition by charging prohibitively high prices for access to their infrastructure, all seven countries also regulate the price the incumbent carrier can charge competitors for network access. The majority of our case-study countries have benefited from requiring the incumbent telecommunications carrier to unbundle its copper telephone lines, but the benefits of fiber unbundling are less clear. Both the Netherlands and Japan have required fiber unbundling, and Great Britain has proposed virtual unbundling of fiber; however, officials in some case-study countries cited concerns about the effect of requiring unbundling, pointing out that overregulation too early in the fiber rollout will hamper investment. Furthermore, industry representatives in Japan told us that although the unbundling of copper lines has increased competition, the unbundling of fiber has had less effect, because of the high cost of accessing a competitor’s fiber network. These industry representatives explained that G-Pon, the most cost-effective and most widely used architecture for deploying fiber, is currently cost-prohibitive to unbundle and technological limitations restrict the profitability of leasing an incumbent’s fiber infrastructure. Representatives of OECD also voiced similar concerns and told us that they have advocated using a network architecture other than G-Pon in order to facilitate competition. Thus, the manner in which fiber is most often deployed could affect future efforts to foster competition over fiber networks. Although from 90 percent to 100 percent of households in all seven of our case-study countries have access to some form of broadband, approximately 30 percent of households do not subscribe to wireline broadband service. Increasing usage is important to policymakers because, as OECD has stated, “Broadband not only plays a critical role in the workings of the economy, it connects consumers, businesses, and governments and facilitates social interaction.” Governments in all seven of our case-study countries have attempted to increase usage through strategies for making broadband services more available and more useful to consumers. Examples are as follows: All seven countries have provided funding to deploy broadband to schools, and some have made computers available to students either free or at low cost. Japan’s Ministry of Education provides one computer per student at the elementary school level. Korea provided free Internet service to all primary, middle, and high schools throughout the country. In the Netherlands, the Ministry of Economic Affairs told us that every town was subsidized in some way, to encourage broadband use in schools and in new buildings. One subsidy was for people to buy personal computers for the home, since the children were learning about the Internet in the schools. In all seven countries, to increase the usefulness of broadband to citizens, governments have made services for citizens available over the Internet, commonly referred to as e-government services.
For example, in the United Kingdom, the government is planning to introduce a service, Tell Us Once, that will allow a person to register a birth or death online with a single organization rather than with multiple organizations, as is currently required. In Korea, taxes can be filed online, and the government offers a rebate for using this method of filing. The Netherlands has provided all citizens access to government documents, including tax and social security information. In the United Kingdom, Ofcom established a voluntary code of practice for service providers to give the public information about and create accountability for advertised broadband speeds. Ofcom took this step because consumers were choosing service providers without knowing the capabilities of various Internet speeds, why service speeds were important, or whether they were receiving the advertised speeds they had purchased. All the leading Internet service providers enrolled in the code of practice, and Ofcom is now amending the code so that if a customer receives speeds below a certain estimate, the customer can change providers with no penalty. Ofcom also supported a research program to identify actual broadband speeds and compare the different providers’ speeds and services. Ofcom has published its research and made the results available to the public on its Web site. Korea instituted a voluntary premise certification program to encourage building owners to upgrade their broadband access facilities. Once a building is certified, the owner can display one of four emblems indicating the speed or type of access provided or both, with speeds ranging from 10 Mbps (Class 3) to 1 Gbps (Special Class). Building owners have found that offering faster broadband speeds allows them to charge higher rents. Countries have also funded research to promote the use of broadband. For example, in the Netherlands, the government provided grants for three projects to promote high-speed broadband use to facilitate infrastructure deployment and service. Canada sponsors the Scientific Research and Experimental Development program, which provides federal tax incentives for Canadian businesses to conduct research and development in Canada that will lead to new, technologically advanced products or processes, including broadband technologies. Research in the United States has shown that portions of the population do not use and have not adopted broadband Internet for various reasons, including lack of knowledge, lack of interest, lack of access to a computer, or inability to pay for broadband service. Governments of several of the countries we studied determined that some initiatives are necessary to increase broadband usage among these groups. In South Korea, the government has provided classes to more than 10 million residents, including those living in rural areas, the elderly, and housewives, to make them more comfortable with accessing and using the Internet. The government has also provided Internet service at reduced monthly subscription rates for the economically disadvantaged and offers free Internet access to many rural communities through community access points. The United Kingdom expects to spend £300 million to provide reduced-cost broadband access to low-income subscribers. The Ministry of Economic Affairs in the Netherlands has developed a digital literacy program for the elderly to make them more comfortable with the Internet. From 1998 to 2007, Sweden implemented a measure to increase the availability of personal computers to the home.
The program offered a tax deduction to all persons who were gainfully employed, regardless of income, and resulted in purchases of some 2.1 million personal computers. The National Broadband Plan includes over 200 recommendations, which the plan’s executive summary groups into four areas—(1) designing policies to ensure robust competition; (2) managing government assets, such as rights-of-way, to encourage network upgrades; (3) using government funds to help subsidize both deployment in high-cost areas and adoption among low-income groups; and (4) maximizing the benefits of broadband in the sectors government influences significantly, such as education, health care, and government operations. These four areas are not identical to the five types of actions we identified in our case-study countries, but the areas and the types of actions overlap and represent similar approaches to expanding broadband deployment and adoption. In addition, FCC acknowledges that findings from its own international research, conducted in part to implement the Broadband Data Improvement Act, influenced aspects of the plan. Implementing the plan’s recommendations will be challenging, requiring the coordination of multiple public- and private-sector entities. Table 4 compares the five types of actions taken in our case-study countries with the plan’s four areas. Just as the governments of our seven selected countries established plans and policies to guide their efforts to expand broadband deployment and adoption, the National Broadband Plan contains recommendations to FCC, Congress, and federal agencies designed to guide future federal efforts. The plan also calls for a number of actions to facilitate measurement of its effects over time. These actions include collecting more data to support benchmarking against goals and tasking FCC to create a Broadband Performance Dashboard on its Web site to display key indicators aligned with the plan’s long-term goals. The purpose of the dashboard is to promote public understanding of important broadband performance metrics and to clearly communicate the progress and effectiveness of efforts to implement the plan. Specifically, the dashboard is expected to detail the types of metrics that FCC should collect and analyze in order to track progress toward the plan’s goals. Table 5 illustrates the dashboard information for one performance goal set forth by the National Broadband Plan. In addition to plans and policies, senior governmental leadership was important for other countries to achieve or progress toward their broadband goals. Similarly, the National Broadband Plan identifies leadership commitment as a key to its success by recommending that the executive branch create a Broadband Strategy Council consisting of senior White House, National Economic Council, and Office of Management and Budget officials, as well as high-level officials from FCC, NTIA, and other agencies with a role in the plan’s implementation. The recommended council would coordinate and implement the National Broadband Plan’s recommendations across executive branch agencies. In all seven selected countries, governments have provided funding through various mechanisms, such as grants and loans, to help pay for the deployment of infrastructure in areas private enterprise deems unprofitable. Similarly, the National Broadband Plan proposes various national funding strategies and mechanisms that are consistent with a federal role in ensuring equal access to broadband services. 
For example, to help accelerate the rate of broadband deployment to unserved areas, the plan recommends that Congress consider providing funding to areas where no business case exists for private-sector investment. In all seven of our selected countries, public/private partnerships have helped fund the deployment of broadband infrastructure. These partnerships often help maximize government resources and minimize risk for the private investors. Although the National Broadband Plan recognizes the value of public/private partnerships in efforts to increase adoption, it does not explicitly recommend their use to help fund broadband deployment. However, it does recommend that Congress make clear that tribal, state, regional, and local governments can build broadband networks. Specifically, the plan says that when all other options for meeting residents’ broadband needs are exhausted, it should be clear that local authorities can build broadband networks. Stakeholders from some of our selected countries, as well as in the United States, commented on the advisability of providing some guidance to aid municipalities in forming such partnerships and building broadband networks, although we did not assess the need for such guidance. Each of our case-study countries found that competition had been a key component of increasing innovation and, for several of the countries, reducing prices. Six of our seven case-study countries found that promoting competition by unbundling the telephone networks allowed competitors to provide broadband service using existing DSL technology, often avoiding the need for repeated and costly deployment of additional telephone infrastructure. The National Broadband Plan has also identified competition as a key component, noting that “Competition is crucial for promoting consumer welfare and spurring innovation and investment in broadband access networks. Competition provides consumers the benefits of choice, better service and lower prices.” However, according to FCC, it is unclear whether the broadband “ecosystem” in the United States is competitive, so the government needs to continue to study the current competitive environment and the future implications of the current competition structure in America. To promote competition in the wholesale market, the plan calls for FCC to “comprehensively review its current policies and develop a cohesive and effective approach to advancing competition through its wholesale access policies.” One specific recommendation is for FCC to establish an analytical approach to resolving disputes to ensure that the rates, terms and conditions that incumbent local exchange carriers charge to competitors for special access services are just and reasonable, since the plan recognizes that the adequacy of the existing regulatory regime has been subject to much debate. However, the plan does not recommend that FCC oversee the prices incumbent carriers charge competitors to make certain they are cost-based, as is done by several countries, including the United Kingdom and France. In addition, the plan finds that expanding wireless broadband infrastructure by increasing the availability of wireless spectrum would help spur competition in the United States. Currently, consumers who value high download and upload speeds would not consider wireless broadband to be a substitute for wireline service. 
However, additional spectrum would make faster download speeds possible, allowing companies to offer wireless services that would compete more effectively with the capabilities of wireline broadband services. All seven of our selected countries have taken actions to increase the number of government services available to the public on the Internet, as has the United States. According to the United Nations (UN), e-government is a powerful tool and essential to the achievement of internationally agreed-upon development goals, including the Millennium Development Goals. In 2010, the United States ranks second, behind South Korea, in advanced e-service delivery, up from fourth place in 2008. In fact, according to the UN, the United States has been a leader in the provision of e-government services. The National Broadband Plan would continue to strengthen this leadership by enhancing the availability and capability of e-government services across the federal government. Specifically, the plan calls for the Office of Science and Technology Policy within the Executive Office of the President to develop a 5-year strategic plan for online service delivery. In addition, to advance the provision of e-government services, the plan includes more than a dozen recommendations aimed at making a wider array of citizen-based services available online to promote the use of digital media content across government. For example, one recommendation calls for executive branch and independent agencies to make all responses to Freedom of Information Act requests available online. Currently, there are no guidelines on the format to be used in responding to such requests. Finally, several of our case-study countries have provided digital literacy training or consumer subsidies or both to increase broadband usage, some targeting certain subgroups, such as the elderly and the poor. Digital literacy generally refers to a variety of skills associated with using information and communications technology (ICT) to find, evaluate, create, and communicate information. It also includes the ability to communicate and collaborate using the Internet—through blogs, self-published documents and presentations, and collaborative social networking platforms. The National Broadband Plan recommends digital literacy training as a means of expanding broadband adoption, pointing out that, according to an FCC survey conducted in 2009, 22 percent of nonadopters in the United States identified lack of digital literacy as a main barrier to adoption, second only to cost. Describing digital literacy as a necessary life skill, much like the ability to read and write, the plan recommends that the federal government create a Digital Literacy Corps to conduct training and outreach. According to the plan, the corps would help nonadopters overcome discomfort with technology and fears of getting online while also helping people become more comfortable with the content and applications relevant to them. To further increase broadband adoption, the National Broadband Plan identifies several options available to the government. For example, to encourage adoption among low-income groups, the plan recommends that FCC expand the Lifeline Assistance (Lifeline) and Link-Up America (Link-Up) programs to make broadband more affordable for low-income households.
Currently, Lifeline lowers the cost of monthly service for eligible consumer households by providing support directly to service providers on behalf of those households, and Link-Up provides a one-time discount on the initial installation fee for telephone service but not for broadband. The plan also recommends that FCC consider providing free or very-low-cost wireless broadband service as a means to address or reduce the cost barrier to adoption by offering a band of wireless spectrum dedicated to free or low-cost broadband service as a complement to Lifeline. While the United States plans to take actions similar to those of other leading countries to achieve the National Broadband Plan’s goals of universal access and increased usage and adoption, implementing the plan will be challenging. Action will be required by governments at all levels and the private sector to deploy broadband infrastructure to the last 5 percent of households at a reasonable cost and to promote broadband usage and adoption by increasing digital literacy and making broadband services more affordable for certain populations, especially the elderly and the economically disadvantaged. Furthermore, as the Chairman, FCC, has acknowledged, implementing the plan will require obtaining sufficient funding and coordinating the work of multiple federal, state, local, and private entities, among other actions. It remains to be seen whether and how effectively federal agencies will be able to address these challenges and implement the plan’s recommendations, as well as what the private sector will do to further deployment and adoption. We provided a draft of this report to FCC for review and comment. FCC provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Commerce, the Secretary of Agriculture, and the Chairman of the Federal Communications Commission. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix II. To determine the status of broadband deployment and adoption in developed countries, we reviewed data collected by the Organisation for Economic Co-operation and Development (OECD) for the 30 countries that were members of OECD as of January 1, 2010. Chile has since become a member, but relevant data were not available for our review. Specifically, we considered broadband wireline infrastructure coverage by country, total subscriptions by country, and subscriptions per 100 inhabitants. To understand demographic and socioeconomic factors associated with broadband deployment and adoption, we considered information obtained from several sources, including the World Bank for income levels by country and numbers of personal computers per 100 inhabitants; the Central Intelligence Agency (CIA) World Factbook for population and land mass statistics; and the Federal Communications Commission (FCC) for demographic information on current broadband adoption levels in the United States.
For analysis of broadband speeds, we obtained and reviewed data from Akamai Technologies, Inc. For analysis of average broadband prices, we obtained and reviewed data from OECD. We assessed the reliability of OECD and Akamai Technologies, Inc., data by (1) reviewing existing information about the data and the systems that produced them; (2) interviewing agency and company officials knowledgeable about the data; and (3) performing manual testing for missing data, outliers, and obvious errors in required data elements. We determined that these data were sufficiently reliable for the purposes of this report. We assessed the reliability of the World Bank, CIA, and FCC data by (1) reviewing existing information about the data and the systems that produced them and (2) performing manual testing for missing data, outliers, and obvious errors in required data elements. We determined that these data were sufficiently reliable for the purposes of this report. To better understand the status of broadband deployment in the United States, we interviewed relevant federal government officials at FCC, the Department of Commerce’s National Telecommunications and Information Administration (NTIA), and the Department of Agriculture’s Rural Utilities Service (RUS). We also interviewed officials of companies that provide broadband service in multiple states—Verizon, AT&T, Comcast, and Windstream—and representatives of a national consumer welfare organization, Consumers Union. To help inform our analysis of public/private partnerships, we interviewed officials of a public/private partnership in Bristol, Virginia, which was highlighted in the National Broadband Plan, as well as representatives of other public/private partnerships recommended to us. These included a consortium of public/private partnerships in Utah, an official of ECFiber in Burlington, Vermont, and a former Motorola executive working on a public/private partnership in Massachusetts. To better inform our understanding of the deployment of fiber infrastructure, we interviewed representatives of the Fiber-to-the-Home Council. To determine the actions stakeholders in selected countries have taken to increase broadband deployment and adoption in the last decade, we first chose 7 countries for case-study analysis. We limited our potential field of countries to those that were members of OECD and were ranked among the top 20 in broadband subscriptions per 100 inhabitants as of the first quarter of 2009. We used OECD’s list of country rankings as our basis for selecting countries because it is the only annually updated report that offers a comprehensive analysis of data provided by governments. We analyzed the demographic profile of each of these countries, including its land area, population and population density, gross national income (GNI), and actions its government had taken to increase broadband deployment and adoption. Actions taken included, but were not limited to, national broadband plans, broadband deployment plans, specific adoption strategies, and e-government services. We chose countries that were in some way similar to the United States and recognized as being particularly successful in increasing broadband deployment or adoption. 
To determine if a country’s government had taken action to increase the deployment of broadband infrastructure to rural or underserved areas, we performed a literature search of publicly available government documents, as well as of international documents that provided country-specific information about broadband deployment, including reports from OECD, the European Union, the International Telecommunication Union (ITU), and the World Bank. Furthermore, to understand each country’s broadband adoption strategies, we conducted literature reviews and reviewed reports from government agencies and OECD. We also reviewed the United Nations’ (UN) E-Government Survey 2010 to understand and compare OECD countries’ efforts to deliver citizen-based services over the Internet. We assessed the reliability of the UN data by (1) reviewing existing information about the data and the system that produced them and (2) performing manual testing for missing data, outliers, and obvious errors in required data elements. We determined that these data were sufficiently reliable for the purposes of this report. The seven countries we selected for case-study analysis were Canada, France, Japan, the Netherlands, South Korea, Sweden, and the United Kingdom. Before visiting these seven countries, we identified key contacts through research and agency contacts. To learn what actions governments and broadband providers have taken to increase broadband deployment and consumer adoption, and how those actions are viewed by various stakeholders, we visited each of the seven countries, conducted other in-person research, and collected documents. Using a semistructured interview instrument, we obtained information from key contacts in each country (see fig. 3), including government officials, representatives of broadband service providers (both incumbents and competitors), officials of localities involved in providing broadband services through public/private partnerships, and representatives of groups dedicated to protecting consumers. Following our visits to these seven countries, we reviewed and analyzed the information collected, including current policies, plans, and guidance issued by responsible government agencies, regulatory authorities, and broadband providers. To determine how recommendations outlined in the National Broadband Plan reflect the actions of selected countries to increase broadband deployment and adoption, we analyzed the results of our case studies and placed the actions of the 7 countries in five categories. We placed the actions to increase deployment in two categories—(1) instituting plans and policies and (2) providing government funding through public/private partnerships—and the actions to increase adoption in three categories—(3) increasing competition, (4) implementing strategies to increase the usefulness of the Internet to citizens, and (5) providing digital literacy training and consumer subsidies. We then analyzed relevant recommendations outlined in the National Broadband Plan and interviewed relevant individuals at FCC to determine how actions recommended in the plan align with the five identified categories. However, we did not evaluate the potential impact or effectiveness of the recommendations made in the plan. We conducted this performance audit from June 2009 to September 2010, in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Dave Sausville, Assistant Director; Pedro Almoguera; Elizabeth Curda; Bess Eisenstadt; Muriel Forster; Dave Hooper; Hannah Laufe; SaraAnn Moessbauer; Josh Ormond; Madhav Panwar; Sandra Sokol; Spencer Tacktill; and Nancy Zearfoss made key contributions to this report.
| Increasingly, broadband Internet service is seen as critical to a nation's physical infrastructure and economic growth. Universal access to, and increased use and adoption of, broadband service are policy goals stated in the National Broadband Plan, which the Federal Communications Commission (FCC) released in March 2010. Some recent studies indicate that despite achieving nearly 95 percent broadband deployment and globally competitive adoption rates, the United States has moved from the top to the middle of the international rankings. Other developed countries, which have made universal access and increased adoption priorities, rank higher than the United States in these areas, and their experiences may be of interest to U.S. policymakers. GAO was asked to address (1) the status of broadband deployment and adoption in developed countries, (2) actions selected countries have taken to increase deployment and adoption, and (3) how recommendations in the National Broadband Plan align with the selected countries' actions. GAO analyzed relevant information for 30 developed countries that are members of the Organisation for Economic Cooperation and Development (OECD) and visited 7 of these countries selected for their broadband policies and economic or demographic characteristics. GAO also interviewed public- and private-sector contacts in these countries and FCC officials. FCC provided technical comments on this report. Broadband infrastructure has been widely deployed in developed countries, but broadband adoption rates are more variable because of cost and other factors. In 27 of the 30 OECD countries, including the United States, broadband has been deployed to 90 percent or more of households, regardless of differences in demographic and geographic factors, while broadband adoption rates are affected by factors such as population, cost, and computer ownership. In the United States, which ranks 15th for both deployment and adoption, broadband has been deployed to 95 percent of households, with 26.4 subscribers per 100 inhabitants--above the OECD average of 23.3. To increase broadband deployment and adoption, the 7 countries GAO selected--Canada, France, Japan, the Netherlands, South Korea, Sweden, and the United Kingdom--have taken actions that stakeholders in these countries considered effective. GAO placed these actions in five categories--(1) instituting plans and policies, (2) providing funds through public/private partnerships, (3) increasing competition, (4) expanding online services, and (5) providing digital literacy training, consumer subsidies, or both. All 7 countries have instituted some type of broadband plan. To help increase deployment in areas private enterprise views as unprofitable, national or regional governments in all 7 countries have used public/private partnerships.
To help increase usage and thus expand adoption, all 7 have enacted policies to encourage competition and have increased the number of government services available online. Several countries have also offered training or subsidies, often targeting populations with low adoption rates. The recommendations outlined in the National Broadband Plan reflect actions taken in GAO's 7 selected countries to increase broadband deployment and adoption. The plan contains over 200 recommendations for FCC, other government agencies, and Congress, which the plan's executive summary groups in four broad areas. These four areas are not identical to the five types of actions GAO identified in the selected countries, but both represent similar approaches to expanding broadband deployment and adoption. For example, the plan calls for adopting strategies and long-term goals, while the actions taken by the selected countries include instituting plans that contain strategies and goals. Similarly, the plan advocates policies to promote robust competition, just as the selected countries have taken actions to promote competition. While the United States plans to take actions similar to those of other leading countries to achieve the National Broadband Plan's goals of universal access and increased adoption, achieving these goals will be challenging. Actions will be required by governments at all levels and the private sector. Furthermore, implementing the plan's recommendations will require coordinating the work of multiple stakeholders and obtaining sufficient funding, among other actions. How effectively federal agencies will be able to address these challenges and implement the plan's recommendations, as well as what the private sector will do to further deployment, use and adoption, remains to be seen. |
SSI is an income assistance program for people who are aged, blind, or disabled. It was authorized in 1972 and is administered by SSA. To be eligible for SSI, individuals cannot have income greater than the maximum benefit level (in 1995, $458 per month for an individual and $687 for a couple if both spouses were eligible) or resources worth more than $2,000 ($3,000 for a couple), subject to certain exclusions, such as a home that is the primary residence. In 31 states and the District of Columbia, SSI recipients are automatically eligible for Medicaid without filing a separate application for benefits with the state Medicaid agency. The remaining states may require a separate application for Medicaid benefits or have more restrictive definitions of disability and financial eligibility requirements than SSI. Beginning in 1981, individuals filing SSI claims were prohibited from transferring resources for less than fair market value to qualify for SSI. Under the provision prohibiting such transfers, SSI applicants or recipients who got rid of resources to qualify for SSI had the uncompensated value of those resources counted toward the resource limit for 24 months from the date of transfer. As a result, such individuals were probably ineligible for SSI benefits for 2 years after transferring resources, and, in many cases, they were also ineligible for Medicaid for the same length of time. In 1988, the Congress eliminated the SSI restriction for resource transfers at less than fair market value, allowing individuals who dispose of resources to qualify for benefits. The Congress, however, retained a similar restriction for the transfer of resources by individuals applying for Medicaid nursing home benefits. Under the current Medicaid provision, applicants for Medicaid long-term care benefits who transfer resources at less than fair market value within 3 years of application or within 3 years of entering a nursing home are deemed to be temporarily ineligible for such benefits. Since information on resource transfers is relevant to the Medicaid nursing home eligibility decision, the law requires SSA to ask SSI applicants about resource transfers even though their answers do not affect the determination of their SSI eligibility. SSA is also required to provide this information to state Medicaid agencies. A provision to reinstate a transfer-of-resource restriction for certain transfers was included in welfare reform legislation passed by the 104th Congress, which was subsequently vetoed by the President. SSA is currently considering the merits of reinstating an SSI transfer-of-resource restriction and may include such a proposal in its fiscal year 1997 legislative proposals. Since 1989, the number of SSI recipients reporting nonexcludable resource transfers has substantially increased, from fewer than 500 in 1989 to almost 2,800 in 1994. Between 1988 and 1994, 9,326 recipients reported transferring resources. While the number of recipients reporting resource transfers is relatively small compared with the total number of SSI recipients, it represents a growing population receiving millions of dollars in SSI benefits each year. We analyzed data on those individuals for whom data were maintained centrally in an automated database at SSA headquarters; this represented about one-third of the 9,326 SSI recipients who reported resource transfers, about 3,505 recipients (see app. I for more details). 
We estimate that between 1990 and 1994 these recipients transferred cars, homes, land, cash, and other resources worth over $74 million. The average value of transferred resources was about $21,000. This recipient group of 3,505 does not include the more than 5,800 transfers documented in nonautomated case files, nor does it include recipients who failed to report resource transfers. Consequently, the total amount of resources transferred is larger than our estimate. Although SSI benefits are for those with limited income and resources, the resources recipients transferred were often of considerable value. These individuals could receive millions of dollars in SSI benefits in the 24 months after they transferred resources. For example, one individual transferred an apartment complex valued at $800,000 to a nonrelative in May 1994. In July 1994, this person applied for SSI and has subsequently received about $6,800 in SSI payments. Another individual gave away about 380 acres of land valued at $100,000 to a relative in September 1993. This person applied for SSI in October 1993 and has received about $4,200 in SSI payments. In many cases individuals applying for SSI benefits reported having transferred large amounts of cash. For example, one individual gave away almost $38,000 in cash to a relative in July 1992 and applied for SSI in August 1993. This person has received about $4,900 in SSI payments. In another case, a person gave away $29,000 to a relative in September 1993 and applied for SSI in the same month. This person has received about $4,300 in SSI payments. Since repeal of the resource transfer restriction in 1988, 9,326 SSI recipients reported transferring resources before applying for or while receiving SSI; however, the actual number of people who did so is unknown. The extent of resource transfers is unknown because field office claims representatives accept self-reported information. If an applicant does not report a transfer, SSA does not verify this information nor is it required to. Consequently, instances in which individuals transfer resources but do not report the transfer are not detected. Moreover, we found cases in which questionable data were accepted by the claims representatives. Although SSA requires an applicant to provide a bill of sale or other documents to establish that the applicant no longer owns the resource, it does not verify the value because resource transfers do not affect the amount of SSI benefits an individual receives. As a result, our estimate of $74.3 million in resource transfers from 1990 to 1994 probably understates the actual value of resources transferred. Some recipients (5.5 percent) reported transferring resources such as homes and other property but reported the value as $0. For example, one individual gave a house and 72 acres of land to a relative and reported a market value of $0. Moreover, 7.4 percent of recipients reported transferring resources without reporting any value for the resources. In addition to those recipients reporting the value of their resources as $0, other recipients apparently reported inaccurate market values of the resources they transferred. For example, an individual gave away 4 acres of land and reported the value as $10. Another individual gave away two homes and reported the total value of the homes as $20. 
Under the restriction in effect until 1988, resources transferred by individuals were counted as a resource for 2 years after the date of the transfer, making such individuals ineligible for SSI benefits until 24 months elapsed. We estimate that the 3,505 recipients who reported transferring resources between 1990 and 1994 would receive about $7.9 million in SSI benefits during the 24 months following the date the resources were transferred. Assuming that some individuals did not report such transfers, the total amount of benefits paid is likely to be larger than our estimate, which was based on the 3,505 cases. Currently, the period of ineligibility for Medicaid long-term care is based on the value of the resources transferred at less than fair market value. That is, the period of ineligibility is calculated by dividing the uncompensated value of the resource by the average monthly cost of nursing home care in the state where the person lives. We estimate that from 1990 through December 1995 about $14.6 million in SSI program expenditures could have been saved if SSI had in place a transfer-of-resource restriction similar to Medicaid’s provision. For example, if an individual gave away $25,000, under the previous SSI transfer-of-resource restriction, the person would have been ineligible for SSI benefits for 2 years. However, basing the period of ineligibility on the uncompensated value of the resource divided by the maximum SSI payment that can be awarded would have resulted in about 4-1/2 years of ineligibility. Most of the 3,505 recipients who reported transferring resources were, like most SSI recipients, eligible for Medicaid acute-care benefits. In 1994, aged SSI recipients who received Medicaid benefits averaged about $2,800 in benefits, and blind and disabled SSI recipients averaged about $5,300, excluding nursing home and institutional care. An SSI transfer-of-resource restriction could possibly result in savings in the Medicaid program. Some of the individuals denied SSI benefits would not become eligible for Medicaid during the period in which they were ineligible for SSI. We cannot estimate potential Medicaid savings because some individuals denied SSI could possibly receive Medicaid by applying for “medically needy” coverage directly to the state in which they live. SSA estimated that it spent about $600,000 in fiscal year 1995 to obtain transfer-of-resource information. However, virtually all of these costs were related to explaining the provision and asking individuals about resource transfers. SSA incurred little cost to verify the accuracy of reported information. If a restriction were reinstated, SSA would have to substantially expand the effort required to verify the accuracy and completeness of transfer information reported by individuals as well as detect unreported transfers. This is important because individuals may be less likely to report transfers if such transfers affect SSI eligibility. Verifying the accuracy of reported transfer information would be less costly than detecting unreported transfers. Although no data exist to estimate the potential costs of the additional verification and detection requirements that SSA would have to initiate, the costs could be significant. Eliminating the SSI transfer-of-resource restriction has increased SSI benefit expenditures and program costs, which is especially troublesome considering current budgetary constraints.
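The ineligibility-period comparison in the passage above can be illustrated with a short, hypothetical calculation. The sketch below is not drawn from SSA systems or from the report's methodology; it simply restates the report's $25,000 example in code, assuming the 1995 maximum federal SSI benefit for an individual ($458 per month) as the divisor for a value-based rule.

```python
# Illustrative sketch: two ways to set an SSI ineligibility period after a
# resource transfer. Figures follow the report's example; using the 1995
# maximum individual federal benefit ($458/month) as the divisor is an
# assumption made for illustration.

def flat_period_months():
    """Pre-1988 SSI rule: transferred resources counted against the recipient
    for a flat 24 months."""
    return 24

def value_based_period_months(uncompensated_value, monthly_divisor):
    """Medicaid-style rule: ineligibility lasts until the transferred value is
    notionally used up at the monthly divisor rate."""
    return uncompensated_value / monthly_divisor

transferred_value = 25_000    # dollars given away, per the report's example
max_monthly_benefit = 458     # 1995 maximum federal SSI benefit for an individual

months = value_based_period_months(transferred_value, max_monthly_benefit)
print(f"Flat rule: {flat_period_months()} months (2 years)")
print(f"Value-based rule: {months:.1f} months (about {months / 12:.1f} years)")
# Prints roughly 54.6 months, or about 4-1/2 years, matching the report's example.
```

A value-based rule of this kind scales the penalty with the amount transferred, so larger transfers produce longer ineligibility periods than the flat 24-month bar of the pre-1988 restriction.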
The number of new recipients reporting transfers of resources has increased dramatically since repeal of the restriction. These individuals, who transferred resources that they could have used for self-support, are instead receiving SSI benefits. In addition, many of these individuals, by virtue of their admission to the SSI program, have also become eligible for Medicaid acute-care benefits. An SSI transfer-of-resource restriction similar to Medicaid’s restriction could save millions in SSI program expenditures by delaying individuals’ date of eligibility for benefits. Such a restriction could also save an unknown amount of Medicaid expenditures. If a restriction were reinstated, SSA would have to considerably expand the steps it takes to verify the value of transferred resources as well as develop mechanisms to detect unreported transfers. This is especially important because individuals might be less likely to report transfers once they affected SSI eligibility. As a result, SSA would incur additional administrative expense in implementing such procedures. However, these cost estimates are not readily available and would have to be developed by SSA. Moreover, this use of SSA’s limited resources and the increased administrative costs should be properly balanced with the benefits of bolstering the program’s integrity by assuring the public that people may not rely on public services when they can use their own resources and by guaranteeing that only those who need SSI will receive it. SSA is considering whether to include such a proposal in its fiscal year 1997 budget request. In light of the potential for reduced program expenditures and increased program integrity, the Congress may wish to consider reinstating an SSI transfer-of-resource restriction. The restriction could be calculated in a way that takes into account the value of the resource transferred so that individuals transferring more valuable resources would be ineligible for SSI benefits for longer periods of time than those who transfer less valuable resources. SSA agreed with our findings and conclusions that reinstating a transfer-of-resource restriction would increase the SSI program’s integrity. SSA noted that it is continuing to work with the Congress to include a provision restoring an SSI transfer-of-resource restriction in welfare reform legislation. SSA also stated its concern that our excluding eight cases from our sample significantly understates the number of cases with excludable resources. We excluded cases on the basis of comments in individuals’ files indicating that the resources transferred involved primary residences. Other cases involving transfers of excludable resources may also exist, but SSA could not identify which, if any, involved such resources, and we had no other means to identify those cases. SSA acknowledged that identifying such cases would be difficult since information on many of the transfers would not have been noted in the case files. The agency also made other technical comments, which we incorporated throughout the report as appropriate. (See app. II.) We are sending copies of this report to the Commissioner of the Social Security Administration and other interested parties. Copies also will be available to others on request. If you or your staff have any questions concerning this report, please call me on (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix III.
Data on the nature and value of the resources transferred in over half of the reported 9,326 transfers that occurred between 1988 and 1994 were not readily available because the information was not centrally located or contained in an automated database. This information was documented in case files in field offices or other storage facilities. In 1990, however, SSA began using an automated claims process, the Modernized Supplemental Security Income Claims System (MSSICS), to collect and document application information about SSI claimants. These data were centrally located at SSA headquarters and contained relevant automated information on 4,293 individuals who transferred resources. Of these individuals, 3,550 transferred their resources between 1990 and 1994 and received SSI benefits; the other 743 were denied benefits. From the 4,293 individuals, we selected a random sample of 750 individuals whose SSI applications were processed in MSSICS and obtained their transfer-of-resource data. We subsequently found that, of these 750 individuals, only 631 had been determined eligible for SSI; the other 119 were denied benefits. Under SSA operating guidance, field office claims representatives should only collect transfer-of-resource information on countable resources, that is, any assets that count toward the resource limit. The value of resources such as a home that is the primary residence or one automobile is excluded when calculating an individual’s resources. SSA officials expressed concern that some of the homes transferred by SSI recipients included in our sample were in fact primary residences. Because such transfers would not have been penalized under the previous transfer-of-resource restriction, SSA did not believe they should be included in our sample. SSA, however, could not identify which, if any, of the cases involved excludable resources. In response to SSA’s concern, we reviewed our sample and on the basis of comments noted in the cases determined that eight resource transfers may have involved primary residences. We excluded those cases from our sample. As a result, our revised sample size is 623. Although other cases involving potential transfers of excludable resources may be in our sample, comments indicating this were not noted in the individuals’ records, and we had no other available means to identify those cases. We assumed that the proportion of the 3,550 recipients with automated resource data in MSSICS who transferred resources other than primary residences would be the same as the proportion of these individuals in our random sample, 98.73 percent. Thus, we based our estimates on a population of about 3,505 recipients. All of the sampling errors reported below have a confidence level of 95 percent. For estimates of the value of resources transferred when a value was not reported by a recipient, we considered the value of that transfer to be $0. Our estimate of the total value of resources that recipients reported having transferred, $74.3 million, has a sampling error of plus or minus $12.9 million. The estimate of the average value of transferred resources, $21,000, has a sampling error of plus or minus $3,672. For the estimates of proportions in column 2 of table 1, sampling errors do not exceed plus or minus 3 percentage points. In addition, sampling errors associated with estimates of benefits to be received ($7.9 million) and potential program savings ($14.6 million) do not exceed plus or minus $1 million. 
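The population adjustment described in this appendix can be checked with simple arithmetic. The following sketch is illustrative only: it reuses the counts reported above and expresses the reported estimate and its sampling error as a 95 percent confidence interval, without recomputing the underlying variance, which would require the case-level data.

```python
# Illustrative sketch of the sampling arithmetic described in this appendix.
# All figures come from the report itself; no case-level data are used.

sample_size = 631             # sampled individuals found eligible for SSI
excluded_cases = 8            # transfers that may have involved primary residences
revised_sample = sample_size - excluded_cases             # 623

population_with_data = 3_550  # recipients with automated resource data in MSSICS
proportion_countable = revised_sample / sample_size        # about 0.9873
estimated_population = population_with_data * proportion_countable

print(f"Proportion of sample with countable transfers: {proportion_countable:.2%}")
print(f"Estimated population: about {estimated_population:,.0f} recipients")

# A point estimate and its sampling error define a 95 percent confidence interval.
total_estimate = 74.3e6       # estimated value of transferred resources, in dollars
sampling_error = 12.9e6       # reported sampling error at the 95 percent level
low, high = total_estimate - sampling_error, total_estimate + sampling_error
print(f"Estimated total transferred: ${low:,.0f} to ${high:,.0f}")
```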
Since the principal source of our automated data, the Supplemental Security Record (SSR), is subject to periodic SSA quality assurance reviews, we did not independently examine the computer system controls for the SSR. Except for the limitations noted, our review was done between May and December 1995 in accordance with generally accepted government auditing standards. In addition to those named above, the following individuals also made important contributions to this report: Graham D. Rawsthorn, Evaluator; Daniel A. Schwimer, Senior Attorney; Vanessa R. Taylor, Senior Evaluator (Computer Science); Nancy L. Crothers, Communications Analyst; James P. Wright, Assistant Director (Study Design and Data Analysis); and Joel I. Grossman, Social Science Analyst.
| Pursuant to a congressional request, GAO reviewed the Supplemental Security Income (SSI) program, focusing on: (1) the number of SSI recipients reporting resource transfers; (2) the kind and worth of resources being transferred; and (3) the possible savings resulting from a reinstatement of the SSI transfer-of-resources restriction. GAO found that: (1) while the number of SSI recipients reporting resource transfers has increased, they are only a fraction of the total number of SSI recipients; (2) between 1990 and 1994, 3,505 SSI recipients reported transferring resources worth more than $74 million; (3) the value of reported transferred resources varied and the actual extent of resource transfers is unknown because the Social Security Administration (SSA) does not verify or investigate SSI recipients' self-reported information about resource transfers; and (4) reinstating the SSI transfer-of-resource restriction could reduce program costs, reduce Medicaid costs, and increase SSA administrative costs. |
The advent of the Internet and digital transmission of sound recordings through personal computers has revolutionized the music industry and created a new way to transmit music directly to listeners. Although personal computers have been available since the late 1970s and music in digital form since the early 1980s, it was opening up the Internet to commercial activity in 1992 that set the stage for webcasting. In webcasts, sound recordings, such as records and compact discs, and live performances can be transmitted to listeners over the Internet. The popularity of webcasting is growing, with the number of listeners tripling over the past 3 years. Webcasting and traditional radio broadcasting follow essentially the same steps to deliver music to listeners (see figs. 1 and 2). Many webcasters and traditional radio stations deliver music to listeners at no charge. A key difference, however, concerns the number of potential listeners. In traditional radio broadcasting, a station’s signal is available to any number of listeners within range of the transmitter. In contrast, the potential audience for a webcast is anyone in the world whose computer is equipped with a media player. Webcasting, also called Internet streaming, is the process of transmitting digitized audio or video content over the Internet. The content can originate from live performances, records, compact discs, or other prerecorded formats. A webcast consists of several steps. The webcasters must first assemble the music that will be transmitted and then translate it into one or more digital formats. Music that is not streamed “live” must be stored so that it is available to individuals who use their personal computers to access the Web site created by the webcaster. The final step is delivering the music through an Internet connection. Choices about the audio quality of the transmitted music and the size of the audience affect the webcaster’s operation costs. The quality of the resulting music depends on the bandwidth—the number of bits of information transmitted per second—used by the webcaster. Higher bandwidth results in better sound quality of the transmitted music and allows a greater number of simultaneous listeners. The size of the Internet connection to the webcaster’s server and the choice of bandwidth determine the potential size of the audience. Although in its most basic form webcasting can be a relatively inexpensive “do-it-yourself” operation using a minimum of two computers and an Internet connection, the trade-off is lower sound quality and smaller audience size. Alternatively, webcasters that hope to reach a large audience with high-quality music frequently contract with one or more third parties to provide the different steps. Such third parties can provide a single service or some combination of services, including translating the music into digital form and adjusting bandwidth needs to accommodate the number of simultaneous listeners. Some may also provide data on the number and location of listeners. Because webcasters frequently deliver their music at no charge to listeners, webcasters may contract with other third parties, such as companies that wish to advertise products on the webcaster’s Web site, to obtain revenue that can help offset the costs associated with webcasting and return a profit to the webcaster. 
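Because a webcaster's connection must carry one stream for each simultaneous listener, the trade-off described above between sound quality and audience size reduces to simple arithmetic: total bandwidth is roughly the per-listener bit rate multiplied by the number of concurrent listeners. The sketch below illustrates that relationship; the bit rates and connection size are assumed values, not figures from this report.

```python
def required_bandwidth_mbps(bitrate_kbps: float, concurrent_listeners: int) -> float:
    """Approximate server bandwidth needed to stream to all listeners at once."""
    return bitrate_kbps * concurrent_listeners / 1_000  # kbps -> Mbps

# Hypothetical comparison: lower quality reaches more listeners on the same connection.
for bitrate in (32, 64, 128):  # kbps per listener (assumed values)
    listeners = int(10_000 / bitrate)  # listeners that fit in an assumed 10 Mbps connection
    print(f"{bitrate} kbps streams: about {listeners} simultaneous listeners on 10 Mbps")
    print(f"  100 listeners at {bitrate} kbps would need ~{required_bandwidth_mbps(bitrate, 100):.1f} Mbps")
```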
The Internet and the ability to digitally transmit sound recordings have created opportunities for the recording companies that typically own the copyrights in sound recordings to reach an unprecedented number of listeners. Accompanying these opportunities are challenges for copyright owners to maintain control over, and be compensated for, the use of their copyrighted recordings. In the United States, the owners of copyrights in sound recordings have not historically enjoyed the exclusive right to control or authorize public performances of their recordings. Traditionally these copyright owners generated royalties by selling copies of the recordings in the form of albums, cassette tapes, and compact discs. Although radio broadcasters pay royalties to publishers and writers for use of a musical work, they were not obligated to pay record companies for the use of sound recordings. Two key pieces of legislation gave copyright owners the right to control performances of sound recordings when they are digitally transmitted and gave webcasters the automatic right to use the recordings under certain circumstances in exchange for the payment of royalties under a statutory license. The Digital Performance Right in Sound Recordings Act, enacted in 1995, granted copyright owners the exclusive right to control or authorize the use of recordings when they are digitally transmitted but not, for example, when they are transmitted for use as background music in a restaurant. In the Digital Millennium Copyright Act (DMCA), the Congress expanded the scope of this digital transmission right. Among other things, the DMCA specifies that webcasters may operate under an automatic license to use copyrighted works at either a voluntarily negotiated rate or at a rate recommended by a panel known as a Copyright Arbitration Royalty Panel (CARP), subject to review by the Librarian of Congress. These rates, retroactive to October 1998, were to apply through December 2002. The act called for this procedure to be repeated every 2 years as the webcasting industry developed, though it could be extended by agreement between the copyright owners and webcasters. However, the legislation created conflict between record companies and webcasters. The DMCA provided the opportunity to negotiate royalty rates independently. But after negotiations between owners and webcasters broke down, the Library of Congress convened a CARP to resolve the issue and determine the appropriate rates. The CARP held hearings for 6 months, during which both the copyright owners and webcasters presented their cases. In February 2002, the CARP issued a royalty rate recommendation. In June 2002, the Librarian rejected some of the webcasting rates recommended by the CARP and issued a regulation that set royalty rates for Internet transmissions. Both record companies and webcasters contested the Librarian’s rates and sought relief in the courts. Some small webcasters believed that the rates set by the Librarian were too high, arguing that they would have to close their operations because they could not pay the rates set by the Librarian and that these rates would put an end to the promise of webcasting. Copyright owners believed the rates were too low and did not reflect the true market value of their music, causing them to, in essence, subsidize the webcasters. Moreover, they argued that royalties are simply another cost of doing business, like buying bandwidth, and webcasters that could not afford to pay them should not be operating. 
In response to these concerns, the Congress passed the Small Webcaster Settlement Act of 2002. The act did not set new royalty rates but instead allowed small webcasters and copyright owners another opportunity to negotiate an agreement on royalty rates for the period beginning October 28, 1998, through December 31, 2004. These negotiated rates were to be based on a percentage of revenue or expenses, or a combination of both; were to include a minimum fee; and were to apply in lieu of rates set by the Librarian of Congress (see table 1). This option was available to any small webcaster that met the agreed-upon eligibility requirements. In December 2002 the U.S. Copyright Office published the resulting agreement under the act. The agreement contained specific guidance for webcasters to follow in determining the specific revenue and expense categories that were to be included in the calculation of royalties due to copyright owners. The guidance defines revenues and expenses in ways compatible with generally accepted accounting principles and income tax reporting. As of October 2003, 35 small webcasters that had elected to follow the royalty rates and terms set out in the agreement were in operation. As shown in figure 3, these webcasters were located throughout the United States, with one in Canada. We interviewed 30 of these webcasters. Rock and pop are the types of music they most often delivered to listeners, although they also webcast rhythm and blues, jazz, “oldies,” and electronic dance music. For 17 of these small webcasters, the targeted audience includes both men and women, while the audience for the remaining 11 is predominately men. Almost all of these small webcasters target listeners between the ages of 25 and 34. Small webcasters have economic arrangements with various third parties, including bandwidth providers, advertisers, and merchandise providers. Other less commonly reported arrangements with third parties included those with companies that help small webcasters manage or obtain advertising, such as companies that insert ads either on the Web site or into the webcast, and companies that sell advertising based on the aggregate audience of multiple webcasters. We determined that the economic arrangements of the small webcasters that elected to follow the terms in the small webcaster agreement and those that elected not to do so were not substantially different. Fifty-two, or 91 percent, of the small webcasters that we interviewed reported having had arrangements with bandwidth providers during the year 2003. In addition, 24 small webcasters said that they had received free bandwidth. However, only 16 of them had received free bandwidth in 2003. Fifteen small webcasters reported that they had received bandwidth at a reduced price at some point, while 14 were receiving it at a reduced price in 2003. Although bandwidth is the dominant cost component for most webcasters, some bandwidth providers offer these incentives as a means to gain business for themselves and to promote the small webcaster market in general. Over half of the small webcasters interviewed had attempted to sell advertising space, either directly or through advertising firms. Of the 40 small webcasters that reported having attempted to sell advertising space, 38 said they were currently running advertising on their stations and 2 stated that they had run advertising in the past, but were no longer doing so. As shown in figure 4, these small webcasters had various methods of selling advertising space. 
Most reported that the owners or employees of their stations sold advertising space. Other arrangements to sell advertising space, such as through advertising firms or coalitions of webcasters, were less common. Small webcasters use various types of advertising on their sites. Banner ads, which are graphic images that typically appear toward the top of Web pages, were the most common type of advertising used by small webcasters (see fig. 5). Thirty-three of the small webcasters reported that they use banner ads on their sites, and another 2 reported that they used banner ads in the past, but no longer do so. Audio ads, which play at the beginning or during a small webcaster’s stream, were currently being used by 29 of the small webcasters, and another 3 reported having used them in the past. Video ads, which are either shown on the computer screen whenever the listener tunes to the station or during the stream, were less common. Only 9 of the small webcasters reported using video ads, and another 4 said they had used them in the past. Some of the small webcasters reported using some other type of advertising on their sites. Advertising is a primary source of revenue for the small webcasters we interviewed. Twenty-seven of the 58 small webcasters interviewed reported that advertising had provided at least 10 percent of their station’s gross revenue in 2003 (see fig. 6). According to industry analysts and representatives, advertising sales have remained low, in part due to the collapse of the high technology business sector since 2000 and because of the relative novelty of the Internet as an advertising medium. Twelve of the small webcasters interviewed reported that they had received free or reduced-price advertising since 1998. In addition to advertising, other sources of revenue for the small webcasters included donations and merchandise sales. Small webcasters generally did not have arrangements with other third parties, such as merchandisers and ad insertion companies. Twenty-five, or 44 percent, of the small webcasters that we interviewed reported that they had economic arrangements with suppliers of merchandise, such as T-shirts or coffee mugs in 2003 (see fig. 7). This represented an increase of 4 percent from the 1998 through 2002 time period. In addition to selling merchandise on their sites, 15 small webcasters reported that they had received merchandise, such as compact discs and T-shirts, for free or at a reduced price. Thirteen, or 23 percent, of the small webcasters reported that they had economic arrangements in 2003 with ad insertion companies, which sell either the technology for inserting ads into a webcaster’s audio stream or the service of inserting the ads. This technology can help small webcasters target their advertisements to the profiles of their listening audiences as well as provide links to their advertisers’ Web sites. Other types of economic arrangements that were even less common involved coalitions of webcasters and arrangements with aggregators. Twelve, or 21 percent, of the small webcasters reported that they had economic arrangements in 2003 with a coalition of webcasters. Such coalitions have formed to help webcasters market themselves to advertisers. Seven percent of the small webcasters reported that in 2003 they had economic arrangements with companies that sell advertising based on the aggregate audience of multiple webcasters. 
When an advertiser purchases advertising space from a webcaster, the advertiser is purchasing the chance to present a message to as many listeners as possible. While some webcasters are small and may not have enough listeners to attract advertisers on their own, they have entered into arrangements with companies that sell advertising space based on an aggregate audience of multiple webcasters. Other arrangements included those with parent and sister companies and with corporate sponsors. Data obtained from small webcasters that agreed to the terms of the small webcaster agreement suggest that to date the overall effect of their economic arrangements with third parties on royalties owed to copyright owners has been minimal. Most of these small webcasters owed the minimum royalty fee for either or both of the time periods for which payments were to be made. Because royalty obligations for these webcasters are based on a percentage of their revenues or expenses, or a minimum fee, whichever is greater, accurate reporting is essential to ensure the appropriate payment of royalties. We found only limited evidence to suggest small webcasters might not be doing so. The majority of small webcasters we interviewed that had agreed to the royalty terms in the small webcaster agreement owed royalties equal to the minimum fee because they did not generate revenue or incur expenses sufficient to exceed the thresholds for owing royalties above the minimum fee. Nineteen, or 70 percent, of the 27 small webcasters that provided us with financial information reported revenue and expense estimates that were below the levels that would result in royalty payments above the minimum fee for one or both of the time periods for which payments were to be made—the historical period, which began on October 28, 1998, and ended on December 31, 2002, and 2003 (see table 2). The specific revenue and expense thresholds vary, in part, depending on when the webcaster began operating. The revenue threshold ranged from $25,000 for those that began operating in 2002 to more than $100,000 for those that began in 1998, while the expense threshold ranged from $40,000 to $170,000. For the period 2003 to 2004, the revenue threshold varied in relation to anticipated revenue, with the threshold at $20,000 for those earning less than $50,000 and at $50,000 for those earning more. During the same time period, the expense threshold was $28,571 for those earning less than $50,000 and $71,429 for those earning more. Most small webcasters reported revenues or expenses that were less than half the thresholds required for royalty payments to exceed the minimum fee. Eight small webcasters owed royalties that were based on either revenues or expenses and exceeded the minimum fee. Five owed $3,000 or less above the minimum fee and one owed $5,000 above the minimum fee. The remaining two small webcasters owed more than three times the amount of the minimum fee. These two owed royalties based on their revenues in both time periods. One webcaster attributed much of its revenue to a relationship with an online retailer, while the other received revenue from an Internet service provider that offered its customers the option of including the webcast as an additional service. 
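Whether a webcaster owes more than the minimum fee follows directly from the structure described above: the royalty is the greater of a percentage of revenue, a percentage of expenses, or the minimum fee, and the thresholds mark the points at which the percentage-based amounts overtake the minimum. The sketch below illustrates that logic for the 2003 terms. The percentage rates shown are not stated in this report; they are back-calculated from the thresholds and minimum fees given above (for example, $2,000 divided by $20,000 suggests roughly 10 percent of revenue), so they should be read as assumptions rather than the agreement's actual terms.

```python
def royalty_owed_2003(gross_revenue: float, expenses: float) -> float:
    """Illustrative royalty calculation: greater of a share of revenue, a share
    of expenses, or the minimum fee. Rates are assumptions back-calculated from
    the thresholds and minimum fees described in the text, not the agreement's
    official terms."""
    small = gross_revenue <= 50_000
    minimum_fee = 2_000 if small else 5_000
    revenue_rate = 0.10  # assumed: $2,000 / $20,000 and $5,000 / $50,000
    expense_rate = 0.07  # assumed: $2,000 / $28,571 and $5,000 / $71,429
    return max(minimum_fee, revenue_rate * gross_revenue, expense_rate * expenses)

# A webcaster well under the thresholds owes only the minimum fee...
print(royalty_owed_2003(gross_revenue=8_000, expenses=15_000))    # 2000.0
# ...while one above them owes the percentage-based amount.
print(royalty_owed_2003(gross_revenue=120_000, expenses=40_000))  # 12000.0
```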
The specific minimum fee applicable to any individual small webcaster varied in the first period in relation to when it began transmitting, ranging from a low of $500 for a webcaster that operated only in 1998 to $8,500 for one that was operating for all or part of each year from October 28, 1998, through December 2002. The minimum fee for the period 2003 to 2004 varied in relation to anticipated revenue and was $2,000 for small webcasters earning $50,000 or less and $5,000 for those earning more. Reporting revenue and expenses in accordance with the small webcaster agreement is important to help ensure the proper payment of royalties. Under the agreement, all money earned and all expenses incurred, with certain exceptions, are to be reported for purposes of calculating royalties. For example, small webcasters may exclude revenues from the sale of recordings or assets such as land or buildings and such expenses as royalties paid, the cost of recordings used in the webcast, and the value of residential space used in the operation of the webcasting service. Transactions that do not involve the exchange of money but result in the webcaster receiving something of value are to be included in statements of revenues or expenses. The value of the goods or services received is to be included in the small webcaster’s revenue, and any goods or services the small webcaster offered in exchange are to be reported as expenses. For example, if a small webcaster received free bandwidth, the value of that service should be included as revenue. In some cases, small webcasters contract with an advertising firm that forwards a portion of the advertising sales to the small webcaster and retains a portion as commission. In these cases, the small webcaster is to report the money it received as revenue and the portion retained by the advertising firm as an expense. Although the extent to which small webcasters comply with the agreed-upon guidance for reporting revenues and expenses could not be determined without a detailed review of their financial records, small webcasters that elect to follow the terms of the small webcaster agreement subsequently certify that the figures they report to copyright owners are accurate under penalty of law. Copyright owners have the right to initiate a detailed review of financial records to verify the accuracy of the reported figures. However, an attorney representing copyright owners said that, to his knowledge, no such reviews have been conducted. We found limited evidence to suggest that small webcasters might not be reporting revenues and expenses as agreed. Specifically, while 13 of the small webcasters interviewed said they had received goods or services at no charge, 2 reported having no revenue, although they had received free bandwidth. In each case, however, these small webcasters reported revenue and expense estimates that were well below the revenue and expense threshold, and both were subject to the minimum fee for both the period from 1998 to 2002 and for 2003. Although the majority of small webcasters that we interviewed reported revenues and expenses that were substantially below the levels required to pay a royalty above the minimum fee, this may change as the industry matures. Revenues and expenses of small webcasters might increase as they attract more listeners, and advertising opportunities and rates may also increase as the webcasting industry matures and advertisers rely more on the Internet as part of their advertising efforts.
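The reporting rules just described (count the fair value of free or reduced-price goods as revenue, count anything given in exchange as an expense, and split advertising proceeds between the amount actually received and the commission retained) lend themselves to a simple worked illustration. The sketch below is hypothetical: the dollar values are invented and the bookkeeping is reduced to the two rules stated above rather than the agreement's full guidance.

```python
from dataclasses import dataclass

@dataclass
class RoyaltyReport:
    """Toy ledger applying the two in-kind reporting rules described in the text."""
    revenue: float = 0.0
    expenses: float = 0.0

    def record_barter(self, value_received: float, value_given: float = 0.0) -> None:
        # e.g., free bandwidth received in exchange for on-air mentions of the provider
        self.revenue += value_received
        self.expenses += value_given

    def record_agency_ad_sale(self, gross_sale: float, commission: float) -> None:
        # The webcaster reports only the money it actually received as revenue
        # and the agency's retained commission as an expense.
        self.revenue += gross_sale - commission
        self.expenses += commission

report = RoyaltyReport()
report.record_barter(value_received=1_200, value_given=300)       # hypothetical free bandwidth
report.record_agency_ad_sale(gross_sale=5_000, commission=1_500)  # hypothetical ad placement
print(report)  # RoyaltyReport(revenue=4700.0, expenses=1800.0)
```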
Two trends that may affect the amount of royalties that small webcasters may have to pay in the future include growth in audience size and growth in advertising. The number of Americans listening to Internet transmissions nearly tripled between 2000 and 2003, and about 40 percent of Americans have listened to webcasts, including Internet transmissions of over-the-air radio programming, at least once, according to recent reports by an international media and marketing research firm. Industry analysts expect this growth to continue. Small webcasters that we interviewed also reported growth in the sizes of their audiences. Thirty-six, or 76 percent, of the small webcasters that we interviewed that started webcasting before January 2002 said their audience size had increased, although they could not quantify the extent of the increase (see fig. 8). As mentioned earlier, the small webcasters that we interviewed indicated that they depend upon advertising as a primary source of revenue. According to an estimate from one of the reports cited above, if the aggregate webcast audience could be “sold” to advertisers as if it were an over-the-air radio network, it could generate up to $54 million per year in advertising revenue. However, according to industry analysts, webcasters have the potential to increase their advertising revenues over current levels, in part because they have the ability to provide demographic information about their listeners, which allows advertisers to more accurately target advertisements to potential consumers. We will send copies of this report to the appropriate House and Senate committees; interested Members of Congress; the Librarian of Congress; and to the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841. Other major contributors to this report are listed in appendix III. As required by the Small Webcaster Settlement Act of 2002, we conducted a study in consultation with officials from the U.S. Copyright Office in the Library of Congress to determine (1) the economic arrangements between small commercial webcasters and third parties and (2) how those arrangements affect royalties due to copyright owners and performers. We consulted officials from the U.S. Copyright Office throughout the course of our work and incorporated the suggestions and comments we obtained into our report as appropriate. To respond to the objectives set out in the act, we met with officials from the U.S. Copyright Office, the Library of Congress, and representatives of organizations that represent copyright owners. In addition, we interviewed staff from businesses that provide advertising and other services to small webcasters and industry analysts. We also reviewed relevant copyright laws, regulations, and articles. To obtain information from small webcasters, we developed a structured interview. We pretested the content and format of this interview with 6 webcasters. During these pretests we asked the small webcasters to assess whether the questions were clear and unbiased and whether the terms were accurate and precise. We made changes to the interview protocol based on pretest results. 
We conducted the interview via telephone with 58 small webcasters located throughout the country—30 who had agreed to the terms of the agreement reached by copyright owners and small webcasters under the Small Webcaster Settlement Act and 28 who had not. The U.S. Copyright Office is not required to and does not maintain a list of small webcasters. As a result, to identify the universe of small webcasters, we obtained a list from SoundExchange of 35 small webcasters that had elected to follow the terms of the small webcaster agreement. We subsequently learned that one of the webcasters had determined that it was not eligible to follow the terms of the agreement because the station made too much revenue, and a second webcaster operated only as a subscription service. A third webcaster informed us that it had ceased operating in December 2000. SoundExchange later sent us an updated list that included 3 additional small webcasters. Thus, the number of eligible small webcasters that were operating as of October 2003 and had agreed to follow the terms of the agreement was 35. We completed interviews with 30 of these small webcasters for a response rate of 85.7 percent. We also interviewed small webcasters that did not elect to follow the terms of the small webcaster agreement. We obtained a list of 121 names of small webcasters from BRS Media (a private firm that maintains a list of Internet broadcasting firms). To the best of our knowledge, this encompassed all small webcasters operating in the United States. Of these 121 webcasters, 28 were no longer operating or did not appear to meet the definition of an eligible small commercial webcaster, and 4 had signed the small webcaster agreement (and thus were on the list of “signers”). We attempted to reach the remaining 89 small webcasters. Forty-two small webcasters were contacted. Interviews were completed with 28 that met the criteria of “small webcaster.” Fourteen webcasters refused to be interviewed. We were not able to contact the remaining 47 webcasters, although we made repeated attempts and left messages when we could. We did not calculate the response rate for the group of small webcasters that did not sign the agreement because we did not know how many of those not interviewed were eligible small webcasters, and we did not have enough information to reasonably estimate the percentage that might be eligible. To protect the confidentiality of the small webcasters we interviewed, we randomly assigned each an identification number and documented their responses to our interview questions with the identification number. During the interviews, we asked the 58 small webcasters about economic arrangements they had with third parties, whether they were currently receiving or had previously received any free or reduced-price goods or services, and requested estimates of their revenues, expenses, and third party revenues. For those small webcasters that had signed the election form to follow the terms of the small webcaster agreement, we asked for their reasons for doing so. For those that had not signed the election form, we asked for their reasons for not doing so. For many of the questions, we asked small webcasters to provide separate responses for two different periods to correspond with the reporting periods contained in the agreement—the historical period, which began on October 28, 1998, and ended on December 31, 2002, and 2003 to 2004. 
We asked small webcasters to provide information through the date of our interviews, which were conducted in November and December 2003. We also asked each of the 30 small webcasters we interviewed who had elected to follow the terms of the small webcaster agreement to sign a release form allowing us to obtain access to the financial records they had submitted to SoundExchange. We obtained signed release forms from 25 of the 30 (83.3 percent) small webcasters. A representative of SoundExchange subsequently informed us that it had no financial information for 9 of these 25 small webcasters. We reviewed the information the 16 remaining small webcasters had provided to SoundExchange to determine whether it was comparable to the information they had provided to us. To assess the effect that economic arrangements between small webcasters and third parties have on the royalties due to copyright owners and performers, we used financial information obtained during our interviews with 27 of the 30 small webcasters that elected to follow the terms of the agreement. Three of the 30 small webcasters declined to provide any financial information. We calculated the threshold revenue amounts that each of the 27 small webcasters would have had to exceed to owe more than the minimum royalty fee. These revenue amounts were calculated for both time periods—October 28, 1998 through December 31, 2002, and for 2003—and were based on the length of time the small webcaster had been in operation. We then estimated the amount that each small webcaster owed in royalties for each of the two time periods based on the revenue and expense data that they provided to us. For small webcasters that did not report revenue or expense estimates for the entire year, we used their average monthly revenues to project their yearly gross revenue and/or expenses. These estimated values were compared to the threshold amounts and allowed us to determine whether the small webcasters were subject to royalty payments above the minimum fee.
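The projection step described above, annualizing partial-year figures from average monthly revenue before comparing them with the thresholds, can be sketched as follows. This is a minimal illustration with invented numbers, not GAO's actual calculation.

```python
def project_annual(monthly_amounts):
    """Project a full-year total from the months reported so far."""
    average_month = sum(monthly_amounts) / len(monthly_amounts)
    return average_month * 12

# Hypothetical webcaster reporting only seven months of 2003 revenue.
reported = [900, 1_100, 1_000, 1_250, 1_400, 1_300, 1_550]
projected_revenue = project_annual(reported)
revenue_threshold = 20_000  # 2003 threshold for webcasters anticipating $50,000 or less
print(f"Projected 2003 revenue: ${projected_revenue:,.0f}")
print("Above minimum fee?", projected_revenue > revenue_threshold)
```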
[The appendix tables summarizing the structured interview responses are not reproduced here because they did not survive conversion to text; only the question headings and sample sizes remained legible. The tables covered when the webcasters began operating, audience characteristics and tracking, subscription offerings, types of advertising used and how advertising space was sold, economic arrangements and methods of payment with third parties, goods and services received free or at reduced prices, estimates of gross revenue and expenses, participation in the negotiations leading to the Small Webcaster Settlement Act, reasons for electing or not electing to pay royalties under the small webcaster agreement, and estimates of revenues earned by third parties.]
Stephen M. Brown, Jason Jackson, Jonathan McMurray, Lynn Musser, Deborah Ortega, Janice Turner, and Mindi Weisenbloom made key contributions to this report. | The emergence of webcasting as a means of transmitting audio and video content over the Internet has led to concerns about copyright protection and the payment of royalties to those who own the recording copyrights. Arriving at an acceptable rate for calculating royalties has been particularly challenging. Under the Small Webcaster Settlement Act of 2002, small commercial webcasters reached an agreement with copyright owners that included the option of paying royalties for the period of October 28, 1998, to December 31, 2004, on the basis of a percentage of their revenues, expenses, a combination of both, or a minimum fee rather than paying the royalty rates set by the Librarian of Congress. During debate on the act, copyright owners raised concerns that small webcasters might have arrangements with other parties, such as advertisers, that could produce revenues or expenses that might not be included in their royalty calculations. In this context, the Congress mandated that GAO, in consultation with the Register of Copyrights, prepare a report on the (1) economic arrangements between small webcasters and third parties and (2) effect of those arrangements on the royalties that small webcasters might owe copyright owners. Small webcasters have a variety of economic arrangements with third parties, the most common being agreements with bandwidth providers and advertisers. Almost all of the webcasters that we interviewed reported arrangements with bandwidth providers, and many reported arrangements with advertisers.
Less commonly reported arrangements included those with merchandise suppliers and companies that help small webcasters manage or obtain advertising for their Web sites, such as by inserting ads on the Web site or into the webcast itself or selling advertising based on the aggregate audiences of multiple webcasters. Third-party economic arrangements have had a minimal effect to date on royalties owed by small webcasters to copyright owners. Of the 27 small webcasters we interviewed that had agreed to the terms of the small webcaster agreement and provided us with financial data, 19 reported revenue and expense estimates below the levels that would result in royalty payments greater than the minimum fee. We found limited evidence to suggest that small webcasters may not be reporting revenues and expenses as required by the small webcaster agreement. Specifically, 2 of the 13 small webcasters who reported receiving free or reduced-price items did not report the value of these items as revenue for calculating royalties. However, the data we obtained in our survey may not reflect conditions that could develop as the webcasting industry matures. According to industry analysts, revenues of small webcasters are likely to increase as they attract more listeners and advertisers rely more on the Internet to reach customers. |
In the early 1970s, BIA began giving tribes more training, involvement, and influence in BIA’s budget process, in efforts that evolved into TPA. At that time, according to BIA officials, few tribes were experienced in budgeting or contracting, and most depended on BIA for services. Over the years, tribes have become more experienced and sophisticated in TPA budgeting, are more involved in directly contracting and managing their TPA activities, and have more flexibility in shifting funds between activities within TPA. Since 1991, through amendments to the Indian Self-Determination and Education Assistance Act, 206 tribes have entered into self-governance agreements with the federal government. Under the terms of these agreements, the tribes assume primary responsibility for planning, conducting, and administering programs and services—including those activities funded under TPA. Of the $757 million in TPA funds that the Congress appropriated in fiscal year 1998, about $507 million was for base funding, and about $250 million was for non-base funding. Base funding was distributed in three components: $468 million generally on the basis of historical funding levels, $16 million to supplement funding for “small and needy” tribes, and $23 million in a general funding increase. According to Interior officials, how TPA base funds for tribes were initially determined is not clearly documented, and adjustments may have been made over time in consideration of specific tribal circumstances. While most increases in the TPA budget prior to the 1990s resulted from congressional appropriations for specific tribes, subsequent increases have generally been distributed on a pro rata basis. The $468 million in base funds may be used by tribes for such activities as law enforcement, social services, and adult vocational training. Tribes may move these funds from one TPA activity to another. In 1998, the Congress appropriated TPA funds for BIA to supplement historical distribution levels for “small and needy” tribes; as a result, $16 million in additional base funds was distributed to 292 tribes. The designation “small and needy” was developed by the Joint Tribal/BIA/DOI Advisory Task Force on Bureau of Indian Affairs Reorganization in 1994.The task force recommended that tribes with service populations of less than 1,500 have available minimum levels of TPA base funds—$160,000 in the lower 48 states and $200,000 in Alaska—to allow them to develop basic self-government capacity. Because some small tribes were receiving less than $160,000, the Congress directed BIA to supplement TPA base funds with the 1998 distribution so that each of these tribes would receive $160,000. For fiscal year 1999, BIA has requested an additional $3 million to move the “small and needy” tribes in Alaska closer to the task force-recommended minimum funding level of $200,000. The $23 million general increase in base funds was evenly distributed among BIA’s 12 area offices, as recommended in January 1998 by a special task force assembled under the 1998 Interior Appropriation bill. Each equal portion was subsequently distributed to tribes and BIA offices according to various considerations. For example, the tribes in BIA’s Sacramento area each received an equal share of the area office’s $1.95 million allocation. The tribes in BIA’s Juneau area each received $4,000, and the remainder was distributed on the basis of population and TPA base funding levels. 
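The distribution mechanics described above, topping up eligible small tribes to a minimum base level and dividing the general increase equally among the 12 area offices, reduce to straightforward arithmetic. The sketch below illustrates both steps; the individual tribes' figures are hypothetical, and only the $160,000 minimum, the population cutoff of 1,500, and the 12-way split of the $23 million increase come from the text.

```python
SMALL_AND_NEEDY_MINIMUM = 160_000   # lower 48 states; $200,000 recommended for Alaska
GENERAL_INCREASE = 23_000_000       # fiscal year 1998 general increase
AREA_OFFICES = 12

def small_and_needy_supplement(base_funding: float, service_population: int) -> float:
    """Supplement that brings an eligible small tribe up to the minimum base level."""
    if service_population < 1_500 and base_funding < SMALL_AND_NEEDY_MINIMUM:
        return SMALL_AND_NEEDY_MINIMUM - base_funding
    return 0.0

# Hypothetical tribes: (current TPA base funds, service population)
tribes = [(95_000, 400), (150_000, 1_200), (210_000, 900), (300_000, 5_000)]
supplements = [small_and_needy_supplement(base, pop) for base, pop in tribes]
print("Supplements:", supplements)  # [65000.0, 10000.0, 0.0, 0.0]
print("Each area office's equal share of the general increase:",
      GENERAL_INCREASE / AREA_OFFICES)  # $23 million split 12 ways
```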
The remaining $250 million is non-base funds and is generally distributed according to specific formulas that consider tribal needs. In general, tribes may not shift these funds to other activities without special authorization. Road maintenance, housing improvement, welfare assistance, and contract support are all included in this category. For example, road maintenance funds are distributed to BIA’s area offices based on factors such as the number of miles and types of roads within each area. Housing improvement funds are distributed to area offices on the basis of an inventory of housing needs that includes such things as the number of units in substandard condition and the number of units needing renovation or replacement. As of March 1998, 95 percent of the $757 million in TPA funds had been distributed among the tribes and BIA offices. Our per capita analysis shows that the distributions ranged from a low of $121 per tribal member within BIA’s Muskogee area to a high of $1,020 within the Portland area. However, according to Interior officials, there are reasons for the differences in TPA distributions and the differences should not all be perceived as inequities. For example, BIA is required to fund law enforcement and detention in states that do not have jurisdiction over crimes occurring on Indian lands, so tribes located in those states may receive more TPA funds for these purposes than tribes located in other states. Similarly, BIA has a trust responsibility for natural resources on reservations, so tribes that have large land bases may receive more TPA funds for this purpose than tribes with small land bases. Furthermore, tribes with self-governance agreements may include funds in their TPA base amount that are not included for tribes without self-governance agreements. BIA officials also noted that they do not consider the service population figures, which are estimated by tribes, to be reliable—although they did not offer other figures that they believed to be more accurate. They also noted that TPA funds are distributed to tribes, rather than individuals, and that a lower per capita figure may reflect that tribes in one area have larger memberships but smaller land bases than tribes in another area. Appendix I presents the distributions and per capita analyses for BIA’s area offices. The remaining 5 percent of TPA funds not distributed to tribes includes $30 million, primarily for welfare assistance and contract support, that will be distributed later in the fiscal year on the basis of tribal need. While most of the contract support and welfare assistance funds are distributed on the basis of the prior year’s expenditures, between 15 and 25 percent is withheld until later in each fiscal year, when tribes’ actual needs are better known. An additional $9 million not distributed to tribes is for other uses, including education funding to non-tribal entities (such as states and public schools) and payments for employees displaced as a result of tribal contracting. Nonfederal entities—including tribes—meeting the federal assistance thresholds for reporting under the Single Audit Act (those receiving at least $100,000 in federal funds before 1997 and those expending at least $300,000 in 1997 or later) must submit an audited general-purpose financial statement and a statement of federal financial assistance. We examined all 326 financial statements on file with Interior that were most recently submitted by tribes; these statements generally covered fiscal years 1995 or 1996. 
The tribes’ financial statements varied in the type and amount of information reported. While some statements included only federal revenues, others also included revenues from state, local, and private sources; some included financial information only for tribal departments that expended federal funds, while others provided more complete reporting on their financial positions. In total, the statements reported that these tribes received more than $3.6 billion in revenues during the years covered by them. These revenues included such things as taxes and fees, lease and investment income, and funds received through governmental grants and contracts. About half of the financial statements we examined also included some information on tribal businesses. Tribal businesses include, for example, gaming operations, smokeshops or convenience stores, construction companies, and development of natural resources such as minerals or timber. The tribes that reported the results of their businesses had operating income totaling over $1.1 billion. Not all of these tribes reported a profit, however—about 40 percent reported operating losses totaling about $50 million. The reliability of the general-purpose financial statements we reviewed varied. Of the 326 we reviewed, 165—or about half—of the statements were certified by independent auditors as fairly presenting the financial position of the reporting entity and received “unqualified” auditors’ opinions. However, auditors noted that 38 of the “unqualified” statements were limited to certain funds and were not intended to represent the financial position of the tribe as a whole. The independent auditors’ opinions for the remaining financial statements indicated that the statements were deficient to varying degrees. Tribes with gaming operations are required under the Indian Gaming Regulatory Act to submit annual financial reports to the National Indian Gaming Commission. In 1997, we reported that 126 tribes with class II and class III gaming operations (which include bingo, pull-tabs, slot machines, and other casino games) reported a total of about $1.9 billion in net income from their gaming operations in 1995. About 90 percent of the gaming facilities included in that report generated net income, and about 10 percent generated net losses. Because the financial statements we examined covered different fiscal years and did not always include gaming revenues, we did not attempt to reconcile them to information reported to the Gaming Commission. In deciding whether to consider tribal revenues or business income in order to determine the amount of TPA funds tribes should receive, information that might be useful to the Congress could include (1) financial information for all tribes, including those tribes not submitting reports under the Single Audit Act; (2) more complete information on the financial resources available to tribes from tribal businesses, including gaming; and (3) more reliable data on tribes’ financial positions. However, there are several impediments to obtaining this information. For fiscal year 1997 and later, nonfederal entities (including tribes) expending less than $300,000 in federal funds are not covered by the Single Audit Act. Tribes reporting under the act do not have to report financial information for their tribal businesses if those businesses do not receive, manage, or expend federal funds. 
Interior officials also noted that under the terms of the Alaska Native Claims Settlement Act, Congress established for-profit native corporations as separate legal entities from the non-profit arms that receive federal financial assistance; for this reason, financial information on the for-profit arms would not be reported under the Single Audit Act. Further, financial information submitted by Alaskan villages that have formed an association or consortium or operate under self-governance agreements reflects only the operations of the umbrella organization and does not provide information regarding the separate tribal governments. Interior officials further noted that some tribes that meet the reporting threshold of the act have not submitted financial statements annually as required, or have not submitted them in a timely manner, and that BIA has few sanctions to encourage these tribes to improve their reporting. Finally, the financial statements we examined included a range of auditors’ opinions, and the reliability of the information in the statements varied. Mr. Chairman, this concludes my prepared statement. I will be pleased to respond to any questions that you or Members of the Subcommittee may have. We obtained information about (1) BIA’s bases for distributing 1998 TPA funds; (2) distributions of TPA funds in fiscal year 1998; (3) revenue and business income reported by tribes under the Single Audit Act; and (4) additional revenue and income information that might be useful to the Congress in deciding whether to distribute TPA funds considering total financial resources available to tribes. We contacted officials with the Department of the Interior’s Bureau of Indian Affairs, Office of Audit and Evaluation, and Office of Self-Governance in Washington, D.C., and its Office of Audit and Evaluation in Lakewood, Colorado. We analyzed distribution data provided by BIA and Office of Self-Governance officials to determine specific amounts distributed to area offices and tribes in fiscal year 1998. We did not independently verify the distribution or population data. At Interior’s Office of Audit and Evaluation in Washington, D.C. and Lakewood, Colorado, we examined all 326 of the most recent financial statements on file that were submitted under the Single Audit Act by tribes, tribal associations, and tribal enterprises. We excluded statements for some entities, such as tribal housing authorities and community colleges, because they are financially separate from the tribes. Of the 326 financial statements, 290 were for federally recognized tribes, 20 were for tribal businesses or components of tribes, 14 were for consortia or associations representing over 170 individual tribes, and 2 were for tribes not federally recognized. From each of the financial statements we examined, we obtained information about the independent auditor’s opinion, revenues for all fund types reported, and operating income for tribes that included tribal business information in their statements. We performed our review from November 1997 through April 1998 in accordance with generally accepted government auditing standards. | GAO discussed the preliminary results of its review of the Bureau of Indian Affairs' (BIA) distribution of Tribal Priority Allocation (TPA) funds, focusing on: (1) BIA's basis for distributing 1998 TPA funds; (2) total distributions of TPA funds in fiscal year (FY) 1998 and a per-capita analysis of those distributions; (3) revenue and business income information reported by tribes under the Single Audit Act; and (4) what additional revenue and income information may be useful to Congress in deciding whether to distribute TPA funds to tribes. GAO noted that: (1) two-thirds of the 1998 TPA funds were distributed primarily on the basis of historical levels, and tribes may shift these base funds among TPA activities according to their needs; (2) the remaining one-third, known as non-base funds, are used for such activities as road maintenance and housing improvement and were generally distributed on the basis of specific formulas; (3) in total, 95 percent of the TPA funds appropriated in FY 1998 have been distributed; (4) average TPA distributions varied widely among BIA's 12 area offices when analyzed and compared on a per-capita basis; (5) the per-capita averages ranged from $121 per tribal member within BIA's Muskogee area to $1,020 per tribal member within BIA's Portland area; (6) according to Interior officials, there are reasons for differences in TPA distributions, and they do not consider the population estimates to be reliable; (7) nonfederal entities--including tribes--meeting certain federal assistance thresholds must submit audited financial statements annually under the Single Audit Act; (8) GAO reviewed all 326 financial statements on file with the Department of the Interior that were most recently submitted by tribes; the statements generally covered fiscal years 1995 or 1996; (9) while some tribes reported only their federal revenues, others included revenues from state, local and private sources; (10) in total, the statements reported that these tribes received more than $3.6 billion in revenues during the years covered by them; (11) these revenues included such things as taxes and fees, lease and investment income, and funds received through governmental grants and contracts; (12) some tribes also reported income from their businesses for the periods covered by the statements; (13) however, the quality of the information reported in the statements varied; only about half of the statements received unqualified opinions from auditors, while the others were deficient to varying degrees; (14) in deciding whether to consider tribal revenues or business income in distributing TPA funds, information that might be useful to Congress could include more complete and reliable financial information for all tribes; (15) however, there are several impediments to obtaining this information; and (16) for example, under the Single Audit Act, financial statements must be submitted by those nonfederal entities expending at least $300,000 of federal
funds in a year and may not include income from tribes' businesses. |